The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. In this talk we focus on equivariant convolutional networks for processing spatially and temporally structured signals such as images or audio. We start by demanding equivariance with respect to global symmetries of the underlying space and proceed by generalizing the resulting design principles to local gauge transformations, thereby enabling the development of equivariant convolutional networks on general manifolds. By defining network feature spaces as spaces of feature fields, the theory of Gauge Equivariant Convolutional Networks reveals intriguing parallels with fundamental theories in physics, in particular with the tetrad formalism of general relativity. To emphasize the coordinate-free nature of gauge equivariant convolutions, we briefly discuss their formulation in the language of fiber bundles. Beyond unifying several lines of research in equivariant and geometric deep learning, Gauge Equivariant Convolutional Networks are demonstrated to yield greatly improved performance compared to conventional CNNs in a wide range of signal processing tasks.