Many problems in chemistry and biology involve data with geometric structure: molecules have 3D coordinates, proteins have orientations, and physical forces transform predictably under rotation. Geometric deep learning addresses this by building symmetries directly into model architectures. Notes in this section cover equivariant networks, focusing on SE(3) and E(3) equivariance achieved through group representation theory and spherical harmonics. The goal is to understand how these models work and why the symmetry constraints matter for data efficiency and physical correctness.
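The symmetry constraint these models enforce is equivariance: applying a rotation to the input and then the model gives the same result as applying the model first and rotating its output, f(Rx) = R f(x). A minimal numerical sketch of that property, using a toy map (a radial scalar times the position vector, which is equivariant by construction; the function and values here are illustrative, not from any paper):

```python
import numpy as np

def f(x):
    # Toy equivariant map: a rotation-invariant scalar (the norm)
    # scaling the position vector, so f(Rx) = |Rx| Rx = |x| Rx = R f(x).
    return np.linalg.norm(x) * x

# Rotation by 0.7 rad about the z-axis
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

x = np.array([1.0, 2.0, 3.0])

# Rotate-then-apply equals apply-then-rotate
assert np.allclose(f(R @ x), R @ f(x))
```

A network built from layers with this property needs to see each molecular conformation in only one orientation; correct behavior under all other orientations follows from the architecture rather than from augmentation.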

3D Steerable CNNs: Rotationally Equivariant Features
Weiler et al.’s NeurIPS 2018 paper introduces 3D Steerable CNNs, which achieve SE(3) equivariance through group representation theory: convolution kernels are constrained to linear combinations of learnable radial functions multiplied by spherical harmonics, so features transform predictably under rotation by construction. This eliminates the need for rotational data augmentation and improves data efficiency in scientific applications with rotational symmetry, such as molecular and protein structures.
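The kernel constraint can be illustrated with a toy point-cloud layer (a sketch in the spirit of the paper, not its actual volumetric implementation; the radial profile and all values are made up). For scalar inputs and vector (degree-1) outputs, the degree-1 spherical harmonics are proportional to the unit direction x/|x|, so an allowed kernel is a radial function times that direction. The resulting layer is equivariant to rotations and, because it only uses relative positions, invariant to translations:

```python
import numpy as np

rng = np.random.default_rng(0)

def radial(r):
    # Hypothetical radial profile; in a real model this is learned.
    return np.exp(-r**2)

def steerable_layer(pos, scalars):
    """Toy degree-0 -> degree-1 steerable layer:
    v_i = sum_{j != i} phi(|x_ij|) * (x_ij / |x_ij|) * s_j,
    where x_ij/|x_ij| plays the role of the degree-1 spherical harmonics."""
    out = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[j] - pos[i]          # relative position -> translation invariance
            r = np.linalg.norm(d)
            out[i] += radial(r) * (d / r) * scalars[j]
    return out

# Random rotation via QR decomposition (flip a column if det < 0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

pos = rng.normal(size=(5, 3))   # 5 points in 3D
s = rng.normal(size=5)          # scalar feature per point
t = rng.normal(size=3)          # random translation

v = steerable_layer(pos, s)
v_transformed = steerable_layer(pos @ Q.T + t, s)

# SE(3) equivariance: rotating and translating the input
# rotates the vector outputs and leaves them otherwise unchanged.
assert np.allclose(v_transformed, v @ Q.T)
```

Stacking layers of this form (with higher-degree harmonics mixed via Clebsch-Gordan coefficients, which this sketch omits) is what lets the full model carry equivariant features through arbitrary depth.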