Many problems in chemistry and biology involve data with geometric structure: molecules have 3D coordinates, proteins have orientations, and physical forces transform predictably under rotation. Geometric deep learning addresses this by building symmetries directly into model architectures. Notes in this section cover equivariant networks, focusing on SO(3) and SE(3) equivariance achieved through group representation theory and spherical harmonics. The goal is to understand how these models work and why the symmetry constraints matter for data efficiency and physical correctness.
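The core constraint these models enforce can be stated concretely: a layer f is rotation-equivariant when rotating its input rotates its output the same way, f(Rx) = R f(x). Below is a minimal numpy sketch (not from any of the papers listed here) of a toy equivariant map that scales each 3D point by a function of its rotation-invariant norm, together with a numerical check of the equivariance property:

```python
import numpy as np

def random_rotation(rng):
    # Draw a random 3x3 rotation matrix via QR decomposition,
    # fixing signs so that det(Q) = +1 (a proper rotation).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def equivariant_layer(x):
    # Toy "type-1" map: scale each point by a nonlinearity of its norm.
    # Norms are invariant under rotation, so the output co-rotates with
    # the input -- the defining property of an equivariant feature.
    norms = np.linalg.norm(x, axis=-1, keepdims=True)
    return x * np.tanh(norms)

rng = np.random.default_rng(0)
R = random_rotation(rng)
x = rng.standard_normal((5, 3))  # 5 points in 3D

# Equivariance check (row-vector convention): f(x R^T) == f(x) R^T
lhs = equivariant_layer(x @ R.T)
rhs = equivariant_layer(x) @ R.T
print(np.allclose(lhs, rhs))  # True
```

A non-equivariant layer (say, a plain MLP on raw coordinates) would fail this check, which is why such models must see every orientation in training; the symmetry constraint is what buys the data efficiency mentioned above.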

| Year | Paper | Key Idea |
|------|-------|----------|
| 2018 | 3D Steerable CNNs: Rotationally Equivariant Features | SE(3)-equivariant features via Wigner D-matrices and spherical harmonics |
| 2018 | Defining Disentangled Representations via Group Theory | First formal definition of disentanglement using symmetry group decomposition |
| 2018 | Spherical CNNs: Rotation-Equivariant Networks on the Sphere | SO(3) equivariance via a generalized Fourier transform on the sphere |
| 2019 | DGCNN: Dynamic Graph CNN for Point Clouds | EdgeConv on dynamically recomputed k-NN graphs in feature space |
| 2020 | SE(3)-Transformers: Equivariant Attention for 3D Data | Self-attention with SE(3)-equivariant type-0/type-1 features |
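To make the DGCNN entry concrete, here is a minimal numpy sketch of the EdgeConv idea: build a k-NN graph from the current features (recomputed in feature space, not fixed 3D coordinates), form edge messages from (x_i, x_j - x_i), and max-pool over neighbors. The single linear-plus-ReLU map standing in for the paper's MLP, and the names `knn_graph`/`edge_conv`, are illustrative assumptions, not the reference implementation:

```python
import numpy as np

def knn_graph(feats, k):
    # Pairwise squared distances in feature space, (N, N).
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-loops
    return np.argsort(d2, axis=1)[:, :k]    # (N, k) neighbor indices

def edge_conv(feats, idx, theta):
    # EdgeConv message on each edge: h_theta(x_i, x_j - x_i),
    # here a single linear map + ReLU standing in for the MLP,
    # aggregated by a max over the k neighbors of each point.
    x_i = feats[:, None, :]                              # (N, 1, C)
    x_j = feats[idx]                                     # (N, k, C)
    edge = np.concatenate(
        [np.broadcast_to(x_i, x_j.shape), x_j - x_i], axis=-1
    )                                                    # (N, k, 2C)
    return np.maximum(edge @ theta, 0).max(axis=1)       # (N, C_out)

rng = np.random.default_rng(1)
feats = rng.standard_normal((10, 4))       # 10 points, 4-dim features
idx = knn_graph(feats, k=3)
theta = rng.standard_normal((8, 16))       # maps 2C=8 -> C_out=16
out = edge_conv(feats, idx, theta)         # shape (10, 16)
```

In the full model this layer is stacked, and the graph is rebuilt from the new features between layers, which is what "dynamically recomputed" in the table refers to.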