Computational Biology
Side-by-side comparison showing naive SVD producing a reflected alignment versus Umeyama's corrected proper rotation

Umeyama's Method: Corrected SVD for Point Alignment

Corrects a flaw in earlier SVD-based alignment methods (Arun et al.; Horn et al.) that could yield a reflection instead of a rotation on noisy data, and provides a complete closed-form solution for similarity transformations (rotation, translation, and scale) in arbitrary dimensions.
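A minimal NumPy sketch of the corrected estimator, under our own naming; the determinant test on the SVD factors is the paper's fix, and the scale and translation follow its closed form:

```python
import numpy as np

def umeyama_alignment(src, dst, with_scale=True):
    """Least-squares similarity transform mapping src -> dst.

    src, dst: (n, d) arrays of corresponding points.
    Returns rotation R, scale c, translation t with dst ~ c * R @ src + t.
    A sketch of Umeyama (1991); variable names are our own.
    """
    n, d = src.shape
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / n
    U, D, Vt = np.linalg.svd(cov)

    # Umeyama's correction: flip the smallest singular direction
    # if U @ Vt would be a reflection (det = -1).
    S = np.eye(d)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0

    R = U @ S @ Vt                       # proper rotation, det(R) = +1
    var_src = (src_c ** 2).sum() / n
    c = np.trace(np.diag(D) @ S) / var_src if with_scale else 1.0
    t = mu_dst - c * R @ mu_src
    return R, c, t
```

With `with_scale=False` this reduces to the rigid (rotation plus translation) case.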

Generative Modeling
Diagram showing consistency models mapping points on a PF ODE trajectory to the same origin

Consistency Models: Fast One-Step Diffusion Generation

This paper introduces consistency models, a new family of generative models that map any point on a probability flow (PF) ODE trajectory to the trajectory's origin. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality, and they enable zero-shot editing tasks such as inpainting and colorization.
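A schematic sketch of the sampling procedure, assuming a trained consistency function `f(x, sigma)`; the noise levels and `sigma_min` below are illustrative placeholders, not the paper's exact schedule:

```python
import numpy as np

def consistency_sample(f, shape, sigmas, sigma_min=0.002,
                       rng=np.random.default_rng()):
    """Multistep consistency sampling (schematic).

    f(x, sigma) maps any point at noise level sigma on the PF ODE
    trajectory back to the trajectory's origin. `sigmas` is a decreasing
    list of noise levels, e.g. [80.0, 20.0, 5.0]; a single entry reduces
    this to one-step generation.
    """
    x = rng.standard_normal(shape) * sigmas[0]
    x = f(x, sigmas[0])                    # one-step estimate of the origin
    for sigma in sigmas[1:]:
        z = rng.standard_normal(shape)
        x = x + np.sqrt(sigma**2 - sigma_min**2) * z  # re-noise to level sigma
        x = f(x, sigma)                    # map back toward the origin
    return x
```

Each extra loop iteration trades one more model evaluation for sample quality.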

Generative Modeling
D3PM forward and reverse processes on a quantized swiss roll with uniform, Gaussian, and absorbing transition matrices

D3PM: Discrete Denoising Diffusion Probabilistic Models

This paper introduces Discrete Denoising Diffusion Probabilistic Models (D3PMs), which generalize diffusion to discrete state-spaces using structured Markov transition matrices. D3PMs include uniform, absorbing-state, and discretized Gaussian corruption processes, drawing a connection between diffusion and masked language models.
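A small NumPy sketch of two of the transition-matrix families, using the row-stochastic convention [Q_t]_ij = q(x_t = j | x_{t-1} = i); the vocabulary size, schedule, and mask index are arbitrary examples:

```python
import numpy as np

def uniform_transition(K, beta):
    """Stay put with prob 1 - beta, otherwise resample uniformly over K states."""
    return (1 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

def absorbing_transition(K, beta, mask_id):
    """Move to the [MASK] state with prob beta; [MASK] itself never leaves."""
    Q = (1 - beta) * np.eye(K)
    Q[:, mask_id] += beta
    return Q

# The forward marginal q(x_t | x_0) is a row of the cumulative product
# Q_bar_t = Q_1 @ Q_2 @ ... @ Q_t.
K, T = 8, 10
Q_bar = np.eye(K)
for beta in np.linspace(0.02, 0.2, T):
    Q_bar = Q_bar @ uniform_transition(K, beta)
probs_t = Q_bar[3]   # distribution of x_t given x_0 = 3
```

The absorbing-state variant is where the masked-language-model connection comes from: corrupted tokens all collapse onto a [MASK] symbol.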

Optical Chemical Structure Recognition
GraphReco system architecture showing component extraction, atom and bond ambiguity resolution, and graph reconstruction stages

GraphReco: Probabilistic Structure Recognition (2026)

GraphReco is a rule-based OCSR system with two key innovations: a Fragment Merging line-detection algorithm for precise bond identification and a Markov network for probabilistic resolution of atom and bond ambiguity during graph assembly. It achieves 94.2% accuracy on USPTO-10K, outperforming traditional rule-based systems as well as some ML-based methods.
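The paper's Fragment Merging algorithm is not reproduced here; purely as a toy illustration of the idea, the sketch below merges nearly-collinear, nearly-touching line segments into a single bond candidate. All names, thresholds, and the merging rule are hypothetical:

```python
import numpy as np

def maybe_merge(seg_a, seg_b, angle_tol=0.1, gap_tol=3.0):
    """Merge two segments if they are roughly collinear and close.

    Segments are ((x1, y1), (x2, y2)) pixel pairs. The thresholds and
    the rule itself are hypothetical stand-ins for the paper's algorithm.
    """
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    da, db = a[1] - a[0], b[1] - b[0]
    # Angle between direction vectors (bonds are undirected, take abs).
    cos = abs(da @ db) / (np.linalg.norm(da) * np.linalg.norm(db))
    if np.arccos(np.clip(cos, -1, 1)) > angle_tol:
        return None
    # Smallest endpoint-to-endpoint gap between the two segments.
    gap = min(np.linalg.norm(p - q) for p in a for q in b)
    if gap > gap_tol:
        return None
    # Merged segment spans the two most distant endpoints.
    pts = np.vstack([a, b])
    d = ((pts[:, None] - pts[None]) ** 2).sum(-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    return pts[i], pts[j]
```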

Computational Biology
3D scatter plot showing left and right point sets with rotation axis and quaternion rotation arc

Horn's Method: Absolute Orientation via Unit Quaternions

Derives the optimal rotation between two 3D point sets in closed form: the unit quaternion representing it is the eigenvector, for the most positive eigenvalue, of a 4x4 symmetric matrix built from cross-covariance sums, with the quaternion parameterization enforcing the rotation constraint exactly.
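A NumPy sketch of the construction, assuming centered, corresponding point sets; the 4x4 matrix follows Horn's definition, while the helper name and quaternion conventions are ours:

```python
import numpy as np

def horn_quaternion_rotation(left, right):
    """Optimal rotation aligning `left` to `right` (Horn, 1987 style).

    left, right: (n, 3) arrays of corresponding, already-centered points.
    Returns the 3x3 rotation matrix from the unit quaternion that is the
    eigenvector of N with the most positive eigenvalue.
    """
    M = left.T @ right                     # 3x3 matrix of cross-covariance sums
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    w_all, V = np.linalg.eigh(N)           # ascending eigenvalues
    q = V[:, -1]                           # eigenvector of the largest one
    w, x, y, z = q                         # unit quaternion -> rotation matrix
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
```

The sign ambiguity of the eigenvector is harmless: q and -q encode the same rotation.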

Computational Biology
3D scatter plot showing source points, target points, and Kabsch-aligned points overlapping the targets

Kabsch Algorithm: Optimal Rotation for Point Set Alignment

A foundational 1976 short communication presenting a direct, non-iterative method for finding the best rotation matrix between two point sets, solved as an eigenvalue problem built from their cross-covariance matrix.
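A sketch in the now-standard SVD formulation, which is algebraically equivalent to the paper's eigenvalue solution; the determinant guard against reflections comes from Kabsch's 1978 follow-up note:

```python
import numpy as np

def kabsch(P, Q):
    """Best rotation aligning P onto Q (both (n, 3), corresponding rows)."""
    Pc = P - P.mean(0)
    Qc = Q - Q.mean(0)
    H = Pc.T @ Qc                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T               # rotation with R @ P_i ~ Q_i
```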

Generative Modeling
LDM architecture diagram showing conditioning via concatenation and cross-attention

Latent Diffusion Models for High-Res Image Synthesis

This paper introduces Latent Diffusion Models (LDMs), which apply denoising diffusion in the latent space of pretrained autoencoders. By separating perceptual compression from generative learning and adding cross-attention conditioning, LDMs achieve FID 1.50 on Places inpainting and FID 3.60 on ImageNet class-conditional synthesis, with competitive text-to-image generation, at a fraction of the compute cost of pixel-space diffusion.
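A schematic of the sampling pipeline, to make the "diffusion in latent space" structure concrete. The `unet`, `vae`, and `text_encoder` interfaces and the noise schedule are illustrative assumptions, not the paper's actual API:

```python
import torch

@torch.no_grad()
def ldm_sample(unet, vae, text_encoder, prompt, shape, steps=50):
    """Schematic latent diffusion sampling with a DDIM-style update.

    unet(z, t, context) predicts the noise in latent z at step t,
    conditioned on `context` through cross-attention; vae.decode maps
    latents back to pixels. All interfaces here are assumed.
    """
    context = text_encoder(prompt)               # conditioning signal
    z = torch.randn(shape)                       # noise in latent space
    a_bar = torch.linspace(1e-4, 0.9999, steps)  # placeholder alpha-bar schedule
    for i in range(steps):
        t = steps - 1 - i                        # high noise -> low noise
        eps = unet(z, t, context)                # denoise the compressed latent
        z0 = (z - (1 - a_bar[i]).sqrt() * eps) / a_bar[i].sqrt()
        a_next = a_bar[i + 1] if i + 1 < steps else torch.tensor(1.0)
        z = a_next.sqrt() * z0 + (1 - a_next).sqrt() * eps   # DDIM step
    return vae.decode(z)                         # decode to pixel space
```

Because the loop runs in the autoencoder's compressed latent space, each UNet evaluation is far cheaper than pixel-space diffusion at the same output resolution.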

Machine Learning
Three-panel diagram showing an original sequence, its time-warped version, and the gate values derived from requiring time warping invariance

Can Recurrent Neural Networks Warp Time? (ICLR 2018)

Tallec and Ollivier show that requiring invariance to time transformations in recurrent models leads naturally to gating mechanisms, recovering key LSTM components from first principles. They also propose chrono initialization, a scheme for the gate biases that improves learning of long-term dependencies.
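The initialization itself is one line of math: b_f = log(u) with u ~ Uniform(1, T_max - 1), and b_i = -b_f. A sketch applying it to a torch.nn.LSTM, where the [i, f, g, o] bias packing and the bias_ih/bias_hh split are PyTorch details, not part of the paper:

```python
import numpy as np
import torch

def chrono_init(lstm, t_max, rng=np.random.default_rng()):
    """Chrono initialization (Tallec & Ollivier) for torch.nn.LSTM biases.

    Sets the forget-gate bias to log(u), u ~ Uniform(1, t_max - 1), and
    the input-gate bias to its negative, so the initial memory timescales
    span roughly [1, t_max].
    """
    h = lstm.hidden_size
    with torch.no_grad():
        for name, p in lstm.named_parameters():
            if "bias_hh" in name:
                p.zero_()               # fold the whole bias into bias_ih
            elif "bias_ih" in name:
                b_f = torch.as_tensor(
                    np.log(rng.uniform(1, t_max - 1, size=h)), dtype=p.dtype)
                p.zero_()               # cell/output gate biases left at zero
                p[h:2 * h] = b_f        # forget gate
                p[0:h] = -b_f           # input gate
    return lstm
```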

Machine Learning
Graph network block diagram showing input graph transformed through edge, node, and global update steps to produce an updated graph

Relational Inductive Biases in Deep Learning (2018)

Battaglia et al. argue that combinatorial generalization requires structured representations, systematically analyze the relational inductive biases in standard deep learning architectures (MLPs, CNNs, RNNs), and present the graph network as a unifying framework that generalizes and extends prior graph neural network approaches.
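A minimal sketch of one graph network block with sum aggregation, following the paper's update order (edges, then nodes, then the global attribute); the function names and NumPy framing are ours:

```python
import numpy as np

def gn_block(V, E, senders, receivers, u, phi_e, phi_v, phi_u):
    """One graph network block (schematic).

    V: (n_v, d_v) node features, E: (n_e, d_e) edge features,
    senders/receivers: (n_e,) int arrays, u: (d_u,) global features.
    phi_* are update functions (e.g. small MLPs); sums play the role
    of the aggregators.
    """
    senders, receivers = np.asarray(senders), np.asarray(receivers)
    # 1. Update every edge from its features, endpoint nodes, and u.
    E2 = np.stack([phi_e(E[k], V[senders[k]], V[receivers[k]], u)
                   for k in range(len(E))])
    # 2. Aggregate incoming edges per node, then update each node.
    V2 = np.stack([phi_v(E2[receivers == i].sum(0), V[i], u)
                   for i in range(len(V))])
    # 3. Update the global attribute from aggregated edges and nodes.
    u2 = phi_u(E2.sum(0), V2.sum(0), u)
    return V2, E2, u2
```

Fixing which of the three update functions are used, and what they condition on, recovers many prior architectures (message-passing networks, relation networks, deep sets) as special cases.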

Machine Learning
SE(3)-Transformer architecture showing invariant attention weights modulating equivariant value messages on a 3D point cloud

SE(3)-Transformers: Equivariant Attention for 3D Data

Fuchs et al. introduce the SE(3)-Transformer, which combines self-attention with SE(3)-equivariance for 3D point clouds and graphs. Invariant attention weights modulate equivariant value messages built from tensor field network filters, relaxing those filters' angular constraints and enabling data-adaptive, anisotropic processing.
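A toy NumPy sketch of just that separation, for type-1 (vector) outputs: attention logits built only from invariants, values built from relative positions. It is rotation-equivariant by construction but omits the paper's spherical-harmonic filters, multiple feature types, and attention heads:

```python
import numpy as np

def toy_equivariant_attention(x, feats, w_q, w_k, w_v):
    """Invariant attention weights times equivariant vector values.

    x: (n, 3) coordinates; feats: (n, d) invariant scalar features;
    w_q, w_k: (d, d) projections; w_v: (d,) weights. The logits use only
    invariants (scalar features and pairwise distances), so rotating the
    input rotates the output exactly: f(x @ R.T) = f(x) @ R.T.
    """
    q, k = feats @ w_q, feats @ w_k
    rel = x[None, :, :] - x[:, None, :]          # (n, n, 3): r_j - r_i
    dist = np.linalg.norm(rel, axis=-1)          # invariant distances
    logits = q @ k.T - dist                      # invariant attention logits
    logits -= logits.max(-1, keepdims=True)
    a = np.exp(logits); a /= a.sum(-1, keepdims=True)
    scale = feats @ w_v                          # invariant per-node gain
    # Equivariant type-1 messages: scaled relative positions.
    return (a[..., None] * rel * scale[None, :, None]).sum(1)
```

Rotating the points leaves `dist`, `feats`, and hence the attention weights unchanged, while `rel` and the output rotate with the input.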

Machine Learning
Comparison of planar CNN (translation only) versus spherical CNN (SO(3)-equivariant) showing how filters rotate on the sphere

Spherical CNNs: Rotation-Equivariant Networks on the Sphere

Cohen et al. introduce Spherical CNNs that achieve SO(3)-equivariance by defining cross-correlation on the sphere and rotation group, computed efficiently via generalized FFT algorithms from non-commutative harmonic analysis.
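The core definition, in the paper's spirit (conventions lightly simplified): correlating a filter psi with a spherical signal f yields a function on the rotation group, and rotating the input merely shifts the output:

```latex
[\psi \star f](R) = \int_{S^2} \psi\!\left(R^{-1}x\right) f(x)\,\mathrm{d}x,
\qquad R \in SO(3),
\qquad
[\psi \star (L_Q f)](R) = [\psi \star f](Q^{-1}R),
```

where $[L_Q f](x) = f(Q^{-1}x)$. The generalized FFT makes the integral cheap: in the Fourier domain the correlation reduces to blockwise matrix products of the signals' spherical-harmonic and Wigner coefficients.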

Machine Learning
The three quarks of attention: activation attention (additive), output gating (multiplicative on outputs), and synaptic gating (multiplicative on weights)

The Quarks of Attention: The Building Blocks of Attention Mechanisms

Baldi and Vershynin systematically classify the fundamental building blocks of attention (activation attention, output gating, synaptic gating) by source, target, and mechanism, then prove capacity bounds showing that gating introduces quadratic terms sparsely, gaining expressiveness without the full cost of polynomial activations.
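A toy NumPy rendering of the three mechanisms on a single tanh unit; the variable names and shapes are ours, chosen only to show where the quadratic interaction enters:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

x = rng.standard_normal(8)          # input to the unit
w = rng.standard_normal(8)          # the unit's ordinary weights
g = rng.standard_normal(8)          # an attending unit's weights
G = rng.standard_normal((8, 8))     # per-weight gating projections

# 1. Activation attention (additive): the attending signal is added
#    to the target's pre-activation.
out_add = np.tanh(w @ x + g @ x)

# 2. Output gating (multiplicative on the output): the unit's output
#    is rescaled by a gate computed elsewhere.
out_gate = np.tanh(w @ x) * sigmoid(g @ x)

# 3. Synaptic gating (multiplicative on the weights): gates modulate
#    the weights themselves, introducing quadratic terms x_i * x_j.
out_syn = np.tanh((w * sigmoid(G @ x)) @ x)
```

In the synaptic case each weight w_i is multiplied by a gate that itself depends on x, so the pre-activation contains products x_i x_j: the sparse quadratic terms behind the paper's capacity results.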