Generative Modeling
GP-MoLFormer architecture showing large-scale SMILES input, linear-attention transformer decoder, and property optimization via pair-tuning soft prompts

GP-MoLFormer: Molecular Generation via Transformers

A 46.8M-parameter transformer for molecular generation trained on 1.1B SMILES, introducing pair-tuning for efficient …

Generative Modeling
Visualization of probability density flow from initial distribution ρ₀ to target distribution ρ₁ over time through space

Building Normalizing Flows with Stochastic Interpolants

A continuous-time normalizing flow using stochastic interpolants and a quadratic loss to bypass costly ODE …
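
The core construction, sketched schematically (notation paraphrased from Albergo & Vanden-Eijnden rather than copied exactly): an interpolant x_t bridges samples from the two densities, and the velocity field is recovered by plain least-squares regression, so no ODE ever needs to be simulated during training.

    x_t = \alpha(t)\,x_0 + \beta(t)\,x_1, \qquad x_0 \sim \rho_0,\ x_1 \sim \rho_1,\ \alpha(0) = \beta(1) = 1,\ \alpha(1) = \beta(0) = 0,
    \min_v \int_0^1 \mathbb{E}\big[\,\|v(x_t, t) - \dot\alpha(t)\,x_0 - \dot\beta(t)\,x_1\|^2\,\big]\,dt.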

Generative Modeling
Visualization comparing Optimal Transport (straight paths) vs Diffusion (curved paths) for Flow Matching

Flow Matching for Generative Modeling

A simulation-free framework for training Continuous Normalizing Flows using Conditional Flow Matching and Optimal …
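
Schematically, the Conditional Flow Matching objective regresses a network v_θ onto a per-sample target velocity along a conditional probability path, which (the paper's key result) has the same gradients as the intractable marginal objective; notation below follows Lipman et al. from memory:

    \mathcal{L}_{\mathrm{CFM}}(\theta) = \mathbb{E}_{t \sim \mathcal{U}[0,1],\; x_1 \sim q,\; x \sim p_t(x \mid x_1)} \big\| v_\theta(x, t) - u_t(x \mid x_1) \big\|^2.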

Generative Modeling
Visualization showing linear interpolation, learned ODE trajectories, and the reflow straightening process for rectified flow

Flow Straight and Fast

A unified ODE-based framework for generative modeling and domain transfer that learns straight paths for fast 1-step …
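
A minimal sketch of the rectified flow training loss in PyTorch; rf_loss and v_net are hypothetical names, with v_net assumed to be any network taking (x, t). The model regresses onto the constant velocity x1 - x0 of the straight line between paired samples; the reflow step then re-pairs data with the model's own outputs and retrains to straighten trajectories further.

    import torch

    def rf_loss(v_net, x0, x1):
        # One uniform time per example, broadcast over remaining dims
        t = torch.rand(x0.size(0), *([1] * (x0.dim() - 1)), device=x0.device)
        xt = (1 - t) * x0 + t * x1          # straight-line interpolation
        target = x1 - x0                    # constant velocity of the line
        return ((v_net(xt, t) - target) ** 2).mean()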

Generative Modeling
Denoising Score Matching Intuition - Vectors point from corrupted samples back to clean data, approximating the score

Score Matching and Denoising Autoencoders

Theoretical paper proving the equivalence between training Denoising Autoencoders and performing Score Matching on a …
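
The equivalence in one line (Vincent 2011, paraphrased): for Gaussian corruption x̃ = x + σε, the denoising score matching objective

    \mathbb{E}\,\big\| s_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \big\|^2, \qquad \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) = \frac{x - \tilde{x}}{\sigma^2},

has a regression target pointing from the corrupted sample back toward the clean one, and matches the denoising autoencoder reconstruction objective up to scaling.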

Generative Modeling
Forward and Reverse SDE trajectories showing the diffusion process from data to noise and back

Score-Based Generative Modeling with SDEs

Unified SDE framework for score-based generative models, introducing Predictor-Corrector samplers and achieving SOTA on …
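
The two equations at the heart of the framework (Song et al. 2021, building on Anderson's reverse-time result): a forward SDE destroys data into noise, and its reverse runs backward once the score ∇_x log p_t(x) is known, which is what the network s_θ(x, t) is trained to estimate.

    \text{forward:}\quad dx = f(x, t)\,dt + g(t)\,dw,
    \text{reverse:}\quad dx = \big[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\big]\,dt + g(t)\,d\bar{w}.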

Generative Modeling
Diagram comparing standard stochastic sampling (gradient blocked) vs the reparameterization trick (gradient flows)

Auto-Encoding Variational Bayes (VAE Paper Summary)

Summary of Kingma & Welling's foundational VAE paper introducing the reparameterization trick and variational …
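
A minimal sketch of the trick in PyTorch (the function name is mine, not the paper's): sampling z directly blocks gradients, but writing z as a deterministic function of (μ, σ) plus independent noise lets backpropagation reach the encoder parameters.

    import torch

    def reparameterize(mu, logvar):
        # z = mu + sigma * eps: randomness enters only through eps,
        # so gradients flow into mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps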

Generative Modeling
MNIST digit samples generated from a Variational Autoencoder latent space

Importance Weighted Autoencoders: Beyond the Standard VAE

The key difference between multi-sample VAEs and IWAEs: how taking the log of averaged importance weights creates a tighter bound on the log-likelihood.
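
The bound itself, for k importance samples (Burda et al.'s notation, reconstructed from memory): averaging the weights inside the log gives a bound that is monotonically tighter in k, whereas averaging log-weights (the multi-sample VAE) does not improve the bound.

    \mathcal{L}_k = \mathbb{E}_{z_1, \dots, z_k \sim q(z \mid x)}\Big[\log \frac{1}{k} \sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i \mid x)}\Big], \qquad \mathcal{L}_k \le \mathcal{L}_{k+1} \le \log p(x).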

Generative Modeling
Flowchart comparing VAE and IWAE computation showing the key difference in where averaging occurs relative to the log operation

IWAE: Importance Weighted Autoencoders

Summary of Burda, Grosse & Salakhutdinov's ICLR 2016 paper introducing Importance Weighted Autoencoders for tighter …

Generative Modeling
Visualization of the VAE prior hole problem showing a ring-shaped aggregate posterior with an empty center where the Gaussian prior has highest density

Contrastive Learning for Variational Autoencoder Priors

Aneja et al.'s NeurIPS 2021 paper introducing Noise Contrastive Priors (NCPs) to address the VAE 'prior hole' problem with …
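
A schematic of the NCP construction, paraphrasing the paper: the base prior p(z) is reweighted by a density ratio r(z) ≈ q(z)/p(z), where q(z) is the aggregate posterior and r is estimated by a binary classifier trained contrastively to distinguish samples of q(z) from samples of p(z); the reweighting drains prior mass from the 'hole' where no posterior mass lives.

    p^{*}(z) = \frac{1}{Z}\, r(z)\, p(z), \qquad r(z) \approx \frac{q(z)}{p(z)}, \quad q(z) = \mathbb{E}_{p_{\mathrm{data}}(x)}\big[q(z \mid x)\big].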

Generative Modeling
Variational Autoencoder architecture diagram showing encoder, latent space, and decoder

Modern PyTorch Techniques for VAEs: A Hands-On Tutorial

Learn to implement VAEs in PyTorch: ELBO objective, reparameterization trick, loss scaling, and MNIST experiments on …
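
A minimal negative-ELBO loss for a Bernoulli-decoder VAE on binarized MNIST (a sketch under those assumptions, not necessarily the tutorial's exact code); the KL term uses the closed form from Kingma & Welling's Appendix B, and reduction="sum" is one of the scaling choices such tutorials compare.

    import torch
    import torch.nn.functional as F

    def vae_loss(recon_x, x, mu, logvar):
        # Reconstruction term: Bernoulli log-likelihood as BCE
        recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
        # KL(q(z|x) || N(0, I)) in closed form
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl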

Generative Modeling
Wasserstein distance visualization showing the Earth-Mover distance concept for GAN training

GAN Objective Functions: A Comprehensive Guide

Complete guide to GAN objective functions including WGAN, LSGAN, Fisher GAN, and more. Understand which loss function to …
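
The WGAN objective in its dual (Kantorovich-Rubinstein) form, which is where the Earth-Mover picture comes from: the critic f must be 1-Lipschitz, enforced by weight clipping in the original WGAN and by a gradient penalty in WGAN-GP.

    W(p_r, p_g) = \sup_{\|f\|_{L} \le 1}\; \mathbb{E}_{x \sim p_r}\big[f(x)\big] - \mathbb{E}_{x \sim p_g}\big[f(x)\big].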