
GP-MoLFormer: Molecular Generation via Transformers
A 46.8M-parameter transformer for molecular generation, trained on 1.1B SMILES and introducing pair-tuning for efficient …

A continuous-time normalizing flow using stochastic interpolants and quadratic loss to bypass costly ODE …

A simulation-free framework for training Continuous Normalizing Flows using Conditional Flow Matching and Optimal …

A unified ODE-based framework for generative modeling and domain transfer that learns straight paths for fast 1-step …
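
The flow-based entries above share essentially the same regression target. Below is a minimal PyTorch sketch of the linear-path conditional flow matching / rectified flow objective; the `velocity_model(xt, t)` signature and batch layout are assumptions for illustration, not any paper's exact API.

```python
import torch

def flow_matching_loss(velocity_model, x1):
    """One training step of the linear-path conditional flow matching /
    rectified flow objective: regress a velocity field onto (x1 - x0)
    along the straight interpolation x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(x1)                             # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))  # t ~ U[0, 1], broadcastable
    xt = (1 - t) * x0 + t * x1                            # point on the straight path
    target = x1 - x0                                      # constant velocity of that path
    pred = velocity_model(xt, t)                          # hypothetical model interface
    return ((pred - target) ** 2).mean()                  # simple quadratic loss
```

Because the conditional path is straight, the regression target is the constant velocity x1 − x0, which is what makes near-straight trajectories and few-step sampling possible.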

A theoretical paper proving the equivalence between training Denoising Autoencoders and performing Score Matching on a …
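
For reference, a hedged restatement of the result in the Gaussian-corruption case: the denoising score matching objective

$$
J_{\text{DSM}}(\theta) = \mathbb{E}_{x \sim p,\; \tilde{x} \sim \mathcal{N}(x,\, \sigma^2 I)}\left[ \tfrac{1}{2}\left\| s_\theta(\tilde{x}) - \frac{x - \tilde{x}}{\sigma^2} \right\|^2 \right]
$$

equals explicit score matching on the noise-smoothed density $q_\sigma$ up to an additive constant, since $\nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) = (x - \tilde{x})/\sigma^2$. In other words, learning to denoise is learning the score.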

Unified SDE framework for score-based generative models, introducing Predictor-Corrector samplers and achieving SOTA on …
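
A minimal sketch of one Predictor-Corrector update for a VE SDE, assuming a hypothetical `score_model(x, sigma)` interface and noise levels with sigma_t > sigma_prev; step-size details vary across implementations.

```python
import torch

def pc_step(score_model, x, sigma_t, sigma_prev, snr=0.16):
    """One Predictor-Corrector update for a VE SDE (hedged sketch).
    Predictor: reverse-diffusion (ancestral) step between noise levels.
    Corrector: one Langevin MCMC step at the new noise level."""
    # Predictor: reverse-diffusion step from sigma_t down to sigma_prev
    z = torch.randn_like(x)
    tau = sigma_t ** 2 - sigma_prev ** 2
    x = x + tau * score_model(x, sigma_t) + tau ** 0.5 * z
    # Corrector: Langevin step, step size set from a target signal-to-noise ratio
    z = torch.randn_like(x)
    g = score_model(x, sigma_prev)
    eps = 2 * (snr * z.norm() / g.norm()) ** 2
    x = x + eps * g + (2 * eps) ** 0.5 * z
    return x
```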

Summary of Kingma & Welling's foundational VAE paper introducing the reparameterization trick and variational …

The key difference between multi-sample VAEs and IWAEs: how taking the log of an average of importance weights, rather than the average of their logs, yields a tighter bound on the log-likelihood.
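
Made precise (a standard derivation, with importance weights $w_i = p(x, z_i)/q(z_i \mid x)$):

$$
\mathcal{L}_K^{\text{IWAE}} = \mathbb{E}_{z_{1:K} \sim q(\cdot \mid x)}\left[\log \frac{1}{K} \sum_{i=1}^{K} w_i\right] \;\geq\; \mathbb{E}_{z_{1:K} \sim q(\cdot \mid x)}\left[\frac{1}{K} \sum_{i=1}^{K} \log w_i\right] = \mathcal{L}_{\text{ELBO}},
$$

by Jensen's inequality (the log of an average dominates the average of logs), and $\mathcal{L}_K \to \log p(x)$ as $K \to \infty$.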

Summary of Burda, Grosse & Salakhutdinov's ICLR 2016 paper introducing Importance Weighted Autoencoders for tighter …

Aneja et al.'s NeurIPS 2021 paper introducing Noise Contrastive Priors (NCPs) to address the VAE's 'prior hole' problem with …

Learn to implement VAEs in PyTorch: ELBO objective, reparameterization trick, loss scaling, and MNIST experiments on …
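
A minimal negative-ELBO sketch in the spirit of that post; `decoder`, `mu`, and `logvar` stand in for a hypothetical encoder/decoder pair, and a sigmoid decoder output is assumed for binarized MNIST.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, decoder, mu, logvar, beta=1.0):
    """Negative ELBO for a Gaussian-posterior VAE (hedged sketch)."""
    # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps
    # Reconstruction term (Bernoulli likelihood, e.g. binarized MNIST)
    recon = F.binary_cross_entropy(decoder(z), x, reduction="sum") / x.shape[0]
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
    return recon + beta * kl  # beta = 1 recovers the standard ELBO
```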

Complete guide to GAN objective functions including WGAN, LSGAN, Fisher GAN, and more. Understand which loss function to …
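
To make the contrast concrete, a hedged sketch of two of the objectives such a guide typically covers, using raw discriminator (critic) outputs `d_real`/`d_fake` as hypothetical inputs.

```python
import torch
import torch.nn.functional as F

def nonsaturating_gan_losses(d_real, d_fake):
    """Standard GAN with the non-saturating generator loss."""
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return d_loss, g_loss

def wgan_losses(d_real, d_fake):
    """WGAN: the critic estimates the Wasserstein-1 distance; no sigmoid, no log.
    Requires a Lipschitz constraint (weight clipping or a gradient penalty)."""
    d_loss = d_fake.mean() - d_real.mean()
    g_loss = -d_fake.mean()
    return d_loss, g_loss
```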