Predictive Chemistry
Scatter plot showing molecules ranked by perplexity score with color coding for task-relevant (positive delta) versus pretraining-biased (negative delta) generations

Perplexity for Molecule Ranking and CLM Bias Detection

This study applies perplexity, a model-intrinsic metric from NLP, to rank de novo molecular designs generated by SMILES-based chemical language models (CLMs) and introduces a delta score to detect pretraining bias in transfer-learned CLMs.
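
As a rough illustration, the sketch below scores a SMILES string under a HuggingFace-style causal language model and forms the delta between pretrained and fine-tuned perplexities. The interfaces and the sign convention (positive = task-relevant, as in the figure above) are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def smiles_perplexity(model, tokenizer, smiles):
    """Per-token perplexity of one SMILES string under a causal LM."""
    ids = tokenizer(smiles, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # token i is predicted from tokens < i, so shift logits/targets by one
    nll = F.cross_entropy(logits[0, :-1], ids[0, 1:], reduction="mean")
    return torch.exp(nll).item()

def delta_score(pretrained, finetuned, tokenizer, smiles):
    # Sign convention assumed here: positive delta = the fine-tuned model
    # finds the molecule more plausible than the pretrained one
    # (task-relevant); negative delta flags pretraining bias.
    return (smiles_perplexity(pretrained, tokenizer, smiles)
            - smiles_perplexity(finetuned, tokenizer, smiles))
```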

Molecular Generation
Spectral performance curve showing model accuracy declining as train-test overlap decreases

SPECTRA: Evaluating Generalizability of Molecular AI

Introduces SPECTRA, a framework that generates spectral performance curves to measure how ML model accuracy degrades as train-test overlap decreases across molecular sequencing tasks.
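
The idea behind such a curve can be emulated cheaply: retrain while tightening the allowed train-test similarity and record performance at each level. The sketch below is a simplified stand-in that filters training molecules by Tanimoto similarity; SPECTRA itself builds splits from a spectral property graph, and `fit_predict` is a user-supplied train-and-score callback.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def overlap_curve(train_smiles, y_train, test_smiles, y_test, fit_predict,
                  thresholds=np.linspace(1.0, 0.2, 9)):
    """Performance vs. allowed train-test overlap (illustrative stand-in)."""
    fp = lambda s: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
    train_fps = [fp(s) for s in train_smiles]
    test_fps = [fp(s) for s in test_smiles]
    curve = []
    for t in thresholds:
        # keep only training molecules whose max similarity to the test set is below t
        keep = [i for i, f in enumerate(train_fps)
                if max(DataStructs.BulkTanimotoSimilarity(f, test_fps)) < t]
        score = fit_predict([train_smiles[i] for i in keep],
                            [y_train[i] for i in keep], test_smiles, y_test)
        curve.append((t, score))
    return curve
```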

Molecular Generation
Diagram showing the TamGen three-stage pipeline from protein pocket encoding through compound generation to experimental testing

TamGen: GPT-Based Target-Aware Drug Design and Generation

Introduces TamGen, a target-aware molecular generation method using a pre-trained GPT-like chemical language model with protein structure conditioning. A Design-Refine-Test pipeline discovers 14 inhibitors against the tuberculosis ClpP protease, with IC50 values as low as 1.9 µM.
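
For intuition only, a bare-bones form of target conditioning can be written as prefix conditioning: pocket features are encoded into embeddings prepended to the SMILES token stream of a causal decoder. This is a loose sketch with made-up layer sizes; TamGen's actual architecture additionally uses cross-attention and a VAE-based refinement branch, omitted here.

```python
import torch
import torch.nn as nn

class TargetConditionedCLM(nn.Module):
    """Sketch: pocket embeddings act as a prefix for a GPT-like SMILES decoder."""
    def __init__(self, vocab_size, d_model=256, n_layers=4):
        super().__init__()
        self.pocket_encoder = nn.Sequential(   # stand-in for a 3D pocket encoder
            nn.Linear(4, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.tok = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, pocket_atoms, smiles_ids):
        prefix = self.pocket_encoder(pocket_atoms)          # (B, P, d)
        x = torch.cat([prefix, self.tok(smiles_ids)], 1)    # prefix + SMILES tokens
        L = x.size(1)
        mask = torch.triu(torch.full((L, L), float("-inf")), 1)  # causal mask
        return self.lm_head(self.decoder(x, mask=mask))
```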

Molecular Representations
Log-log plots showing power-law scaling of ChemGPT validation loss versus model size and GNN force field loss versus dataset size

Neural Scaling of Deep Chemical Models

Frey et al. discover empirical power-law scaling relations for both chemical language models (ChemGPT, up to 1B parameters) and equivariant GNN interatomic potentials, finding that neither domain has saturated with respect to model size, data, or compute.
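
A power law L(N) = a · N^(-α) is a straight line in log-log space, so the exponent can be recovered with a one-degree polynomial fit; the loss values below are made-up placeholders, not numbers from the paper.

```python
import numpy as np

def fit_power_law(sizes, losses):
    """Fit L(N) = a * N^(-alpha) by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
    return np.exp(intercept), -slope   # prefactor a, exponent alpha

sizes = np.array([1e6, 1e7, 1e8, 1e9])        # model sizes from a scaling sweep
losses = np.array([2.10, 1.65, 1.30, 1.02])   # hypothetical validation losses
a, alpha = fit_power_law(sizes, losses)
print(f"L(N) = {a:.2f} * N^-{alpha:.3f}")
```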

Predictive Chemistry
QSPR surface roughness comparison across molecular representations, showing smooth fingerprint surfaces versus rougher pretrained model surfaces

ROGI-XD: Roughness of Pretrained Molecular Representations

This paper introduces ROGI-XD, a reformulation of the ROuGhness Index that enables fair comparison of QSPR surface roughness across molecular representations of different dimensionalities. Evaluating VAE, GIN, ChemBERTa, and ChemGPT representations, the authors show that pretrained chemical models do not produce smoother structure-property landscapes than simple molecular fingerprints or descriptors.
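
The quantity being compared can be sketched as follows: coarse-grain the property landscape by clustering representations at increasing distance thresholds and integrate how fast the property dispersion collapses; rough landscapes lose dispersion quickly and score higher. This is an illustrative simplification, not the exact ROGI-XD estimator.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def roughness_sketch(X, y, n_thresholds=50):
    """Simplified ROGI-style roughness index over representations X, property y."""
    y = np.asarray(y, dtype=float)
    D = pdist(X)
    Z = linkage(D / D.max(), method="complete")   # normalized distances in [0, 1]
    sigma0 = y.std()
    drops = []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        labels = fcluster(Z, t=t, criterion="distance")
        # replace each property value by its cluster mean (coarse-graining)
        coarse = np.array([y[labels == c].mean() for c in labels])
        drops.append((sigma0 - coarse.std()) / sigma0)
    return 2.0 * np.trapz(drops, dx=1.0 / (n_thresholds - 1))
```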

Molecular Generation
Taxonomy diagram showing the three axes of MolGenSurvey: molecular representations (1D string, 2D graph, 3D geometry), generative methods (deep generative models and combinatorial optimization), and eight generation tasks (1D/2D and 3D)

MolGenSurvey: Systematic Survey of ML for Molecule Design

MolGenSurvey systematically reviews ML models for molecule design, organizing the field by molecular representation (1D/2D/3D), generative method (deep generative models vs. combinatorial optimization), and task type (8 distinct generation/optimization tasks). It catalogs over 100 methods, unifies task definitions via input/output/goal taxonomy, and identifies key challenges including out-of-distribution generation, oracle costs, and lack of unified benchmarks.

Predictive Chemistry
Diagram of the tied two-way transformer architecture with shared encoder, retro and forward decoders, latent variables, and cycle consistency, alongside USPTO-50K accuracy and validity results

Tied Two-Way Transformers for Diverse Retrosynthesis

This paper couples a retrosynthesis transformer with a forward reaction transformer through parameter sharing, cycle consistency checks, and multinomial latent variables. The combined approach reduces top-1 SMILES invalidity to 0.1% on USPTO-50K, improves top-10 accuracy to 78.5%, and achieves 87.3% pathway coverage on a multi-pathway in-house dataset.
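
At inference time the round-trip idea reduces to a simple filter: propose precursors with the retro model, re-predict the product with the forward model, and keep suggestions that close the cycle. The `retro_model`/`forward_model` interfaces below are hypothetical stand-ins; in the paper, consistency is also enforced during training through the shared parameters and latent variables.

```python
def cycle_consistent_precursors(product, retro_model, forward_model, n_best=10):
    """Keep retrosynthesis suggestions whose forward round-trip recovers
    the original product. Assumes each model exposes a hypothetical
    .predict(smiles, n_best) -> list[str] beam of SMILES candidates."""
    kept = []
    for precursors in retro_model.predict(product, n_best=n_best):
        roundtrip = forward_model.predict(precursors, n_best=1)[0]
        if roundtrip == product:   # comparison on canonical SMILES assumed
            kept.append(precursors)
    return kept
```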

Molecular Representations
BARTSmiles ablation study summary showing impact of pre-training strategies on downstream task performance

BARTSmiles: BART Pre-Training for Molecular SMILES

BARTSmiles pre-trains a BART-large model on 1.7 billion SMILES strings from ZINC20 and achieves the best reported results on 11 classification, regression, and generation benchmarks.
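
The data side of BART-style denoising is easy to sketch: corrupt each SMILES by replacing random spans with a mask token and train the model to reconstruct the original. Poisson(3) span lengths follow the original BART recipe; whether BARTSmiles uses exactly these noising hyperparameters is an assumption here.

```python
import numpy as np

def span_corrupt(tokens, mask_token="<mask>", lam=3, p=0.3, seed=0):
    """BART-style text infilling: replace random spans of a tokenized
    SMILES with a single mask token; the model learns to reconstruct."""
    rng = np.random.default_rng(seed)
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < p:
            out.append(mask_token)
            i += max(1, rng.poisson(lam))  # span length ~ Poisson(3), as in BART
        else:
            out.append(tokens[i])
            i += 1
    return out

span_corrupt(list("CC(=O)Oc1ccccc1C(=O)O"))  # character-level tokens of aspirin
```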

Predictive Chemistry
Three distribution plots showing RNN language models closely matching training distributions across peaked, multi-modal, and large-scale molecular generation tasks while graph models fail

Language Models Learn Complex Molecular Distributions

This study benchmarks RNN-based chemical language models against graph generative models on three challenging tasks: distributions concentrated at high penalized LogP values, multi-modal molecular distributions, and large-molecule generation from PubChem. The LSTM language models consistently outperform JTVAE and CGVAE.
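
The winning models are architecturally plain; a minimal character-level SMILES LSTM with autoregressive sampling looks like the sketch below (layer sizes are typical choices, not the paper's exact hyperparameters).

```python
import torch
import torch.nn as nn

class SmilesLSTM(nn.Module):
    """Minimal character-level SMILES language model: embed -> LSTM -> logits."""
    def __init__(self, vocab_size, d_emb=128, d_hid=512, n_layers=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.lstm = nn.LSTM(d_emb, d_hid, n_layers, batch_first=True)
        self.head = nn.Linear(d_hid, vocab_size)

    def forward(self, ids, state=None):
        h, state = self.lstm(self.emb(ids), state)
        return self.head(h), state

    @torch.no_grad()
    def sample(self, bos_id, eos_id, max_len=200, temperature=1.0):
        """Autoregressive sampling of one token sequence."""
        ids, state, out = torch.tensor([[bos_id]]), None, []
        for _ in range(max_len):
            logits, state = self(ids, state)
            probs = torch.softmax(logits[0, -1] / temperature, -1)
            nxt = torch.multinomial(probs, 1)
            if nxt.item() == eos_id:
                break
            out.append(nxt.item())
            ids = nxt.view(1, 1)
        return out
```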

Molecular Generation
Diagram of the LIMO pipeline showing gradient-based reverse optimization flowing backward through a frozen property predictor and VAE decoder to optimize the latent space z

LIMO: Latent Inceptionism for Targeted Molecule Generation

LIMO combines a SELFIES-based VAE with a novel stacked property predictor architecture (decoder output as predictor input) and gradient-based reverse optimization on the latent space. It is 6-8x faster than RL baselines and 12x faster than sampling methods while generating molecules with nanomolar binding affinities, including a predicted KD of 6 × 10⁻¹⁴ M against the human estrogen receptor.
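
The optimization loop itself is compact: freeze both networks and take gradient steps on z alone. The sketch below shows the general recipe with stand-in `decoder` and `predictor` modules; the step count and learning rate are arbitrary.

```python
import torch

def optimize_latent(decoder, predictor, z_dim=1024, steps=1000, lr=0.1):
    """Gradient-based reverse optimization: ascend the predicted property
    with respect to z, keeping decoder and predictor frozen. (Stand-in
    modules; in LIMO the predictor consumes the decoder's probabilistic
    SELFIES output directly.)"""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = decoder(z)          # continuous token probabilities (frozen VAE decoder)
        score = predictor(x)    # frozen property predictor
        (-score).backward()     # negate loss => gradient *ascent* on the property
        opt.step()
    return z.detach()
```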

Molecular Generation
Diagram showing the UnCorrupt SMILES pipeline: invalid SMILES are corrected by a transformer seq2seq model into valid SMILES, with correction rates of 62-95% across generator types

UnCorrupt SMILES: Post Hoc Correction for De Novo Design

This paper trains a transformer model to correct invalid SMILES produced by de novo molecular generators (RNN, VAE, GAN). The corrector fixes 62-95% of invalid outputs, and the fixed molecules are comparable in novelty and similarity to valid generator outputs. The approach also enables local chemical space exploration by introducing and correcting errors in existing molecules.
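
In a pipeline this becomes a post-processing step: check validity with RDKit and route failures through the corrector. The `corrector.translate` interface below is a hypothetical stand-in for the trained seq2seq model.

```python
from rdkit import Chem

def correct_invalid(smiles_batch, corrector):
    """Pass valid SMILES through; send invalid ones to a (hypothetical)
    seq2seq corrector.translate(smiles) -> smiles and re-check validity."""
    fixed = []
    for smi in smiles_batch:
        if Chem.MolFromSmiles(smi) is not None:   # RDKit validity check
            fixed.append(smi)
            continue
        candidate = corrector.translate(smi)      # transformer correction
        if Chem.MolFromSmiles(candidate) is not None:
            fixed.append(candidate)
    return fixed
```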

Predictive Chemistry
Molecular Transformer architecture showing atom-wise tokenized SMILES input through encoder-decoder with multi-head attention to predict reaction products

Molecular Transformer: Calibrated Reaction Prediction

The Molecular Transformer applies the Transformer architecture to forward reaction prediction, treating it as SMILES-to-SMILES machine translation. It achieves 90.4% top-1 accuracy on USPTO_MIT, outperforms quantum-chemistry baselines on regioselectivity, and provides calibrated uncertainty scores (0.89 AUC-ROC) for ranking synthesis pathways.
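
The atom-wise tokenization it relies on is usually implemented as a single regular expression that keeps bracket atoms, two-letter elements (Cl, Br), and two-digit ring closures (%nn) as single tokens; the pattern below follows that widely used scheme.

```python
import re

# Atom-wise SMILES tokenization of the kind used for SMILES-to-SMILES translation.
SMI_REGEX = (r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
             r"|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])")

def tokenize(smiles):
    return re.findall(SMI_REGEX, smiles)

tokenize("CC(=O)Oc1ccccc1C(=O)O")
# -> ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1',
#     'C', '(', '=', 'O', ')', 'O']
```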