Molecular Representations
Horizontal bar chart showing X-MOL achieves best performance across five molecular tasks

X-MOL: Pre-training on 1.1B Molecules for SMILES

X-MOL applies large-scale Transformer pre-training on 1.1 billion molecules with a generative SMILES-to-SMILES strategy, then fine-tunes for five molecular analysis tasks including property prediction, reaction analysis, and de novo generation.

Molecular Generation
Diagram showing back translation workflow with forward and reverse models mapping between source and target molecular domains, augmented by unlabeled ZINC molecules

Back Translation for Semi-Supervised Molecule Generation

Adapts back translation from NLP to molecular generation, using unlabeled molecules from ZINC to create synthetic training pairs that improve property optimization and retrosynthesis prediction across Transformer and graph-based architectures.
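
The core loop can be sketched with dictionary stand-ins for the trained models (all molecules and mappings below are hypothetical, and ZINC molecules would play the role of the unlabeled targets):

```python
# Toy back-translation loop: a reverse model turns unlabeled target-domain
# molecules into synthetic (source, target) pairs that augment the forward
# model's supervised training set. Both "models" here are stand-in dicts,
# not trained networks.

def back_translate(unlabeled_targets, reverse_model):
    """Create synthetic (synthetic_source, real_target) training pairs."""
    pairs = []
    for target in unlabeled_targets:
        synthetic_source = reverse_model(target)  # reverse: target -> source
        pairs.append((synthetic_source, target))
    return pairs

# Stand-in reverse model: pretend retrosynthesis maps a product to a reactant.
toy_reverse = {"CCO": "CC=O", "CCN": "CC#N"}.get

labeled = [("CC=O", "CCO")]                       # small supervised set
synthetic = back_translate(["CCO", "CCN"], toy_reverse)
augmented = labeled + synthetic                   # train the forward model on this
```

The synthetic sources may be noisy, but the targets are real molecules, which is what makes the augmented pairs useful for the forward direction.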

Molecular Generation
Two Gaussian distributions in ChemNet activation space with the Fréchet distance shown between them

Fréchet ChemNet Distance for Molecular Generation

Introduces the Fréchet ChemNet Distance (FCD), a single metric that captures chemical validity, biological relevance, and diversity of generated molecules by comparing distributions of learned ChemNet representations.
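
The metric is the Fréchet distance between two Gaussians fitted to ChemNet activations of the reference and generated sets. A minimal sketch, assuming diagonal covariances so the matrix square root in the general formula reduces to per-dimension terms (the real FCD uses full covariances of learned features):

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Fréchet distance between N(mu1, diag(var1)) and N(mu2, diag(var2)).

    The general formula  ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))
    reduces, for diagonal covariances, to a sum of per-dimension terms.
    """
    mean_term = sum((m1 - m2) ** 2 for m1, m2 in zip(mu1, mu2))
    cov_term = sum((math.sqrt(v1) - math.sqrt(v2)) ** 2
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical distributions score 0; the distance grows as means or variances diverge.
```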

Molecular Generation
Comparison bar chart showing penalized logP scores for GB-GA, GB-GM-MCTS, and ML-based molecular optimization methods

Graph-Based GA and MCTS Generative Model for Molecules

A graph-based genetic algorithm (GB-GA) and a graph-based generative model with Monte Carlo tree search (GB-GM-MCTS) for molecular optimization that match or outperform ML-based generative approaches while being orders of magnitude faster.
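
The GA side can be sketched with a toy loop over fixed-length strings and a stand-in fitness (counting carbons); the real GB-GA applies chemically aware crossover and mutation to molecular graphs via RDKit:

```python
import random

def toy_ga(population, fitness, generations=20, mutation_rate=0.3, seed=0):
    """Minimal GA loop: select two fit parents, crossover, maybe mutate,
    replace the worst individual if the child improves on it."""
    rng = random.Random(seed)
    alphabet = "CNO"  # stand-in for chemically valid graph edits
    for _ in range(generations):
        a, b = sorted(population, key=fitness, reverse=True)[:2]
        cut = rng.randrange(1, min(len(a), len(b)))
        child = a[:cut] + b[cut:]                 # one-point "crossover"
        if rng.random() < mutation_rate:          # point "mutation"
            i = rng.randrange(len(child))
            child = child[:i] + rng.choice(alphabet) + child[i + 1:]
        worst = min(population, key=fitness)
        if fitness(child) > fitness(worst):
            population[population.index(worst)] = child
    return max(population, key=fitness)

best = toy_ga(["CNCNO", "OCOCN", "NNOOC"], fitness=lambda s: s.count("C"))
```

No gradients, no training: each generation only needs fitness evaluations, which is why such methods can be orders of magnitude faster than learned generators.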

Predictive Chemistry
Scatter plot showing molecules ranked by perplexity score with color coding for task-relevant (positive delta) versus pretraining-biased (negative delta) generations

Perplexity for Molecule Ranking and CLM Bias Detection

This study applies perplexity, a model-intrinsic metric from NLP, to rank de novo molecular designs generated by SMILES-based chemical language models and introduces a delta score to detect pretraining bias in transfer-learned CLMs.
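
Perplexity needs only the model's own per-token probabilities, which is what makes it model-intrinsic. A sketch, with an assumed sign convention for the delta score (not taken verbatim from the paper):

```python
import math

def perplexity(token_logprobs):
    """PPL = exp(-(1/N) * sum(log p_i)) over a sequence's per-token natural-log
    probabilities. Lower means the model finds the SMILES more plausible."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def rank_by_perplexity(designs):
    """Rank generated designs, given as (smiles, token_logprobs) pairs."""
    return sorted(designs, key=lambda d: perplexity(d[1]))

def delta_score(ppl_pretrained, ppl_finetuned):
    """Assumed convention: positive when the fine-tuned model finds the design
    more plausible than the pretrained one (task-relevant); negative suggests
    the generation leans on pretraining bias."""
    return ppl_pretrained - ppl_finetuned
```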

Molecular Generation
Spectral performance curve showing model accuracy declining as train-test overlap decreases

SPECTRA: Evaluating Generalizability of Molecular AI

Introduces SPECTRA, a framework that generates spectral performance curves to measure how ML model accuracy degrades as train-test overlap decreases across molecular sequencing tasks.
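
The curve and its scalar summary might be computed as below, assuming each split is reported as an (overlap, accuracy) pair; the function names are illustrative, not SPECTRA's actual API:

```python
def spectral_curve(splits):
    """Sort (overlap, accuracy) pairs by decreasing train-test overlap and
    summarize the resulting performance curve by its trapezoidal area:
    a model that degrades slowly as overlap shrinks scores higher."""
    curve = sorted(splits, key=lambda p: p[0], reverse=True)
    area = 0.0
    for (x1, y1), (x2, y2) in zip(curve, curve[1:]):
        area += (x1 - x2) * (y1 + y2) / 2.0
    return curve, area

curve, area = spectral_curve([(0.0, 0.5), (1.0, 0.9), (0.5, 0.7)])
```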

Molecular Generation
Visualization of STONED algorithm generating a local chemical subspace around a seed molecule through SELFIES string mutations, with a chemical path shown between two endpoints

STONED: Training-Free Molecular Design with SELFIES

STONED introduces simple string manipulation algorithms on SELFIES for molecular design, achieving competitive results with deep generative models while requiring no training data or GPU resources.
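
The mutation step can be sketched without the selfies library, using a hypothetical mini-alphabet of tokens; the point of operating on SELFIES is that every mutated string still decodes to a valid molecule, unlike raw SMILES:

```python
import random

# Hypothetical mini-alphabet standing in for the SELFIES robust alphabet.
ALPHABET = ["[C]", "[N]", "[O]", "[=C]", "[Ring1]"]

def mutate(tokens, rng):
    """One STONED-style string edit: insert, delete, or replace a random token."""
    tokens = list(tokens)
    op = rng.choice(["insert", "delete", "replace"])
    i = rng.randrange(len(tokens))
    if op == "insert":
        tokens.insert(i, rng.choice(ALPHABET))
    elif op == "delete" and len(tokens) > 1:
        del tokens[i]
    else:
        tokens[i] = rng.choice(ALPHABET)
    return tokens

def local_subspace(seed_tokens, n=10, rng=None):
    """A local chemical subspace: n single-edit neighbours of the seed molecule."""
    rng = rng or random.Random(0)
    return [mutate(seed_tokens, rng) for _ in range(n)]

neighbours = local_subspace(["[C]", "[C]", "[O]"])
```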

Molecular Generation
Diagram showing the TamGen three-stage pipeline from protein pocket encoding through compound generation to experimental testing

TamGen: GPT-Based Target-Aware Drug Design and Generation

Introduces TamGen, a target-aware molecular generation method using a pre-trained GPT-like chemical language model with protein structure conditioning. A Design-Refine-Test pipeline discovers 14 inhibitors against tuberculosis ClpP protease, with IC50 values as low as 1.9 µM.

Predictive Chemistry
Diagram of the tied two-way transformer architecture with shared encoder, retro and forward decoders, latent variables, and cycle consistency, alongside USPTO-50K accuracy and validity results

Tied Two-Way Transformers for Diverse Retrosynthesis

This paper couples a retrosynthesis transformer with a forward reaction transformer through parameter sharing, cycle consistency checks, and multinomial latent variables. The combined approach reduces top-1 SMILES invalidity to 0.1% on USPTO-50K, improves top-10 accuracy to 78.5%, and achieves 87.3% pathway coverage on a multi-pathway in-house dataset.
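
The round-trip check at the heart of cycle consistency can be sketched with dictionary stand-ins for the two tied models (all molecules and mappings here are hypothetical):

```python
def cycle_consistent(products, retro_model, forward_model):
    """Keep retrosynthesis suggestions whose predicted reactants map back to
    the original product under the forward model (round-trip filter)."""
    kept = []
    for product in products:
        reactants = retro_model(product)
        if forward_model(reactants) == product:
            kept.append((product, reactants))
    return kept

# Stand-in models: one retro prediction round-trips, the other does not.
retro = {"CCO": "CC=O", "CCN": "CC#N"}.get
forward = {"CC=O": "CCO", "CC#N": "CCC"}.get  # "CC#N" fails the round trip

kept = cycle_consistent(["CCO", "CCN"], retro, forward)
```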

Molecular Representations
BARTSmiles ablation study summary showing impact of pre-training strategies on downstream task performance

BARTSmiles: BART Pre-Training for Molecular SMILES

BARTSmiles pre-trains a BART-large model on 1.7 billion SMILES strings from ZINC20 and achieves the best reported results on 11 classification, regression, and generation benchmarks.

Molecular Generation
Diagram of the LIMO pipeline showing gradient-based reverse optimization flowing backward through a frozen property predictor and VAE decoder to optimize the latent space z

LIMO: Latent Inceptionism for Targeted Molecule Generation

LIMO combines a SELFIES-based VAE with a novel stacked property predictor architecture (decoder output as predictor input) and gradient-based reverse optimization on the latent space. It is 6-8x faster than RL baselines and 12x faster than sampling methods while generating molecules with nanomolar binding affinities, including a predicted KD of 6e-14 M against the human estrogen receptor.

Predictive Chemistry
Regression Transformer dual-masking concept showing property prediction (mask numbers) and conditional generation (mask molecules) in a single model

Regression Transformer: Prediction Meets Generation

The Regression Transformer (RT) reformulates regression as conditional sequence modelling, enabling a single XLNet-based model to both predict continuous molecular properties and generate novel molecules conditioned on desired property values.
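
The dual-masking idea can be sketched on one joint sequence of property and molecule tokens (token names hypothetical): masking the number tokens trains property prediction, masking the molecule tokens trains property-conditioned generation, and one model serves both tasks.

```python
# Joint sequence: a property tag, its digits, a separator, then the molecule.
SEQ = ["<logp>", "0", ".", "7", "3", "|", "C", "C", "O"]

def mask(seq, mode):
    """Dual masking: hide either the property digits or the molecule tokens."""
    sep = seq.index("|")
    if mode == "predict":    # hide the property value, keep the molecule
        return ["<mask>" if 0 < i < sep else t for i, t in enumerate(seq)]
    if mode == "generate":   # keep the desired property, hide the molecule
        return [t if i <= sep else "<mask>" for i, t in enumerate(seq)]
    raise ValueError(mode)
```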