Computational Chemistry
Pipeline diagram showing natural language chemistry questions flowing through fine-tuned GPT-3 to chemical predictions across molecules, materials, and reactions

Fine-Tuning GPT-3 for Predictive Chemistry Tasks

Jablonka et al. show that fine-tuning GPT-3 on natural language chemistry questions achieves performance competitive with or superior to dedicated ML models across 15 benchmarks, with particular strength in low-data settings and inverse molecular design.

Computational Chemistry
Visualization of Galactica corpus composition and benchmark performance comparing Galactica 120B against baselines

Galactica: A Curated Scientific LLM from Meta AI

Galactica trains a decoder-only Transformer on a curated 106B-token scientific corpus spanning papers, proteins, and molecules, achieving strong results on scientific QA, mathematical reasoning, and citation prediction.

Computational Chemistry
Diagram comparing character-level VAE with low validity to Grammar VAE using parse tree constraints for molecular generation

Grammar VAE: Generating Valid Molecules via CFGs

The Grammar VAE replaces character-level decoding with context-free grammar production rules, using a stack-based masking mechanism to guarantee that all generated SMILES strings are syntactically valid. Applied to molecular optimization and symbolic regression, it learns smoother latent spaces and finds better molecules than character-level baselines.
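To make the masking idea concrete, here is a minimal sketch of grammar-constrained decoding in the spirit of the Grammar VAE. The toy grammar, terminal set, and `choose` callback are illustrative stand-ins, not the paper's actual SMILES grammar or decoder.

```python
# Minimal sketch of grammar-constrained decoding, Grammar VAE-style.
# The grammar and helper names here are illustrative, not from the paper.
import random

# Toy CFG fragment for SMILES-like strings: nonterminal -> list of productions.
GRAMMAR = {
    "smiles": [["chain"]],
    "chain": [["atom"], ["atom", "chain"]],
    "atom": [["C"], ["O"], ["N"]],
}
TERMINALS = {"C", "O", "N"}

def decode_with_grammar(choose, start="smiles", max_steps=50):
    """Expand the grammar with a stack; `choose` picks among the
    productions valid for the current stack top -- exactly the mask the
    Grammar VAE applies to its decoder logits at each step."""
    stack, output = [start], []
    for _ in range(max_steps):
        if not stack:
            break
        top = stack.pop()
        if top in TERMINALS:
            output.append(top)
            continue
        # Mask: only productions whose left-hand side matches `top`.
        prod = choose(top, GRAMMAR[top])
        # Push the right-hand side in reverse so it unrolls left-to-right.
        stack.extend(reversed(prod))
    return "".join(output)

random.seed(0)
s = decode_with_grammar(lambda nt, opts: random.choice(opts))
print(s)  # every sampled string is a valid word of the grammar
```

In the real model, `choose` would be replaced by sampling from the decoder's masked softmax; the stack discipline is what guarantees syntactic validity.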

Computational Chemistry
LatentGAN pipeline from SMILES encoder through latent space WGAN-GP to SMILES decoder

LatentGAN: Latent-Space GAN for Molecular Generation

LatentGAN decouples molecular generation from SMILES syntax by training a Wasserstein GAN on latent vectors from a pretrained heteroencoder, enabling de novo design of drug-like and target-biased compounds.

Computational Chemistry
SMolInstruct dataset feeding into four base models for chemistry instruction tuning

LlaSMol: Instruction-Tuned LLMs for Chemistry Tasks

LlaSMol fine-tunes Mistral, Llama 2, and other open-source LLMs on SMolInstruct, a 3.3M-sample instruction-tuning dataset covering 14 chemistry tasks. The Mistral-based model outperforms GPT-4 and Claude 3 Opus across all tasks.

Computational Chemistry
LSTM cells generating SMILES characters alongside validity and novelty statistics for drug-like molecule generation

LSTM Neural Network for Drug-Like Molecule Generation

Ertl et al. train a character-level LSTM on 509K bioactive ChEMBL SMILES and generate one million novel, diverse molecules whose physicochemical properties, substructure features, and predicted bioactivity closely match the training distribution.
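The generation side of this setup boils down to a character-level sampling loop with a softmax temperature. The sketch below assumes that loop; the vocabulary and the stand-in `dummy` probability model are invented for illustration, since the trained LSTM itself is not part of this example.

```python
# Hedged sketch of character-level SMILES sampling from a trained LSTM.
# The "model" is a stand-in returning fixed probabilities over the vocab.
import random

VOCAB = ["C", "O", "N", "=", "(", ")", "\n"]  # "\n" terminates a molecule

def sample_smiles(next_char_probs, temperature=1.0, max_len=100, rng=random):
    """Sample characters until the end token, rescaling probabilities
    with a softmax temperature before each draw."""
    out = []
    for _ in range(max_len):
        probs = next_char_probs("".join(out))
        # Temperature rescaling: w_i = p_i^(1/T), then renormalize.
        w = [p ** (1.0 / temperature) for p in probs]
        r, acc = rng.random() * sum(w), 0.0
        for ch, wi in zip(VOCAB, w):
            acc += wi
            if r <= acc:
                break
        if ch == "\n":
            break
        out.append(ch)
    return "".join(out)

# Dummy "model": prefers carbon, terminates with modest probability.
dummy = lambda prefix: [0.5, 0.1, 0.1, 0.05, 0.05, 0.05, 0.15]
random.seed(1)
print(sample_smiles(dummy, temperature=0.7))
```

Lower temperatures concentrate mass on high-probability characters (more conservative molecules); higher temperatures flatten the distribution and raise novelty at the cost of validity.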

Computational Chemistry
Diagram showing how memory-assisted reinforcement learning explores multiple local maxima in chemical space compared to standard RL

Memory-Assisted RL for Diverse De Novo Molecular Design

Introduces a memory unit that modifies the RL reward function to penalize previously explored chemical scaffolds, substantially increasing the diversity of generated molecules while maintaining relevance to known active ligands.
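The core mechanism can be sketched in a few lines: a memory counts visits per scaffold and discounts the reward once a scaffold has been seen too often. The bucket size and penalty factor below are illustrative assumptions, not the paper's hyperparameters.

```python
# Illustrative sketch of a scaffold-memory reward penalty.
# Bucket size and penalty factor are assumptions for demonstration.
from collections import Counter

class ScaffoldMemory:
    """Down-weights rewards for scaffolds the agent keeps revisiting,
    pushing the policy toward unexplored regions of chemical space."""
    def __init__(self, bucket_size=3, penalty=0.5):
        self.counts = Counter()
        self.bucket_size = bucket_size  # free visits before penalties start
        self.penalty = penalty          # multiplicative discount per excess visit

    def adjusted_reward(self, scaffold, raw_reward):
        self.counts[scaffold] += 1
        excess = max(0, self.counts[scaffold] - self.bucket_size)
        return raw_reward * (self.penalty ** excess)

mem = ScaffoldMemory()
rewards = [mem.adjusted_reward("benzene-like", 1.0) for _ in range(5)]
print(rewards)  # [1.0, 1.0, 1.0, 0.5, 0.25]
```

Because the discount only kicks in after repeated visits, the agent can still exploit a good scaffold briefly before the shaped reward pushes it elsewhere.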

Computational Chemistry
Molecular graph being built atom-by-atom with BFS ordering and property optimization bars

MolecularRNN: Graph-Based Molecular Generation and RL

Proposes MolecularRNN, a graph recurrent model that generates molecular graphs atom-by-atom with 100% validity via valency-based rejection sampling, then shifts property distributions using policy gradient reinforcement learning.
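The valency check behind the rejection sampling is simple to state: a proposed bond is rejected if it would push either endpoint past its maximum valence. The valence table and edge-list graph encoding below are illustrative, not MolecularRNN's internal representation.

```python
# Sketch of valency-based rejection at generation time, in the spirit of
# MolecularRNN. Valence table and graph encoding are illustrative.
MAX_VALENCE = {"C": 4, "N": 3, "O": 2}

def bond_allowed(atoms, bonds, i, j, order):
    """Return True if adding a bond of the given order between atoms i
    and j keeps both within their maximum valence."""
    used = [0] * len(atoms)
    for a, b, o in bonds:   # bonds: (atom_idx, atom_idx, bond_order)
        used[a] += o
        used[b] += o
    return (used[i] + order <= MAX_VALENCE[atoms[i]]
            and used[j] + order <= MAX_VALENCE[atoms[j]])

atoms = ["C", "O"]
bonds = [(0, 1, 2)]                          # C=O, oxygen fully valent
print(bond_allowed(atoms, bonds, 0, 1, 1))   # False: O already at valence 2
```

Rejecting invalid bond proposals and resampling is what gives the model its 100% validity guarantee without constraining the learned distribution elsewhere.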

Computational Chemistry
MolFM trimodal architecture fusing 2D graph, knowledge graph, and biomedical text via cross-modal attention

MolFM: Trimodal Molecular Foundation Pre-training

MolFM pre-trains a multimodal encoder that fuses 2D molecular graphs, biomedical text, and knowledge graph entities through fine-grained cross-modal attention, achieving strong gains on cross-modal retrieval, molecule captioning, text-based generation, and property prediction.

Computational Chemistry
MoMu architecture showing contrastive alignment between molecular graph and scientific text modalities

MoMu: Bridging Molecular Graphs and Natural Language

MoMu pre-trains dual graph and text encoders on 15K molecule graph-text pairs using contrastive learning, enabling cross-modal retrieval, molecule captioning, zero-shot text-to-graph generation, and improved molecular property prediction.
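The contrastive objective behind this alignment is a symmetric InfoNCE loss over matched graph-text pairs. The toy embeddings and temperature below are made up for illustration; in MoMu they would come from the graph and text encoders.

```python
# Toy sketch of a symmetric InfoNCE loss for graph-text alignment.
# Embeddings and temperature are illustrative, not model outputs.
import math

def info_nce(graph_embs, text_embs, tau=0.1):
    """Matched (graph_i, text_i) pairs are pulled together; mismatched
    pairs in the batch act as negatives and are pushed apart."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    G = [norm(g) for g in graph_embs]
    T = [norm(t) for t in text_embs]
    sim = [[sum(a * b for a, b in zip(g, t)) / tau for t in T] for g in G]
    n, loss = len(G), 0.0
    for i in range(n):
        row = [math.exp(s) for s in sim[i]]            # graph -> text
        col = [math.exp(sim[j][i]) for j in range(n)]  # text -> graph
        loss += -math.log(row[i] / sum(row)) - math.log(col[i] / sum(col))
    return loss / (2 * n)

g = [[1.0, 0.0], [0.0, 1.0]]
aligned = info_nce(g, [[0.9, 0.1], [0.1, 0.9]])
shuffled = info_nce(g, [[0.1, 0.9], [0.9, 0.1]])
print(aligned < shuffled)  # True: correct pairings yield a lower loss
```

Once the two encoders share this embedding space, cross-modal retrieval reduces to nearest-neighbor search and zero-shot text-to-graph generation to decoding near a text embedding.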

Computational Chemistry
Seq2seq encoder-decoder translating reactant SMILES to product SMILES for reaction prediction

Neural Machine Translation for Reaction Prediction

This 2016 paper first proposed treating organic reaction prediction as a neural machine translation problem, using a GRU-based sequence-to-sequence model with attention to translate reactant SMILES strings into product SMILES strings.

Computational Chemistry
Architecture diagram showing ORGAN generator, discriminator, and objective reward with lambda interpolation formula

ORGAN: Objective-Reinforced GANs for Molecule Design

Proposes ORGAN, a framework that extends SeqGAN with domain-specific reward functions via reinforcement learning, enabling tunable generation of molecules optimized for druglikeness, solubility, and synthesizability while maintaining sample diversity.
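The "objective-reinforced" part is a linear interpolation between the discriminator signal and a domain objective. The sketch below shows that blend with placeholder scores; the actual D(x) and O(x) values would come from the trained discriminator and a property oracle.

```python
# Sketch of ORGAN's reward interpolation; the scores are placeholders,
# not outputs of real models.
def organ_reward(disc_score, objective_score, lam=0.5):
    """R(x) = lam * D(x) + (1 - lam) * O(x): the objective-reinforced
    reward fed to the policy-gradient (REINFORCE) update."""
    return lam * disc_score + (1.0 - lam) * objective_score

# lam = 1 recovers a plain GAN; lam = 0 is pure objective optimization.
print(organ_reward(0.8, 0.4, lam=1.0))  # 0.8
print(organ_reward(0.8, 0.4, lam=0.0))  # 0.4
print(organ_reward(0.8, 0.4, lam=0.5))  # midpoint of the two signals
```

Sweeping lambda trades off sample realism (discriminator term) against the domain objective, which is how ORGAN keeps diversity while optimizing druglikeness, solubility, or synthesizability.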