This section collects work that surveys, reviews, or systematizes the chemical language model (CLM) landscape rather than proposing new architectures.

| Year | Paper | Venue | Focus |
|------|-------|-------|-------|
| 2018 | Sánchez-Lengeling & Aspuru-Guzik | Science | VAE/GAN/RL framework for inverse molecular design |
| 2019 | Elton et al. | Mol. Syst. Des. Eng. | 45 papers across RNN, VAE, GAN, and RL architectures |
| 2022 | Du et al. (MolGenSurvey) | arXiv | 100+ methods across 1D/2D/3D representations |
| 2023 | Grisoni | Curr. Opin. Struct. Biol. | CLM generation strategies and experimental validations |
| 2023 | Chen et al. | Brief. Funct. Genom. | Empirical comparison of RNNs vs. transformers |
| 2024 | Flores-Hernandez & Martínez-Ledesma | J. Cheminform. | PRISMA review of 72 CLM papers via MOSES/GuacaMol |
| 2024 | Sultan et al. | J. Chem. Inf. Model. | 16 transformer models, seven design decisions |
| 2024 | Mswahili & Jeong | Heliyon | ~30 CLMs organized by encoder/decoder architecture |
| 2024 | Bran & Schwaller | Drug Dev. Informatics | Task-specific models to multimodal models to LLM agents |
| 2024 | Atz et al. | | Transformers across molecular science |
| 2024 | Tang et al. | Brief. Bioinform. | Molecule and protein generation, 12 benchmark tables |
| 2025 | Choi et al. | JACS Au | Small vs. big foundation models for chemistry |
