This section houses my personal notes on machine learning methodologies, architectures, and theoretical foundations. As the field evolves rapidly, maintaining a solid grasp of both cutting-edge techniques and historical milestones is crucial.
You can explore notes across several key areas:
- Generative Models: Deep dives into Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Normalizing Flows, and Diffusion Models. I focus heavily on the mathematical underpinnings and practical implementation details of these systems (a minimal VAE loss sketch appears after this list).
- Geometric Deep Learning: Notes on Graph Neural Networks (GNNs) and other architectures designed for non-Euclidean data. This area is particularly relevant for applications in chemistry and biology (a single-layer message-passing sketch also appears after this list).
- Classic Papers: Summaries and analyses of foundational papers that have shaped the current landscape of AI. Revisiting these often provides clarity on modern “innovations” that are actually rediscoveries of older principles.
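To give a flavor of the implementation detail the generative-model notes aim for, here is a minimal PyTorch sketch of a VAE's two moving parts: the reparameterization trick and the negative ELBO for a diagonal-Gaussian encoder with a Bernoulli decoder. The function names and shapes are illustrative placeholders, not code lifted from a specific note.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Draw z ~ N(mu, sigma^2) differentiably: z = mu + sigma * eps, eps ~ N(0, I)."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

def negative_elbo(x: torch.Tensor, x_recon: torch.Tensor,
                  mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Negative ELBO = reconstruction NLL + KL(q(z|x) || N(0, I))."""
    # Bernoulli decoder: x_recon is assumed to be sigmoid outputs in [0, 1].
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL for a diagonal Gaussian against the standard normal prior:
    # KL = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

The closed-form KL term is exactly the kind of result the notes derive line by line rather than taking on faith.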
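In the same spirit, the geometric deep learning notes build up message passing from first principles. The sketch below is one symmetrically normalized graph convolution in the style of Kipf and Welling's GCN, assuming a small dense adjacency matrix for readability; real implementations use sparse operations.

```python
import torch

def gcn_layer(x: torch.Tensor, adj: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    a_hat = adj + torch.eye(adj.size(0))  # add self-loops so nodes keep their own features
    deg = a_hat.sum(dim=1)                # degree of each node after adding self-loops
    d_inv_sqrt = deg.pow(-0.5)
    # Symmetric normalization keeps feature scales stable across nodes of different degree.
    norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(norm @ x @ weight)  # aggregate neighbors, then linearly transform
```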
These notes range from quick summaries of papers to detailed derivations of algorithms. They are living documents that I update as my understanding deepens or as new research clarifies old concepts. My goal is to bridge the gap between abstract theory and the practical intuition needed to apply these methods effectively.