<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Classical Potentials and Simulation Methods on Hunter Heidenreich | ML Research Scientist</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/</link><description>Recent content in Classical Potentials and Simulation Methods on Hunter Heidenreich | ML Research Scientist</description><image><title>Hunter Heidenreich | ML Research Scientist</title><url>https://hunterheidenreich.com/img/avatar.webp</url><link>https://hunterheidenreich.com/img/avatar.webp</link></image><generator>Hugo -- 0.147.7</generator><language>en-US</language><copyright>2026 Hunter Heidenreich</copyright><lastBuildDate>Sat, 11 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/index.xml" rel="self" type="application/rss+xml"/><item><title>Stillinger-Weber Potential for Silicon Simulation</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/stillinger-weber-1985/</link><pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/stillinger-weber-1985/</guid><description>The 1985 paper introducing the Stillinger-Weber potential, a 3-body interaction model for molecular dynamics of tetrahedral semiconductors.</description><content:encoded><![CDATA[<h2 id="core-methodological-contribution">Core Methodological Contribution</h2>
<p>This is a <strong>Method</strong> paper.</p>
<p>Its primary contribution is the formulation of the <strong>Stillinger-Weber potential</strong>, a non-additive potential energy function designed to model tetrahedral semiconductors. The paper also uses molecular dynamics simulation to explore physical properties of silicon in both crystalline and liquid phases, but the methodological contribution (the potential architecture) is what enabled subsequent research on covalent materials.</p>
<h2 id="the-failure-of-pair-potentials-in-silicon">The Failure of Pair Potentials in Silicon</h2>
<p>The authors aimed to simulate the melting and liquid properties of tetrahedral semiconductors (Silicon and Germanium).</p>
<ul>
<li><strong>The Problem:</strong> Standard pair potentials (like Lennard-Jones) favor close-packed structures (12 nearest neighbors) and cannot stabilize the open diamond structure (4 nearest neighbors) of Silicon.</li>
<li><strong>The Gap:</strong> Earlier classical potentials lacked the flexibility to describe the profound structural change in which Silicon contracts upon melting (coordination number increases from 4 to &gt;6) while becoming metallic.</li>
<li><strong>The Goal:</strong> To construct a potential that spans the entire configuration space, describing both the rigid crystal and the diffusive liquid, without requiring quantum mechanical calculations.</li>
</ul>
<h2 id="the-three-body-interaction-novelty">The Three-Body Interaction Novelty</h2>
<p>The core novelty is the introduction of a stabilizing <strong>three-body interaction term</strong> ($v_3$) to the potential energy function.</p>
<ul>
<li><strong>3-Body Term:</strong> Explicitly penalizes deviations from the ideal tetrahedral angle ($\cos \theta_t = -1/3$).</li>
<li><strong>Unified Model:</strong> This potential handles bond breaking and reforming, allowing for the simulation of melting and liquid diffusion; earlier &ldquo;Keating&rdquo;-type potentials could describe only small elastic deformations.</li>
<li><strong>Mapping Technique:</strong> The application of &ldquo;steepest-descent mapping&rdquo; to quench dynamical configurations into their underlying &ldquo;inherent structures&rdquo; (local minima), revealing the fundamental topology of the liquid energy landscape.</li>
</ul>
<h2 id="molecular-dynamics-validation">Molecular Dynamics Validation</h2>
<p>The authors performed Molecular Dynamics (MD) simulations using the proposed potential.</p>
<ul>
<li><strong>System:</strong> 216 Silicon atoms in a cubic cell with periodic boundary conditions.</li>
<li><strong>State Points:</strong> Fixed density $\rho = 2.53 \text{ g/cm}^3$ (matching experimental liquid density at melting).</li>
<li><strong>Process:</strong>
<ol>
<li>Start with diamond crystal at low temperature.</li>
<li>Systematically heat to induce spontaneous nucleation and melting.</li>
<li>Equilibrate the liquid.</li>
<li>Periodically map configurations to potential minima (inherent structures) using steepest descent.</li>
</ol>
</li>
</ul>
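<p>The steepest-descent mapping in step 4 amounts to following $-\nabla\Phi$ until the gradient vanishes, which assigns each configuration to the local minimum of its basin. A minimal sketch on a toy one-dimensional double well (the toy potential, learning rate, and tolerance here are our own illustration, not the paper's procedure):</p>

```python
import numpy as np

def quench(x0, grad, lr=1e-3, tol=1e-10, max_steps=200_000):
    """Map a configuration to its inherent structure: follow -grad(Phi)
    until the gradient (numerically) vanishes at a local minimum."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= lr * g
    return x

# Toy 1D double well Phi(x) = (x**2 - 1)**2 with minima at x = -1 and x = +1:
# each starting point quenches into the basin it already occupies.
grad = lambda x: 4 * x * (x**2 - 1)
print(quench(np.array([0.4]), grad), quench(np.array([-0.4]), grad))
```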
<h2 id="phase-topology-and-inverse-lindemann-criterion">Phase Topology and Inverse Lindemann Criterion</h2>
<ul>
<li><strong>Validation:</strong> The potential successfully stabilizes the diamond structure as the global minimum at zero pressure.</li>
<li><strong>Liquid Structure:</strong> The simulated liquid pair-correlation function $g(r)$ and structure factor $S(k)$ qualitatively match experimental diffraction data, including the characteristic shoulder on the structure factor peak.</li>
<li><strong>Inherent Structure:</strong> The liquid possesses a temperature-independent inherent structure (amorphous network) hidden beneath thermal vibrations.</li>
<li><strong>Melting/Freezing Criteria:</strong> The study proposes an &ldquo;Inverse Lindemann Criterion&rdquo;: while crystals melt when vibration amplitude exceeds ~0.19 lattice spacings, liquids freeze when atom displacements from their inherent minima drop below ~0.30 neighbor spacings.</li>
</ul>
<h2 id="limitations-and-energy-scale-problem">Limitations and Energy Scale Problem</h2>
<p>The authors acknowledge a quantitative energy scale discrepancy. To match the observed melting temperature of Si ($1410°$C), $\epsilon$ would need to be approximately 42 kcal/mol, considerably less than the 50 kcal/mol required to reproduce the correct cohesive energy of the crystal. The authors suggest this could be resolved either by further optimization of $v_2$ and $v_3$, or by adding position-independent single-particle terms $v_1 \approx -16$ kcal/mol arising from the electronic structure. Adding $v_1$ terms only affects the temperature scale and has no influence on local structure at a given reduced temperature.</p>
<p>The simulated liquid coordination number (8.07) is also higher than the experimentally reported value of approximately 6.4, though the authors note that the experimental definition of &ldquo;nearest neighbors&rdquo; was not precisely stated.</p>
<h2 id="bonding-statistics-in-inherent-structures">Bonding Statistics in Inherent Structures</h2>
<p>Analysis of potential-energy minima (inherent structures) using a bond cutoff of $r/\sigma = 1.40$ reveals the coordination distribution in the liquid:</p>
<table>
  <thead>
      <tr>
          <th style="text-align: left">Coordination Number</th>
          <th style="text-align: left">Fraction of Atoms</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td style="text-align: left">4</td>
          <td style="text-align: left">0.201</td>
      </tr>
      <tr>
          <td style="text-align: left">5</td>
          <td style="text-align: left">0.568</td>
      </tr>
      <tr>
          <td style="text-align: left">6</td>
          <td style="text-align: left">0.205</td>
      </tr>
      <tr>
          <td style="text-align: left">7</td>
          <td style="text-align: left">0.024</td>
      </tr>
  </tbody>
</table>
<p>Five-coordinate atoms dominate the liquid&rsquo;s inherent structure, with four- and six-coordinate atoms each accounting for about 20% of the population. The three-body interactions prevent any occurrence of coordination numbers near 12 that would indicate local close packing.</p>
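<p>A coordination count of this kind reduces to thresholding a pairwise distance matrix at the bond cutoff. A minimal sketch (open boundaries and the toy tetrahedral geometry are our own; the paper applies the $r/\sigma = 1.40$ cutoff to quenched configurations under periodic boundary conditions):</p>

```python
import numpy as np

def coordination_numbers(pos, cutoff):
    """Neighbors within `cutoff` of each atom (open boundaries, units of sigma)."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)   # an atom is not its own neighbor
    return (dist < cutoff).sum(axis=1)

# Toy check: one atom at the center of a regular tetrahedron of unit bond
# length has coordination 4; each vertex sees only the central atom.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
pos = np.vstack([np.zeros((1, 3)), verts])
print(coordination_numbers(pos, cutoff=1.2))   # [4 1 1 1 1]
```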
<hr>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<h3 id="algorithms">Algorithms</h3>
<ul>
<li><strong>Integration:</strong> Equations of motion integrated using a <strong>fifth-order Gear algorithm</strong>.</li>
<li><strong>Time Step:</strong> $\Delta t = 5 \times 10^{-3} \tau$ (approx $3.83 \times 10^{-16}$ s), where $\tau = \sigma(m/\epsilon)^{1/2} = 7.6634 \times 10^{-14}$ s.</li>
<li><strong>Minimization:</strong> Steepest-descent mapping utilized <strong>Newton&rsquo;s method</strong> to find limiting solutions ($\nabla \Phi = 0$).</li>
</ul>
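<p>The unit bookkeeping above is easy to verify: plugging the quoted $\sigma$ and $\epsilon$ together with the atomic mass of Si (our own input) into $\tau = \sigma(m/\epsilon)^{1/2}$ recovers the quoted time scales to within a fraction of a percent:</p>

```python
import math

# Check the reduced time unit tau = sigma * sqrt(m / epsilon) in CGS units.
sigma = 0.20951e-7            # cm   (0.20951 nm)
epsilon = 3.4723e-12          # erg  (= 50 kcal/mol per atom, as quoted)
m_si = 28.0855 * 1.6605e-24   # g    (Si atomic mass; our own input)

tau = sigma * math.sqrt(m_si / epsilon)
dt = 5e-3 * tau               # the paper's time step
print(f"tau = {tau:.4e} s, dt = {dt:.2e} s")
# tau ~ 7.66e-14 s and dt ~ 3.8e-16 s, matching the values quoted above.
```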
<h3 id="models">Models</h3>
<p>To reproduce this work, one must implement the potential $\Phi = \sum v_2 + \sum v_3$ with the exact functional forms and parameters provided.</p>
<figure class="post-figure center ">
    <img src="/img/notes/chemistry/stillinger-weber-potential.webp"
         alt="Stillinger-Weber potential visualization"
         title="Stillinger-Weber potential visualization"
         
         
         loading="lazy"
         class="post-image">
    
    <figcaption class="post-caption">Left: Two-body radial potential $v_2(r)$ showing the characteristic well at $r_{min} \approx 1.12\sigma$. Right: Three-body angular penalty $h(r_{min}, r_{min}, \theta)$ demonstrating the minimum at the tetrahedral angle (109.5°), which enforces the diamond crystal structure.</figcaption>
    
</figure>

<h4 id="reduced-units">Reduced Units</h4>
<ul>
<li>$\sigma = 0.20951 \text{ nm}$</li>
<li>$\epsilon = 50 \text{ kcal/mol} = 3.4723 \times 10^{-12} \text{ erg}$</li>
</ul>
<h4 id="two-body-term-v_2">Two-Body Term ($v_2$)</h4>
<p>$$
v_2(r_{ij}) = \epsilon A (B r_{ij}^{-p} - r_{ij}^{-q}) \exp[(r_{ij} - a)^{-1}] \quad \text{for } r_{ij} &lt; a
$$</p>
<p><em>(Vanishes for $r \geq a$)</em></p>
<h4 id="three-body-term-v_3">Three-Body Term ($v_3$)</h4>
<p>$$
v_3(r_i, r_j, r_k) = \epsilon [h(r_{ij}, r_{ik}, \theta_{jik}) + h(r_{ji}, r_{jk}, \theta_{ijk}) + h(r_{ki}, r_{kj}, \theta_{ikj})]
$$</p>
<p>where:</p>
<p>$$
h(r_{ij}, r_{ik}, \theta_{jik}) = \lambda \exp[\gamma(r_{ij}-a)^{-1} + \gamma(r_{ik}-a)^{-1}] (\cos\theta_{jik} + \frac{1}{3})^2
$$</p>
<p><em>(Vanishes if distances $\geq a$)</em></p>
<h4 id="parameters">Parameters</h4>
<table>
  <thead>
      <tr>
          <th style="text-align: left">Parameter</th>
          <th style="text-align: left">Value</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td style="text-align: left">$A$</td>
          <td style="text-align: left">$7.049556277$</td>
      </tr>
      <tr>
          <td style="text-align: left">$B$</td>
          <td style="text-align: left">$0.6022245584$</td>
      </tr>
      <tr>
          <td style="text-align: left">$p$</td>
          <td style="text-align: left">$4$</td>
      </tr>
      <tr>
          <td style="text-align: left">$q$</td>
          <td style="text-align: left">$0$</td>
      </tr>
      <tr>
          <td style="text-align: left">$a$</td>
          <td style="text-align: left">$1.80$</td>
      </tr>
      <tr>
          <td style="text-align: left">$\lambda$</td>
          <td style="text-align: left">$21.0$</td>
      </tr>
      <tr>
          <td style="text-align: left">$\gamma$</td>
          <td style="text-align: left">$1.20$</td>
      </tr>
  </tbody>
</table>
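<p>The functional forms and tabulated parameters transcribe directly into code. A NumPy sketch of $v_2$ and the angular function $h$ (in units of $\epsilon$; the sanity checks at the end are our own, chosen to match the well position and tetrahedral minimum noted in the figure caption):</p>

```python
import numpy as np

# Stillinger-Weber parameters (reduced units, from the table above).
A, B, p, q, a = 7.049556277, 0.6022245584, 4, 0, 1.80
lam, gamma = 21.0, 1.20

def v2(r):
    """Two-body term (in units of epsilon); vanishes identically for r >= a."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    inside = r < a
    ri = r[inside]
    out[inside] = A * (B * ri**-p - ri**-q) * np.exp(1.0 / (ri - a))
    return out

def h(r_ij, r_ik, cos_theta):
    """Three-body angular penalty h(r_ij, r_ik, theta_jik) in units of epsilon."""
    if r_ij >= a or r_ik >= a:
        return 0.0
    return (lam * np.exp(gamma / (r_ij - a) + gamma / (r_ik - a))
            * (cos_theta + 1.0 / 3.0) ** 2)

# Sanity checks: v2 has its well near r ~ 1.12 sigma, and the angular
# penalty vanishes exactly at the tetrahedral angle (cos theta = -1/3).
r = np.linspace(1.0, 1.79, 500)
print(round(float(r[np.argmin(v2(r))]), 2))   # ~1.12
print(h(1.12, 1.12, -1.0 / 3.0))              # 0.0
```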
<h3 id="evaluation">Evaluation</h3>
<p>The paper evaluates the model against experimental diffraction data.</p>
<table>
  <thead>
      <tr>
          <th style="text-align: left">Metric</th>
          <th style="text-align: left">Simulated Value</th>
          <th style="text-align: left">Experimental Value</th>
          <th style="text-align: left">Notes</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td style="text-align: left"><strong>Melting Point ($T_m^*$)</strong></td>
          <td style="text-align: left">$\approx 0.080$</td>
          <td style="text-align: left">N/A</td>
          <td style="text-align: left">Reduced units. Requires $\epsilon \approx 42$ kcal/mol to match real $T_m = 1410°$C, vs 50 kcal/mol for correct cohesive energy.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>Coordination (Liquid)</strong></td>
          <td style="text-align: left">$8.07$</td>
          <td style="text-align: left">$\approx 6.4$</td>
          <td style="text-align: left">Evaluated at first $g(r)$ minimum ($r/\sigma = 1.625$). Simulated value is higher than experiment.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>$S(k)$ First Peak</strong></td>
          <td style="text-align: left">$2.53$ $\AA^{-1}$</td>
          <td style="text-align: left">$2.80$ $\AA^{-1}$</td>
          <td style="text-align: left">From Table I.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>$S(k)$ Shoulder</strong></td>
          <td style="text-align: left">$3.25$ $\AA^{-1}$</td>
          <td style="text-align: left">$3.25$ $\AA^{-1}$</td>
          <td style="text-align: left">From Table I. Exact match with experiment.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>$S(k)$ Second Peak</strong></td>
          <td style="text-align: left">$5.35$ $\AA^{-1}$</td>
          <td style="text-align: left">$5.75$ $\AA^{-1}$</td>
          <td style="text-align: left">From Table I.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>$S(k)$ Third Peak</strong></td>
          <td style="text-align: left">$8.16$ $\AA^{-1}$</td>
          <td style="text-align: left">$8.50$ $\AA^{-1}$</td>
          <td style="text-align: left">From Table I.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>$S(k)$ Fourth Peak</strong></td>
          <td style="text-align: left">$10.60$ $\AA^{-1}$</td>
          <td style="text-align: left">$11.20$ $\AA^{-1}$</td>
          <td style="text-align: left">From Table I.</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>Entropy of Melting ($\Delta S / N k_B$)</strong></td>
          <td style="text-align: left">$\approx 3.7$</td>
          <td style="text-align: left">$3.25$</td>
          <td style="text-align: left">Simulated at constant volume; experimental at constant pressure (1 atm).</td>
      </tr>
  </tbody>
</table>
<hr>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Stillinger, F. H., &amp; Weber, T. A. (1985). Computer simulation of local order in condensed phases of silicon. <em>Physical Review B</em>, 31(8), 5262-5271. <a href="https://doi.org/10.1103/PhysRevB.31.5262">https://doi.org/10.1103/PhysRevB.31.5262</a></p>
<p><strong>Publication</strong>: Physical Review B, 1985</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{stillingerComputerSimulationLocal1985,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span> = <span style="color:#e6db74">{Computer Simulation of Local Order in Condensed Phases of Silicon}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span> = <span style="color:#e6db74">{Stillinger, Frank H. and Weber, Thomas A.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span> = <span style="color:#ae81ff">1985</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">month</span> = apr,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span> = <span style="color:#e6db74">{Physical Review B}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span> = <span style="color:#e6db74">{31}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">number</span> = <span style="color:#e6db74">{8}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span> = <span style="color:#e6db74">{5262--5271}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span> = <span style="color:#e6db74">{American Physical Society}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span> = <span style="color:#e6db74">{10.1103/PhysRevB.31.5262}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div>]]></content:encoded></item><item><title>Second-Order Langevin Equation for Field Simulations</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/second-order-langevin-1987/</link><pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/second-order-langevin-1987/</guid><description>Hyperbolic Algorithm adds second-order derivatives to Langevin dynamics, reducing systematic errors to O(ε²) for lattice field simulations.</description><content:encoded><![CDATA[<h2 id="contribution-and-paper-type">Contribution and Paper Type</h2>
<p>This is a <strong>Methodological Paper</strong> ($\Psi_{\text{Method}}$). It proposes a novel stochastic algorithm, the Hyperbolic Algorithm (HA), and validates its superior efficiency against the existing Langevin Algorithm (LA) through formal error analysis and numerical simulation. It contains significant theoretical derivation (Liouville dynamics) that serves primarily to justify the algorithmic performance claims.</p>
<h2 id="motivation-and-gaps-in-prior-work">Motivation and Gaps in Prior Work</h2>
<p>The standard Langevin Algorithm (LA) for numerical simulation of Euclidean field theories suffers from efficiency bottlenecks. The simplest Euler-discretization of the LA introduces systematic errors of $O(\epsilon)$ (where $\epsilon$ is the step size). To maintain accuracy, $\epsilon$ must be kept small, which increases the sweep-sweep correlation time (autocorrelation time), making simulations computationally expensive.</p>
<h2 id="core-novelty-second-order-dynamics">Core Novelty: Second-Order Dynamics</h2>
<p>The core contribution is the introduction of a <strong>second-order derivative in fictitious time</strong> to the stochastic equation. This converts the parabolic Langevin equation into a hyperbolic equation:</p>
<p>$$
\begin{aligned}
\frac{\partial^{2}\phi}{\partial t^{2}}+\gamma\frac{\partial\phi}{\partial t}=-\frac{\partial S}{\partial\phi}+\eta
\end{aligned}
$$</p>
<h3 id="equation-comparison">Equation Comparison</h3>
<p>The key difference from the standard (first-order) Langevin equation:</p>
<table>
  <thead>
      <tr>
          <th style="text-align: left">Equation Type</th>
          <th style="text-align: left">Formula</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td style="text-align: left"><strong>Hyperbolic (Second Order)</strong></td>
          <td style="text-align: left">$$\frac{\partial^{2}\phi}{\partial t^{2}}+\gamma\frac{\partial\phi}{\partial t}=-\frac{\partial S}{\partial\phi}+\eta$$</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>Langevin (First Order)</strong></td>
          <td style="text-align: left">$$\frac{\partial\phi}{\partial t}=-\frac{\partial S}{\partial\phi}+\eta$$</td>
      </tr>
  </tbody>
</table>
<p>The standard Langevin equation corresponds to the overdamped limit where the acceleration term is absent. Physically, the Hyperbolic equation can be viewed as microcanonical equations of motion with an added friction term.</p>
<h3 id="key-innovations">Key Innovations</h3>
<ul>
<li><strong>Higher Order Accuracy</strong>: The simplest discretization of this equation leads to systematic errors of only $O(\epsilon^2)$ compared to $O(\epsilon)$ for LA.</li>
<li><strong>Tunable Damping</strong>: The addition of the damping parameter $\gamma$ allows tuning to minimize autocorrelation tails.</li>
<li><strong>Uniform Evolution</strong>: The method evolves structures of different wavelengths more uniformly than LA due to the specific dissipation structure.</li>
</ul>
<h2 id="methodology-and-experiments">Methodology and Experiments</h2>
<p>The author validated the method using the <strong>XY Model</strong> on 2D lattices.</p>
<ul>
<li><strong>System</strong>: Euclidean action $S = -\sum_{x,\mu} \cos(\theta_{x+\mu} - \theta_x)$.</li>
<li><strong>Setup</strong>:
<ul>
<li>Lattice sizes: $15^2$ (helical boundary conditions) and $30^2$.</li>
<li>$\beta$ range: 0.9 to 1.2 (crossing the critical point $\approx 1.0$).</li>
<li>Run length: &gt;100,000 updates in equilibrium.</li>
</ul>
</li>
<li><strong>Metrics</strong>:
<ul>
<li><strong>Autocorrelation time ($\tau$)</strong>: Defined as the number of updates for the time-correlation function to drop to 10% of its initial value.</li>
<li><strong>Systematic Error</strong>: Measured via deviation of average action from Monte Carlo values.</li>
</ul>
</li>
</ul>
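<p>The 10%-crossing definition of $\tau$ is straightforward to implement. A sketch, checked on a synthetic AR(1) series with a known decay rate (the test series, maximum lag, and thresholds are our own, not the paper's):</p>

```python
import numpy as np

def tau_10pct(series, max_lag=500):
    """Lag at which the normalized autocorrelation first drops below 0.10."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    for lag in range(1, max_lag):
        c = np.dot(x[:-lag], x[lag:]) / (len(x) - lag)
        if c < 0.10 * c0:
            return lag
    return max_lag

# Synthetic check: an AR(1) series with acf(t) = 0.95**t crosses 0.10
# near t = ln(0.1) / ln(0.95) ~ 45.
rng = np.random.default_rng(1)
noise = rng.standard_normal(200_000)
xs = np.empty_like(noise)
x = 0.0
for i in range(len(noise)):
    x = 0.95 * x + noise[i]
    xs[i] = x
print(tau_10pct(xs))   # roughly 45
```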
<h2 id="results-and-conclusions">Results and Conclusions</h2>
<ul>
<li><strong>Efficiency</strong>: The Hyperbolic Algorithm (HA) is far more efficient. For equal systematic errors, sweep-sweep correlation times are significantly lower than LA.</li>
<li><strong>Error Scaling</strong>: Numerical results confirmed that HA step size $\epsilon_H = 0.1$ yields systematic errors comparable to LA step size $\epsilon_L \approx 0.008$ ($O(\epsilon^2)$ vs $O(\epsilon)$ scaling).</li>
<li><strong>Speedup</strong>: In the disordered phase, HA is roughly $\epsilon_H / \epsilon_L$ times faster (approximately a factor of 12.5 for $\epsilon_H = 0.1$, $\epsilon_L = 0.008$). In the ordered phase, efficiency gains increase with distance scale, reaching factors of 20 or more for long-range correlations.</li>
<li><strong>Optimal Damping</strong>: For the XY model, the optimal damping parameter was found to be $\gamma \approx 0.4$.</li>
</ul>
<hr>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<h3 id="algorithms">Algorithms</h3>
<p><strong>1. The Hyperbolic Algorithm (HA)</strong></p>
<p>The discretized update equations for scalar fields are:</p>
<p>$$
\begin{aligned}
\pi_{t+\epsilon} - \pi_{t} &amp;= -\epsilon\gamma\pi_{t} - \epsilon\frac{\partial S}{\partial\phi_{t}} + \sqrt{2\epsilon\gamma/\beta}\xi_{t} \\
\phi_{t+\epsilon} - \phi_{t} &amp;= \epsilon\pi_{t+\epsilon}
\end{aligned}
$$</p>
<ul>
<li><strong>Variables</strong>: $\phi$ is the field, $\pi$ is the conjugate momentum ($\dot{\phi}$).</li>
<li><strong>Parameters</strong>: $\epsilon$ (step size), $\gamma$ (damping constant).</li>
<li><strong>Noise</strong>: $\xi$ is Gaussian noise with $\langle\xi_x \xi_y\rangle = \delta_{x,y}$.</li>
<li><strong>Storage</strong>: Requires storing both $\phi$ and $\pi$ vectors.</li>
</ul>
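<p>The update pair above can be sketched in a few lines: kick the momentum with force, friction, and noise, then move the field with the <em>updated</em> momentum. The free-field sanity check below is our own (the paper's application is the XY model); for $S = \frac{1}{2}\sum_x \phi_x^2$ the target density $e^{-\beta S}$ gives $\langle\phi^2\rangle = 1/\beta$ up to $O(\epsilon^2)$ corrections:</p>

```python
import numpy as np

def ha_step(phi, pi, grad_S, eps, gamma, beta, rng):
    """One Hyperbolic Algorithm update: momentum kick (friction + force +
    noise), then field move using the new momentum pi_{t+eps}."""
    noise = np.sqrt(2 * eps * gamma / beta) * rng.standard_normal(phi.shape)
    pi = pi - eps * gamma * pi - eps * grad_S(phi) + noise
    phi = phi + eps * pi
    return phi, pi

# Free-field check: equilibrium <phi^2> should approach 1/beta.
rng = np.random.default_rng(0)
beta, eps, gamma = 2.0, 0.1, 0.4
phi, pi = np.zeros(1000), np.zeros(1000)
samples = []
for step in range(20_000):
    phi, pi = ha_step(phi, pi, lambda x: x, eps, gamma, beta, rng)
    if step > 2_000:                       # discard equilibration
        samples.append(np.mean(phi * phi))
print(round(float(np.mean(samples)), 2))   # close to 1/beta = 0.5
```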
<p><strong>2. Non-Abelian Generalization</strong></p>
<p>For Lie group elements $U$ with generators $T^a$:</p>
<p>$$
\begin{aligned}
\pi_{t+\epsilon}^a - \pi_{t}^a &amp;= -\epsilon\gamma\pi_{t}^a - \epsilon\delta^a S[U_t] + \sqrt{2\epsilon\gamma/\beta}\xi_{t}^a \\
U_{t+\epsilon} &amp;= e^{i\epsilon\pi_{t+\epsilon}^a T^a} U_t
\end{aligned}
$$</p>
<h3 id="theoretical-proof-of-oepsilon2-accuracy">Theoretical Proof of $O(\epsilon^2)$ Accuracy</h3>
<p>The derivation relies on the generalized Liouville equation for the probability distribution $P[\phi, \pi; t]$.</p>
<ol>
<li><strong>Transition Probability</strong>: The transition $W$ for one iteration is defined.</li>
<li><strong>Effective Liouville Operator</strong>: The evolution is written as $P(t+\epsilon) = \exp(\epsilon L_{\text{eff}}) P(t)$.</li>
<li><strong>Baker-Hausdorff Expansion</strong>: Using normal ordering of operators, the equilibrium distribution $P_{\text{eq}}$ is derived through $O(\epsilon^2)$:</li>
</ol>
<p>$$
\begin{aligned}
P_{\text{eq}} &amp;= \exp\left\lbrace-\frac{1}{2}\beta_{1}\sum_{x}\pi_{x}^{2} - \beta S[\phi] + \frac{1}{2}\epsilon\beta\sum_{x}\pi_{x}S_{x} + \epsilon^{2}G + O(\epsilon^3)\right\rbrace
\end{aligned}
$$</p>
<p>where $\beta_1 = \beta\left(1 - \frac{1}{2}\epsilon\gamma\right)$.</p>
<ol start="4">
<li><strong>Effective Action</strong>: Integrating out $\pi$ yields the effective action for $\phi$:</li>
</ol>
<p>$$
\begin{aligned}
S_{\text{eff}}[\phi] &amp;= S[\phi] - \frac{1}{8}\epsilon^2 \sum_x S_x^2 + \dots
\end{aligned}
$$</p>
<p>The absence of $O(\epsilon)$ terms proves the higher-order accuracy.</p>
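<p>Spelling out the integration step (our own check): the $\pi$-dependent part of the exponent integrates out by completing the square, using $\beta_1 \approx \beta$ at this order,</p>
<p>$$
\int d\pi_x \, \exp\left(-\tfrac{1}{2}\beta_1\pi_x^2 + \tfrac{1}{2}\epsilon\beta\,\pi_x S_x\right) \propto \exp\left(\frac{\epsilon^2\beta^2 S_x^2}{8\beta_1}\right) \approx \exp\left(\frac{\epsilon^2\beta S_x^2}{8}\right),
$$</p>
<p>so $\beta S_{\text{eff}} = \beta S - \frac{1}{8}\epsilon^2\beta\sum_x S_x^2$, reproducing the displayed correction; further $O(\epsilon^2)$ contributions (e.g. from $G$) sit in the ellipsis.</p>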
<h3 id="evaluation">Evaluation</h3>
<ul>
<li><strong>Model</strong>: XY Model (2D)</li>
<li><strong>Hamiltonian</strong>: $H = \frac{1}{2}\sum \pi^2 + S[\phi]$ where $S = -\sum \cos(\Delta \theta)$.</li>
<li><strong>Observables</strong>:
<ul>
<li>$\Gamma_n = \cos(\theta_{m+n} - \theta_m)$ (averaged over lattice $m$).</li>
</ul>
</li>
<li><strong>Comparisons</strong>:
<ul>
<li><strong>LA Step</strong>: $\epsilon_L \approx 0.005 - 0.02$.</li>
<li><strong>HA Step</strong>: $\epsilon_H \approx 0.1 - 0.2$.</li>
<li><strong>Equivalence</strong>: $\epsilon_H = 0.1$ matches error of $\epsilon_L \approx 0.008$.</li>
</ul>
</li>
</ul>
<hr>
<h2 id="terminology-note">Terminology Note</h2>
<p>The naming conventions in this paper differ from those commonly used in molecular dynamics (MD). The following table provides a cross-field mapping:</p>
<table>
  <thead>
      <tr>
          <th style="text-align: left">Concept</th>
          <th style="text-align: left"><strong>Field Theory (This Paper)</strong></th>
          <th style="text-align: left"><strong>Molecular Dynamics</strong></th>
          <th style="text-align: left"><strong>Mathematics</strong></th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td style="text-align: left"><strong>Equation 1</strong></td>
          <td style="text-align: left">&ldquo;Langevin Equation&rdquo;</td>
          <td style="text-align: left">Brownian Dynamics (BD)</td>
          <td style="text-align: left">Overdamped Langevin</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>Equation 2</strong></td>
          <td style="text-align: left">&ldquo;Hyperbolic Equation&rdquo;</td>
          <td style="text-align: left">Langevin Dynamics (LD)</td>
          <td style="text-align: left">Underdamped Langevin</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>Integrator 1</strong></td>
          <td style="text-align: left">Euler Discretization</td>
          <td style="text-align: left">Euler Integrator</td>
          <td style="text-align: left">Euler-Maruyama</td>
      </tr>
      <tr>
          <td style="text-align: left"><strong>Integrator 2</strong></td>
          <td style="text-align: left">Hyperbolic Algorithm (HA)</td>
          <td style="text-align: left">Velocity Verlet / Leapfrog</td>
          <td style="text-align: left">Quasi-Symplectic Splitting</td>
      </tr>
  </tbody>
</table>
<p><strong>Key insight</strong>: The paper&rsquo;s &ldquo;Hyperbolic Algorithm&rdquo; is mathematically equivalent to Langevin Dynamics with a Leapfrog/Verlet integrator, commonly used in MD. The baseline &ldquo;Langevin Algorithm&rdquo; corresponds to Brownian Dynamics. The term &ldquo;Langevin equation&rdquo; is overloaded: field theorists often use it for overdamped dynamics (no inertia), while chemists assume it includes momentum ($F=ma$).</p>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Horowitz, A. M. (1987). The Second Order Langevin Equation and Numerical Simulations. <em>Nuclear Physics B</em>, 280, 510-522. <a href="https://doi.org/10.1016/0550-3213(87)90159-3">https://doi.org/10.1016/0550-3213(87)90159-3</a></p>
<p><strong>Publication</strong>: Nuclear Physics B 1987</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{horowitzSecondOrderLangevin1987,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span> = <span style="color:#e6db74">{The Second Order {{Langevin}} Equation and Numerical Simulations}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span> = <span style="color:#e6db74">{Horowitz, Alan M.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span> = <span style="color:#ae81ff">1987</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">month</span> = jan,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span> = <span style="color:#e6db74">{Nuclear Physics B}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span> = <span style="color:#e6db74">{280}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span> = <span style="color:#e6db74">{510--522}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">issn</span> = <span style="color:#e6db74">{05503213}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span> = <span style="color:#e6db74">{10.1016/0550-3213(87)90159-3}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div>]]></content:encoded></item><item><title>Evans 1986: Thermal Conductivity of Lennard-Jones Fluid</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/evans-thermal-conductivity-1986/</link><pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/evans-thermal-conductivity-1986/</guid><description>A 1986 validation of the Evans NEMD method for simulating heat flow, identifying long-time tail anomalies near the critical point.</description><content:encoded><![CDATA[<h2 id="methodological-validation-and-physical-discovery">Methodological Validation and Physical Discovery</h2>
<p>This is primarily a <strong>Methodological Paper ($\Psi_{\text{Method}}$)</strong>, with a significant secondary component of <strong>Discovery ($\Psi_{\text{Discovery}}$)</strong>.</p>
<p>It focuses on validating a specific algorithm (the &ldquo;Evans method&rdquo;) for Non-Equilibrium Molecular Dynamics (NEMD) by comparing its results against experimental benchmarks. However, it also uncovers physical anomalies, specifically &ldquo;long-time tails&rdquo; in the heat flux autocorrelation function that deviate significantly from theoretical predictions, marking a discovery about the physics of the Lennard-Jones fluid itself.</p>
<h2 id="flow-gradients-and-boundary-limitations">Flow Gradients and Boundary Limitations</h2>
<p>The primary motivation is to overcome the limitations of simulating heat flow using physical boundaries (e.g., walls at different temperatures), which causes severe interpretive difficulties due to density and temperature gradients.</p>
<p>The &ldquo;Evans method&rdquo; uses a fictitious external field to induce heat flow in a periodic, homogeneous system. This paper serves to:</p>
<ol>
<li>Validate this method across a wide range of state points (temperatures and densities) beyond the triple point.</li>
<li>Investigate the system&rsquo;s behavior near the critical point, where transport properties are known to be anomalous.</li>
</ol>
<h2 id="core-innovations-of-the-evans-algorithm">Core Innovations of the Evans Algorithm</h2>
<p>The core contribution is the rigorous stress-testing of the <strong>homogeneous heat flow algorithm</strong> (Evans method) combined with a <strong>Gaussian thermostat</strong>.</p>
<p>Specific novel insights include:</p>
<ul>
<li><strong>Linearity Validation</strong>: Establishing that, away from phase boundaries, the effective thermal conductivity is a monotonic, virtually linear function of the external field, justifying the extrapolation to zero field.</li>
<li><strong>Critical Anomaly Detection</strong>: Finding that near the critical point, conductivity becomes a non-monotonic function of the field, challenging standard simulation approaches in this regime.</li>
<li><strong>Tail Amplitude Discovery</strong>: Demonstrating that the &ldquo;long-time tails&rdquo; of the heat flux autocorrelation function have amplitudes roughly 6 times larger than those predicted by mode-coupling theory.</li>
</ul>
<h2 id="nemd-simulation-setup">NEMD Simulation Setup</h2>
<p>The author performed <strong>Non-Equilibrium Molecular Dynamics (NEMD)</strong> simulations using the Lennard-Jones potential.</p>
<ul>
<li><strong>System</strong>: Mostly $N=108$ particles, with some checks using $N=256$ to test size dependence.</li>
<li><strong>Thermostat</strong>: A Gaussian thermostat was used to keep the kinetic energy (temperature) constant.</li>
<li><strong>State Points</strong>:
<ul>
<li><strong>Critical Isotherm</strong>: $T=1.35$, varying density.</li>
<li><strong>Supercritical Isotherm</strong>: $T=2.0$.</li>
<li><strong>Freezing Line</strong>: Two points ($T=2.74, \rho=1.113$ and $T=2.0, \rho=1.04$).</li>
</ul>
</li>
<li><strong>Validation</strong>: Results were compared against <strong>experimental data for Argon</strong> (using standard LJ parameters).</li>
<li><strong>Ablation</strong>:
<ul>
<li><strong>Field Strength ($F$)</strong>: Varied to check for linearity/non-linearity.</li>
<li><strong>System Size ($N$)</strong>: Comparison between 108 and 256 particles to rule out finite-size artifacts.</li>
</ul>
</li>
</ul>
<h2 id="linearity-regimes-and-long-time-tail-anomalies">Linearity Regimes and Long-Time Tail Anomalies</h2>
<ul>
<li><strong>Agreement with Experiment</strong>: The Evans method yields thermal conductivities in broad agreement with experimental Argon data for most state points.</li>
<li><strong>Linearity</strong>: Away from the critical point, conductivity is a virtually linear function of the field strength $F$, allowing for accurate zero-field extrapolation.</li>
<li><strong>Critical Region Failure</strong>: Near the critical point ($T=1.35, \rho=0.4$), the method struggles; the conductivity is non-monotonic with respect to $F$, and the zero-field extrapolation underestimates the experimental value by ~11%.</li>
<li><strong>Long-Time Tails</strong>: The decay of the heat flux autocorrelation function follows a $t^{-3/2}$ tail (consistent with mode-coupling theory), but the <strong>amplitude is ~6x larger</strong> than predicted.</li>
<li><strong>Phase Hysteresis</strong>: In high-density regions near the freezing line, the system exhibits hysteresis and bi-stability between solid and liquid phases depending on the field strength.</li>
</ul>
<hr>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<h3 id="data">Data</h3>
<p>The simulation relies on the Lennard-Jones (LJ) potential to model Argon. No external training data is used; the &ldquo;data&rdquo; consists of the physical constants defining the system.</p>
<table>
  <thead>
      <tr>
          <th>Parameter</th>
          <th>Value/Description</th>
          <th>Notes</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Potential</strong></td>
          <td>$\Phi(q)=4(q^{-12}-q^{-6})$</td>
          <td>Standard LJ 12-6 potential</td>
      </tr>
      <tr>
          <td><strong>Cutoff</strong></td>
          <td>$r_c = 2.5$</td>
          <td>Truncated at 2.5 distance units</td>
      </tr>
      <tr>
          <td><strong>Comparison</strong></td>
          <td>Argon Experimental Data</td>
          <td>Sourced from NBS recommended values</td>
      </tr>
  </tbody>
</table>
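<p>As a concrete reference, the truncated 12-6 potential in the table can be written in a few lines (reduced LJ units, plain truncation at $r_c = 2.5$ with no energy shift; an illustrative sketch, not the paper's code):</p>

```python
import numpy as np

def lj_potential(r, r_c=2.5):
    """Truncated 12-6 Lennard-Jones potential in reduced units:
    Phi(q) = 4(q^-12 - q^-6) for r < r_c, zero beyond the cutoff."""
    r = np.asarray(r, dtype=float)
    phi = 4.0 * (r**-12 - r**-6)
    return np.where(r < r_c, phi, 0.0)

# The 12-6 minimum sits at r = 2^(1/6) with depth -1 (reduced units).
r_min = 2.0 ** (1.0 / 6.0)
print(lj_potential(r_min))   # ~ -1.0
print(lj_potential(3.0))     # 0.0 (beyond the cutoff)
```

<p>Plain truncation leaves a small discontinuity at $r_c$; production codes often shift the potential so it vanishes continuously at the cutoff.</p>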
<h3 id="algorithms">Algorithms</h3>
<p>The core algorithm is the <strong>Evans Homogeneous Heat Flow</strong> method. To reproduce this, one must implement the specific Equations of Motion (EOM) derived from linear response theory.</p>
<p><strong>Equations of Motion:</strong></p>
<p>The trajectories are generated by:
$$
\begin{aligned}
\dot{q}_i &amp;= \frac{p_i}{m} \\
\dot{p}_i &amp;= F_i^{\text{inter}} + (E_i - \bar{E})F(t) - \sum_{j} F_{ij}\,(q_{ij} \cdot F(t)) + \frac{1}{2N} \sum_{j,k} F_{jk}\,(q_{jk} \cdot F(t)) - \alpha p_i
\end{aligned}
$$</p>
<p>Where:</p>
<ul>
<li>$F(t)$ is the fictitious external field driving heat flow.</li>
<li>$E_i$ is the instantaneous energy of particle $i$.</li>
<li>$\alpha$ is the <strong>Gaussian thermostat multiplier</strong>, calculated at every step so that the kinetic energy (and hence the temperature) is strictly conserved:
$$\alpha = \frac{\sum_i \dot{p}_i^{\,\text{unthermostatted}} \cdot p_i}{\sum_i p_i \cdot p_i}$$
where the numerator uses the momentum derivative including every term of the equation of motion except $-\alpha p_i$.</li>
</ul>
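<p>The constraint multiplier is cheap to evaluate at each step. A minimal sketch (hypothetical arrays standing in for the interatomic-force and field-driving terms, not the paper's code):</p>

```python
import numpy as np

def gaussian_thermostat_alpha(p, pdot_unthermo):
    """Isokinetic (Gaussian) thermostat multiplier.

    Choosing alpha = sum_i pdot_i . p_i / sum_i p_i . p_i makes
    d/dt sum_i |p_i|^2 = 0 under pdot_i -> pdot_i - alpha * p_i,
    so the kinetic energy (temperature) is a constant of motion.
    """
    return np.sum(pdot_unthermo * p) / np.sum(p * p)

# With the constraint applied, the kinetic-energy derivative vanishes:
rng = np.random.default_rng(0)
p = rng.normal(size=(108, 3))      # momenta for N = 108 particles
pdot = rng.normal(size=(108, 3))   # stand-in for forces + fictitious-field terms
alpha = gaussian_thermostat_alpha(p, pdot)
dK_dt = np.sum((pdot - alpha * p) * p)
print(abs(dK_dt) < 1e-9)  # True: kinetic energy is conserved
```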
<p><strong>Conductivity Calculation:</strong></p>
<p>The zero-frequency limit is extrapolated as:
$$ \lambda = \lim_{F \to 0} \frac{J_Q}{FT} $$</p>
<p>The frequency-dependent conductivity relies on the heat-flux autocorrelation:
$$ \lambda(\omega) = \frac{V}{3k_B T^2} \int_0^\infty dt \, e^{i\omega t} \langle J_Q(t) \cdot J_Q(0) \rangle $$</p>
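<p>The zero-frequency Green-Kubo estimate can be sketched with a synthetic flux series (illustrative only; reduced units, and the integral is truncated at the length of the finite series):</p>

```python
import numpy as np

def autocorrelation(j):
    """Estimate <J(t) . J(0)> from a time series of heat-flux vectors j[t]."""
    n = len(j)
    return np.array([np.mean(np.sum(j[:n - t] * j[t:], axis=-1))
                     for t in range(n)])

def green_kubo_lambda(j, dt, volume, temperature, k_b=1.0):
    """Zero-frequency (omega = 0) thermal conductivity,
    lambda = V / (3 k_B T^2) * integral <J_Q(t) . J_Q(0)> dt,
    with the integral done by the trapezoid rule over the finite series."""
    c = autocorrelation(j)
    integral = dt * (0.5 * c[0] + c[1:-1].sum() + 0.5 * c[-1])
    return volume / (3.0 * k_b * temperature**2) * integral

# Constant synthetic flux: C(t) = |J|^2 = 3 at every lag, so the
# trapezoid integral over 4 samples (dt = 1) is 9 and lambda = 3.
j_series = np.ones((4, 3))
print(green_kubo_lambda(j_series, dt=1.0, volume=1.0, temperature=1.0))  # 3.0
```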
<h3 id="models">Models</h3>
<p>The &ldquo;model&rdquo; here is the physical simulation setup.</p>
<ul>
<li><strong>Particle Count</strong>: $N = 108$ (primary), $N = 256$ (validation).</li>
<li><strong>Boundary Conditions</strong>: Periodic Boundary Conditions (PBC).</li>
<li><strong>Thermostat</strong>: Gaussian Isokinetic (Temperature is a constant of motion).</li>
</ul>
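<p>Under cubic periodic boundary conditions, pair separations use the minimum-image convention; a one-line sketch (illustrative, not the paper's code):</p>

```python
import numpy as np

def minimum_image(dr, box_length):
    """Minimum-image displacement under cubic periodic boundary conditions:
    wrap each Cartesian component of dr into [-L/2, L/2)."""
    return dr - box_length * np.round(dr / box_length)

L = 10.0
# Components wrapped into [-L/2, L/2): 9.0 -> -1.0, -6.0 -> 4.0, 0.5 -> 0.5
print(minimum_image(np.array([9.0, -6.0, 0.5]), L))
```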
<h3 id="evaluation">Evaluation</h3>
<p>The primary metric is the <strong>Thermal Conductivity</strong> ($\lambda$).</p>
<table>
  <thead>
      <tr>
          <th>Metric</th>
          <th>Definition</th>
          <th>Baseline</th>
          <th>Result</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Thermal Conductivity</strong></td>
          <td>Ratio of heat flux $J_Q$ to field $F$ (extrapolated to $F=0$)</td>
          <td>Experimental Argon (NBS Data)</td>
          <td>Good agreement away from critical point</td>
      </tr>
      <tr>
          <td><strong>Tail Amplitude</strong></td>
          <td>Coefficient of the $\omega^{1/2}$ term in frequency-dependent conductivity</td>
          <td>Mode-Coupling Theory ($\approx 0.05$)</td>
          <td>Simulation value $\approx 0.3$ (6x larger)</td>
      </tr>
  </tbody>
</table>
<h3 id="hardware">Hardware</h3>
<ul>
<li><strong>Requirements</strong>: The original 1986 hardware is obsolete; reproducing the study only requires a standard MD code capable of applying non-conservative (NEMD) forces.</li>
<li><strong>Compute Cost</strong>: Low by modern standards. 108 particles for $\sim 10^5$ to $10^6$ steps is trivial on modern CPUs.</li>
</ul>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Evans, D. J. (1986). Thermal conductivity of the Lennard-Jones fluid. <em>Physical Review A</em>, 34(2), 1449-1453. <a href="https://doi.org/10.1103/PhysRevA.34.1449">https://doi.org/10.1103/PhysRevA.34.1449</a></p>
<p><strong>Publication</strong>: Physical Review A, 1986</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{PhysRevA.34.1449,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span> = <span style="color:#e6db74">{Thermal conductivity of the Lennard-Jones fluid}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span> = <span style="color:#e6db74">{Evans, Denis J.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span> = <span style="color:#e6db74">{Phys. Rev. A}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span> = <span style="color:#e6db74">{34}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">number</span> = <span style="color:#e6db74">{2}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span> = <span style="color:#e6db74">{1449--1453}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">numpages</span> = <span style="color:#e6db74">{0}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span> = <span style="color:#e6db74">{1986}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">month</span> = <span style="color:#e6db74">{Aug}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span> = <span style="color:#e6db74">{American Physical Society}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span> = <span style="color:#e6db74">{10.1103/PhysRevA.34.1449}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">url</span> = <span style="color:#e6db74">{https://link.aps.org/doi/10.1103/PhysRevA.34.1449}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div>]]></content:encoded></item><item><title>Embedded-Atom Method: Theory and Applications Review</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-review-1993/</link><pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-review-1993/</guid><description>Comprehensive 1993 review of the Embedded-Atom Method (EAM), covering theory, parameterization, and applications to metallic systems.</description><content:encoded><![CDATA[<h2 id="systematizing-the-embedded-atom-method">Systematizing the Embedded-Atom Method</h2>
<p>This is a <strong>Systematization (Review)</strong> paper. It consolidates the theoretical development, semi-empirical parameterization, and broad applications of the Embedded-Atom Method (EAM) into a unified framework. The paper systematizes the field by connecting the EAM to related theories (Effective Medium Theory, Finnis-Sinclair, &ldquo;glue&rdquo; models) and organizing phenomenological results across diverse physical regimes (bulk, surfaces, interfaces).</p>
<p>The authors explicitly frame the work as a survey, stating &ldquo;We review here the history, development, and application of the EAM&rdquo; and &ldquo;This review emphasizes the physical insight that motivated the EAM.&rdquo; The paper follows a classic survey structure, organizing the literature by application domains.</p>
<h2 id="the-failure-of-pair-potentials-in-metallic-systems">The Failure of Pair Potentials in Metallic Systems</h2>
<p>The primary motivation is the failure of pair-potential models to accurately describe metallic bonding, particularly at defects and interfaces.</p>
<p><strong>Physics Gap</strong>: Pair potentials assume bond strength is independent of environment, implying cohesive energy scales linearly with coordination ($Z$), whereas in reality it scales roughly as $\sqrt{Z}$.</p>
<p><strong>Empirical Failures</strong>: Pair potentials incorrectly predict the &ldquo;Cauchy relation&rdquo; ($C_{12} = C_{44}$) and predict a vacancy formation energy equal to the cohesive energy, contradicting experimental data for fcc metals.</p>
<p><strong>Practical Need</strong>: First-principles calculations (like DFT) were computationally too expensive for low-symmetry systems like grain boundaries and fracture tips, creating a need for an efficient, semi-empirical many-body potential.</p>
<h2 id="theoretical-unification--core-innovations">Theoretical Unification &amp; Core Innovations</h2>
<p>The paper&rsquo;s core contribution is the synthesis of the EAM as a practical computational tool that captures &ldquo;coordination-dependent bond strength&rdquo; without the cost of ab initio methods.</p>
<p><strong>Theoretical Unification</strong>: It demonstrates that the EAM ansatz can be derived from Density Functional Theory (DFT) by assuming the total electron density is a superposition of atomic densities.</p>
<p><strong>Environmental Dependence</strong>: It explicitly formulates how the &ldquo;effective&rdquo; pair interaction stiffens and shortens as coordination decreases (e.g., at surfaces), a feature naturally arising from the non-linearity of the embedding function.</p>
<p><strong>Broad Validation</strong>: It provides a centralized evaluation of the method across a vast array of metallic properties, establishing it as the standard for atomistic simulations of face-centered cubic (fcc) metals.</p>
<h2 id="validating-eam-across-application-domains">Validating EAM Across Application Domains</h2>
<p>The authors review computational experiments using Energy Minimization, Molecular Dynamics (MD), and Monte Carlo (MC) simulations across several domains:</p>
<p><strong>Bulk Properties</strong>: Calculation of phonon spectra, liquid structure factors, thermal expansion coefficients, and melting points for fcc metals (Ni, Pd, Pt, Cu, Ag, Au).</p>
<p><strong>Defects</strong>: Computation of vacancy formation/migration energies and self-interstitial geometries.</p>
<p><strong>Grain Boundaries</strong>: Calculation of grain boundary structures, energies, and elastic properties for twist and tilt boundaries in Au and Al. Computed structures show good agreement with X-ray diffraction and HRTEM experiments. The many-body interactions in the EAM produce somewhat better agreement than pair potentials, which tend to overestimate boundary expansion.</p>
<p><strong>Surfaces</strong>: Analysis of surface energies, relaxations, reconstructions (e.g., Au(110) missing row), and surface phonons.</p>
<p><strong>Alloys</strong>: Investigation of heat of solution, surface segregation profiles (e.g., Ni-Cu), and order-disorder transitions.</p>
<p><strong>Mechanical Properties</strong>: Simulation of dislocation mobility, pinning by defects (He bubbles), and crack tip plasticity (ductile vs. brittle fracture modes).</p>
<h2 id="key-outcomes-and-the-limits-of-eam">Key Outcomes and the Limits of EAM</h2>
<p><strong>Many-Body Success</strong>: The EAM successfully reproduces the breakdown of the Cauchy relation and the correct ratio of vacancy formation energy to cohesive energy (~0.35) for fcc metals.</p>
<p><strong>Surface Accuracy</strong>: It correctly predicts that surface bonds are shorter and stiffer than bulk bonds due to lower coordination. It accurately predicts surface reconstructions (e.g., Au(110) $(1 \times 2)$).</p>
<p><strong>Alloy Behavior</strong>: The method naturally captures segregation phenomena, including oscillating concentration profiles in Ni-Cu, driven by the embedding energy.</p>
<p><strong>Limitations</strong>: The method is less accurate for systems with strong directional bonding (covalent materials) or significant Fermi-surface effects, as it assumes spherically averaged electron densities.</p>
<hr>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<h3 id="data">Data</h3>
<p><strong>Fitting Data</strong>: The semi-empirical functions are fitted to basic bulk properties: lattice constants, cohesive energy, elastic constants ($C_{11}$, $C_{12}$, $C_{44}$), and vacancy formation energy.</p>
<p><strong>Universal Binding Curve</strong>: The cohesive energy as a function of lattice constant is constrained to follow the &ldquo;universal binding curve&rdquo; of Rose et al. to ensure accurate anharmonic behavior.</p>
<p><strong>Alloy Data</strong>: For binary alloys, dilute heats of alloying are used for fitting cross-interactions.</p>
<h3 id="algorithms">Algorithms</h3>
<p><strong>Core Ansatz</strong>: The total energy is defined as:</p>
<p>$$E_{\text{coh}} = \sum_{i} G_i\left( \sum_{j \neq i} \rho_j^a(R_{ij}) \right) + \frac{1}{2} \sum_{i \neq j} U_{ij}(R_{ij})$$</p>
<p>where $G$ is the embedding energy (function of local electron density $\rho$), and $U$ is a pair interaction.</p>
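<p>The ansatz can be evaluated directly from atomic positions. A sketch with toy functions (NOT a fitted EAM parameterization; the square-root embedding merely echoes the second-moment-style models the review connects to EAM):</p>

```python
import numpy as np

def eam_energy(positions, G, rho_a, U):
    """Illustrative single-species EAM energy:
    E = sum_i G( sum_{j != i} rho_a(R_ij) ) + (1/2) sum_{i != j} U(R_ij)."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        rho_bar = 0.0
        for j in range(n):
            if i == j:
                continue
            r = float(np.linalg.norm(positions[i] - positions[j]))
            rho_bar += rho_a(r)      # superposition of atomic densities
            energy += 0.5 * U(r)     # pair term (double counting halved)
        energy += G(rho_bar)         # embedding term
    return energy

# Toy functions: square-root embedding, exponential density and pair term.
G = lambda rho: -np.sqrt(rho)
rho_a = lambda r: np.exp(-r)
U = lambda r: np.exp(-2.0 * r)
dimer = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(eam_energy(dimer, G, rho_a, U))   # -2 e^{-1/2} + e^{-2} ~ -1.0777
```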
<p><strong>Simulation Techniques</strong>:</p>
<ul>
<li><strong>Molecular Dynamics (MD)</strong>: Used for liquids, phonons, and fracture simulations.</li>
<li><strong>Monte Carlo (MC)</strong>: Used for phase diagrams and segregation profiles (e.g., approximately $10^5$ iterations per atom).</li>
<li><strong>Phonons</strong>: Calculated via the dynamical matrix derived from the force-constant tensor $K_{ij}$.</li>
<li><strong>Normal-Mode Analysis</strong>: Vibrational normal modes obtained by diagonalizing the dynamical matrix, feasible for unit cells of up to about 260 atoms.</li>
</ul>
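<p>The normal-mode step can be sketched as follows, assuming the force-constant matrix has already been computed (hypothetical inputs; the masses and stiffness here are arbitrary):</p>

```python
import numpy as np

def normal_mode_frequencies(force_constants, masses):
    """Normal-mode frequencies from a (3n x 3n) force-constant matrix K:
    build the dynamical matrix D = M^{-1/2} K M^{-1/2} and diagonalize it.
    Angular frequencies are the square roots of D's eigenvalues."""
    m = np.repeat(np.asarray(masses, dtype=float), 3)  # mass per Cartesian dof
    d = force_constants / np.sqrt(np.outer(m, m))
    eigvals = np.linalg.eigvalsh(d)
    # Clip tiny negative eigenvalues from rigid-body (translation) modes.
    return np.sqrt(np.clip(eigvals, 0.0, None))

# One atom of mass 4 in an isotropic harmonic well of stiffness 9:
# all three modes at omega = sqrt(9/4) = 1.5.
print(normal_mode_frequencies(9.0 * np.eye(3), [4.0]))
```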
<h3 id="models">Models</h3>
<p><strong>Parameterizations</strong>: The review lists several specific function sets developed by the authors (Table 2), including:</p>
<ul>
<li><strong>Daw and Baskes</strong>: For Ni, Pd, H (elemental metals and H in solution/on surfaces)</li>
<li><strong>Foiles</strong>: For Cu, Ag, Au, Ni, Pd, Pt (elemental metals)</li>
<li><strong>Foiles</strong>: For Cu, Ni (tailored for the Ni-Cu alloy system)</li>
<li><strong>Foiles, Baskes and Daw</strong>: For Cu, Ag, Au, Ni, Pd, Pt (dilute alloys)</li>
<li><strong>Daw, Baskes, Bisson and Wolfer</strong>: For Ni, H (fracture, dislocations, H embrittlement)</li>
<li><strong>Foiles and Daw</strong>: For Ni, Al (Ni-rich end of the Ni-Al alloy system)</li>
<li><strong>Daw</strong>: For Ni (calculated from first principles, not semi-empirical)</li>
<li><strong>Hoagland, Daw, Foiles and Baskes</strong>: For Al (elemental Al)</li>
</ul>
<p>Many of these historical parameterizations are directly downloadable in machine-readable formats from the NIST Interatomic Potentials Repository (linked in the resources below).</p>
<p><strong>Transferability</strong>: EAM functions are generally <em>not</em> transferable between different parameterization sets; mixing functions from different sets (e.g., Daw-Baskes Ni with Foiles Pd) is invalid.</p>
<h3 id="evaluation">Evaluation</h3>
<p><strong>Bulk Validation</strong>: Phonon dispersion curves for Cu show excellent agreement with experiment across the full Brillouin zone.</p>
<p><strong>Thermal Properties</strong>: Linear thermal expansion coefficients match experiment well (e.g., Cu calculated: $16.4 \times 10^{-6}/K$ vs experimental: $16.7 \times 10^{-6}/K$).</p>
<p><strong>Defect Energetics</strong>: Vacancy migration energies and divacancy binding energies (~0.1-0.2 eV) align with experimental data.</p>
<p><strong>Surface Segregation</strong>: Correctly predicts segregation species for 18 distinct dilute alloy cases (e.g., Cu segregating in Ni).</p>
<h3 id="hardware">Hardware</h3>
<p><strong>Compute Scale</strong>: At the time of publication (1993), Molecular Dynamics simulations of up to 35,000 atoms were possible.</p>
<p><strong>Platforms</strong>: Calculations were performed on supercomputers like the <strong>CRAY-XMP</strong>, though smaller calculations were noted as feasible on high-performance workstations.</p>
<hr>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Daw, M. S., Foiles, S. M., &amp; Baskes, M. I. (1993). The embedded-atom method: a review of theory and applications. <em>Materials Science Reports</em>, 9(7-8), 251-310. <a href="https://doi.org/10.1016/0920-2307(93)90001-U">https://doi.org/10.1016/0920-2307(93)90001-U</a></p>
<p><strong>Publication</strong>: Materials Science Reports 1993</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{dawEmbeddedatomMethodReview1993,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span> = <span style="color:#e6db74">{The embedded-atom method: a review of theory and applications}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">shorttitle</span> = <span style="color:#e6db74">{The Embedded-Atom Method}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span> = <span style="color:#e6db74">{Daw, Murray S. and Foiles, Stephen M. and Baskes, Michael I.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span> = <span style="color:#ae81ff">1993</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">month</span> = mar,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span> = <span style="color:#e6db74">{Materials Science Reports}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span> = <span style="color:#e6db74">{9}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">number</span> = <span style="color:#e6db74">{7-8}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span> = <span style="color:#e6db74">{251--310}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">issn</span> = <span style="color:#e6db74">{0920-2307}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span> = <span style="color:#e6db74">{10.1016/0920-2307(93)90001-U}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p><strong>Additional Resources</strong>:</p>
<ul>
<li><a href="/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method/">Original EAM Paper (1984)</a></li>
<li><a href="/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-voter-1994/">EAM User Guide (1994)</a></li>
<li><a href="https://www.ctcms.nist.gov/potentials/">NIST Interatomic Potentials Repository</a></li>
</ul>
]]></content:encoded></item><item><title>Embedded-Atom Method User Guide: Voter's 1994 Chapter</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-voter-1994/</link><pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-voter-1994/</guid><description>Comprehensive user guide for the Embedded-Atom Method (EAM), covering theory, potential fitting, and applications to intermetallics.</description><content:encoded><![CDATA[<h2 id="contribution-systematizing-the-embedded-atom-method">Contribution: Systematizing the Embedded-Atom Method</h2>
<p>This is a <strong>Systematization</strong> paper (specifically a handbook chapter) with a strong secondary <strong>Method</strong> projection.</p>
<p>Its primary goal is to serve as a &ldquo;users&rsquo; guide&rdquo; to the Embedded-Atom Method (EAM). The text organizes existing knowledge:</p>
<ul>
<li>It traces the physical origins of EAM from Density Functional Theory (DFT) and Effective Medium Theory.</li>
<li>It synthesizes &ldquo;closely related methods&rdquo; (Second Moment Approximation, Glue Model), showing they are mathematically equivalent or very similar to EAM.</li>
<li>It provides a pedagogical, step-by-step methodology for fitting potentials to experimental data.</li>
</ul>
<h2 id="motivation-bridging-the-gap-between-dft-and-pair-potentials">Motivation: Bridging the Gap Between DFT and Pair Potentials</h2>
<p>The primary motivation is to bridge the gap between accurate, expensive electronic structure calculations and fast, inaccurate pair potentials.</p>
<ul>
<li><strong>Computational Efficiency</strong>: First-principles methods scale as $O(N^3)$ or worse, limiting simulations to $&lt;100$ atoms (in 1994). Pair potentials scale as $O(N)$ and fail to capture essential many-body physics of metals.</li>
<li><strong>Physical Accuracy</strong>: Simple pair potentials cannot accurately model metallic defects; they predict zero Cauchy pressure ($C_{12} - C_{44} = 0$) and equate vacancy formation energy to cohesive energy, both of which are incorrect for transition metals.</li>
<li><strong>Practical Utility</strong>: There was a need for a clear guide on how to construct and apply these potentials for large-scale simulations ($10^6+$ atoms) of fracture and defects.</li>
</ul>
<h2 id="novelty-a-unified-framework-and-robust-fitting-recipe">Novelty: A Unified Framework and Robust Fitting Recipe</h2>
<p>As a review chapter, the novelty lies in the synthesis and the specific, reproducible recipe for potential construction. Central to this synthesis is the core EAM energy functional:</p>
<p>$$E_{\text{tot}} = \sum_i \left( F(\bar{\rho}_i) + \frac{1}{2} \sum_{j \neq i} \phi(r_{ij}) \right)$$</p>
<p>where the total energy $E_{\text{tot}}$ depends on embedding an atom $i$ into a local background electron density $\bar{\rho}_i = \sum_{j \neq i} \rho(r_{ij})$, plus a repulsive pair interaction $\phi(r_{ij})$.</p>
<ul>
<li><strong>Unified Framework</strong>: It explicitly maps the &ldquo;Second Moment Approximation&rdquo; (Tight Binding) and the &ldquo;Glue Model&rdquo; onto the fundamental EAM framework above, clarifying that they differ primarily in terminology or specific functional choices (e.g., square root embedding functions).</li>
<li><strong>Cross-Potential Fitting Recipe</strong>: It details a robust method for fitting alloy potentials (specifically Ni-Al-B) by using &ldquo;transformation invariance&rdquo;, scaling the density and shifting the embedding function to fit alloy properties without disturbing pure element fits.</li>
<li><strong>Specific Parameters</strong>: It publishes optimized potential parameters for Ni, Al, and B that accurately reproduce properties like the Boron interstitial preference in $\text{Ni}_3\text{Al}$.</li>
</ul>
<h2 id="validation-computational-benchmarks-and-simulations">Validation: Computational Benchmarks and Simulations</h2>
<p>The &ldquo;experiments&rdquo; described are computational validations and simulations using the fitted Ni-Al-B potential:</p>
<ol>
<li>
<p><strong>Potential Fitting</strong>:</p>
<ul>
<li>Pure elements (Ni, Al) were fitted to elastic constants, vacancy formation energies, and diatomic data. The Ni fit achieved $\chi_{\text{rms}} = 0.75\%$ and Al achieved $\chi_{\text{rms}} = 3.85\%$.</li>
<li>Boron was fitted using hypothetical crystal structures (fcc, bcc) calculated via LMTO (Linear Muffin-Tin Orbital) since experimental data for fcc B does not exist.</li>
</ul>
</li>
<li>
<p><strong>Molecular Statics (Validation)</strong>:</p>
<ul>
<li><strong>Surface Relaxation</strong>: Demonstrated that EAM captures the oscillatory relaxation of atomic layers near a free surface, a many-body effect that pair potentials fail to capture.</li>
<li><strong>Defect Energetics</strong>: Calculated formation energies for Boron interstitials in $\text{Ni}_3\text{Al}$. Found the 6Ni-octahedral site is most stable ($-4.59$ eV relative to an isolated B atom and unperturbed crystal), followed by the 4Ni-2Al octahedral site ($-3.65$ eV) and the 3Ni-1Al tetrahedral site ($-2.99$ eV), consistent with channeling experiments.</li>
</ul>
</li>
<li>
<p><strong>Molecular Dynamics (Application)</strong>:</p>
<ul>
<li><strong>Grain Boundary (GB) Cleavage</strong>: Simulated the fracture of a (210) tilt grain boundary in $\text{Ni}_3\text{Al}$ at a strain rate of $5 \times 10^{10}$ s$^{-1}$.</li>
<li><strong>Comparison</strong>: Compared pure $\text{Ni}_3\text{Al}$ boundaries vs. those doped with Boron and substitutional Nickel.</li>
</ul>
</li>
</ol>
<h2 id="key-outcomes-eam-efficiency-and-boron-strengthening">Key Outcomes: EAM Efficiency and Boron Strengthening</h2>
<ul>
<li><strong>EAM Efficiency</strong>: Confirmed that EAM scales linearly with atom count ($N$), requiring only 2-5 times the computational work of pair potentials.</li>
<li><strong>Boron Strengthening Mechanism</strong>: The simulations suggested that Boron segregates to grain boundaries and, specifically when co-segregated with Ni, significantly increases cohesion.
<ul>
<li>The maximum stress for the enriched boundary was approximately 22 GPa, compared to approximately 19 GPa for the clean boundary.</li>
<li>The B-doped boundary required approximately 44% more work to cleave than the undoped boundary.</li>
<li>The fracture mode shifted from cleaving along the GB to failure in the bulk.</li>
</ul>
</li>
<li><strong>Grain Boundary Segregation</strong>: Molecular statics calculations found B interstitial energies at the GB as low as $-6.9$ eV, compared to $-4.59$ eV in the bulk, consistent with experimental observations of boron segregation to grain boundaries.</li>
<li><strong>Limitations</strong>: The author concludes that while EAM is excellent for metals, it lacks the angular dependence required for strongly covalent materials (like $\text{MoSi}_2$) or directional bonding.</li>
</ul>
<hr>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<p>The chapter provides nearly all details required to implement the described potential from scratch.</p>
<h3 id="data">Data</h3>
<ul>
<li><strong>Experimental/Reference Data</strong>: Used for fitting the cost function $\chi_{\text{rms}}$.
<ul>
<li><strong>Pure Elements</strong>: Lattice constants ($a_0$), cohesive energy ($E_{\text{coh}}$), bulk modulus ($B$), elastic constants ($C_{11}, C_{12}, C_{44}$), vacancy formation energy ($E_{\text{vac}}^f$), and diatomic bond length/strength ($R_e, D_e$).</li>
<li><strong>Alloys</strong>: Heat of solution and defect energies (APB, SISF) for $\text{Ni}_3\text{Al}$.</li>
<li><strong>Hypothetical Data</strong>: LMTO first-principles data used for unobserved phases (e.g., fcc Boron, B2 NiB) to constrain the fit.</li>
</ul>
</li>
</ul>
<h3 id="algorithms">Algorithms</h3>
<ul>
<li><strong>Component Functions</strong>:
<ul>
<li><strong>Pair Potential $\phi(r)$</strong>: Morse potential form:
$$\phi(r) = D_M \left\{ 1 - \exp[-\alpha_M(r - R_M)] \right\}^2 - D_M$$</li>
<li><strong>Density Function $\rho(r)$</strong>: Modified hydrogenic 4s orbital:
$$\rho(r) = r^6(e^{-\beta r} + 2^9 e^{-2\beta r})$$</li>
<li><strong>Embedding Function $F(\bar{\rho})$</strong>: Derived numerically to force the crystal energy to match the &ldquo;Universal Energy Relation&rdquo; (Rose et al.) as a function of lattice constant.</li>
</ul>
</li>
<li><strong>Fitting Strategy</strong>:
<ul>
<li><strong>Smooth Cutoff</strong>: A polynomial smoothing function ($h_{\text{smooth}}$) applied at $r_{\text{cut}}$ to ensure continuous derivatives.</li>
<li><strong>Simplex Algorithm</strong>: Used to optimize parameters ($D_M, R_M, \alpha_M, \beta, r_{\text{cut}}$).</li>
<li><strong>Alloy Invariance</strong>: Used the transformations $F^{\prime}(\rho) = F(\rho) + g\rho$ and $\rho^{\prime}(r) = s\rho(r)$ to fit cross-potentials without altering pure-element properties.</li>
</ul>
</li>
</ul>
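<p>The invariance can be checked numerically. One standard identity (our illustration; the chapter gives the full alloy-fitting recipe) is that $F(\rho) \to F(\rho) + g\rho$ leaves the total energy unchanged when the pair potential is adjusted to $\phi(r) \to \phi(r) - 2g\rho(r)$:</p>

```python
import numpy as np

def total_energy(positions, F, rho, phi):
    """EAM energy E = sum_i F(rho_bar_i) + (1/2) sum_{i != j} phi(r_ij)."""
    n = len(positions)
    e = 0.0
    for i in range(n):
        rho_bar = 0.0
        for j in range(n):
            if i == j:
                continue
            r = float(np.linalg.norm(positions[i] - positions[j]))
            rho_bar += rho(r)
            e += 0.5 * phi(r)
        e += F(rho_bar)
    return e

# Check the g-shift invariance on an arbitrary 3-atom configuration
# with toy functions (not a fitted parameterization).
g = 0.37
rho = lambda r: np.exp(-2.0 * r)
F = lambda x: -np.sqrt(x)
phi = lambda r: np.exp(-3.0 * r)
pts = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.3, 0.2]])

e1 = total_energy(pts, F, rho, phi)
e2 = total_energy(pts, lambda x: F(x) + g * x, rho,
                  lambda r: phi(r) - 2.0 * g * rho(r))
print(np.isclose(e1, e2))  # True: the two function sets are equivalent
```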
<h3 id="models">Models</h3>
<ul>
<li><strong>Parameters</strong>: The text provides the exact optimized parameters for the Ni-Al-B potential in <strong>Table 2</strong> (Pure elements) and <strong>Table 5</strong> (Cross-potentials).
<ul>
<li>Example Ni parameters: $D_M=1.5335$ eV, $\alpha_M=1.7728$ Å$^{-1}$, $r_{\text{cut}}=4.7895$ Å.</li>
</ul>
</li>
</ul>
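<p>For orientation, the Morse form with the quoted Ni parameters looks like this ($R_M$ is not quoted in this note, so a placeholder value is used purely for illustration):</p>

```python
import numpy as np

def morse(r, d_m, alpha_m, r_m):
    """Morse pair potential phi(r) = D_M {1 - exp[-alpha_M (r - R_M)]}^2 - D_M,
    which has depth -D_M at its minimum r = R_M."""
    return d_m * (1.0 - np.exp(-alpha_m * (r - r_m)))**2 - d_m

# Ni parameters quoted from the chapter's Table 2; R_M is a placeholder,
# NOT taken from the text.
D_M, ALPHA_M = 1.5335, 1.7728   # eV, 1/Angstrom
R_M = 2.2                       # Angstrom -- illustrative placeholder
print(morse(R_M, D_M, ALPHA_M, R_M))   # -1.5335 (the well depth)
```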
<h3 id="hardware">Hardware</h3>
<ul>
<li><strong>1994 Context</strong>: Mentions that simulations of $10^6$ atoms were possible on the &ldquo;fastest computers available&rdquo;.</li>
<li><strong>Scaling</strong>: Explicitly notes computational work scales as $O(N)$, roughly 2-5x slower than pair potentials.</li>
</ul>
<hr>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Voter, A. F. (1994). Chapter 4: The Embedded-Atom Method. In <em>Intermetallic Compounds: Vol. 1, Principles</em>, edited by J. H. Westbrook and R. L. Fleischer. John Wiley &amp; Sons Ltd.</p>
<p><strong>Publication</strong>: Intermetallic Compounds: Vol. 1, Principles (1994)</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@incollection</span>{voterEmbeddedAtomMethod1994,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span> = <span style="color:#e6db74">{The Embedded-Atom Method}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span> = <span style="color:#e6db74">{Voter, Arthur F.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">booktitle</span> = <span style="color:#e6db74">{Intermetallic Compounds: Vol. 1, Principles}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">editor</span> = <span style="color:#e6db74">{Westbrook, J. H. and Fleischer, R. L.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span> = <span style="color:#e6db74">{1994}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span> = <span style="color:#e6db74">{John Wiley &amp; Sons Ltd}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span> = <span style="color:#e6db74">{77--90}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">chapter</span> = <span style="color:#e6db74">{4}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p><strong>Additional Resources</strong>:</p>
<ul>
<li><a href="https://www.ctcms.nist.gov/potentials/">NIST Interatomic Potentials Repository</a> (Modern repository often hosting EAM files)</li>
<li><a href="/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method/">Original EAM Paper (1984)</a></li>
<li><a href="/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-review-1993/">EAM Review (1993)</a></li>
</ul>
]]></content:encoded></item><item><title>Correlations in the Motion of Atoms in Liquid Argon</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/correlations-motion-atoms-liquid-argon/</link><pubDate>Sat, 13 Dec 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/correlations-motion-atoms-liquid-argon/</guid><description>Rahman's 1964 MD simulation of 864 argon atoms with Lennard-Jones potential revealed the cage effect and validated classical molecular dynamics for liquids.</description><content:encoded><![CDATA[<h2 id="contribution-methodological-validation-of-md">Contribution: Methodological Validation of MD</h2>
<p>This is the archetypal <strong>Method</strong> paper (dominant classification with secondary <strong>Theory</strong> contribution). It establishes the architectural validity of Molecular Dynamics (MD) as a scientific tool. Rahman answers the question: &ldquo;Can a digital computer solving classical difference equations faithfully represent a physical liquid?&rdquo;</p>
<p>The paper utilizes specific rhetorical indicators of a methodological contribution:</p>
<ul>
<li><strong>Algorithmic Explication</strong>: A dedicated Appendix details the predictor-corrector difference equations.</li>
<li><strong>Validation against Ground Truth</strong>: Extensive comparison of calculated diffusion constants and pair-correlation functions against experimental neutron and X-ray scattering data.</li>
<li><strong>Robustness Checks</strong>: Ablation studies on the numerical integration stability (one vs. two corrector cycles).</li>
</ul>
<h2 id="motivation-bridging-neutron-scattering-and-many-body-theory">Motivation: Bridging Neutron Scattering and Many-Body Theory</h2>
<p>In the early 1960s, neutron scattering data provided insights into the dynamic structure of liquids, but theorists lacked concrete models to explain the observed two-body dynamical correlations. Analytic theories were limited by the difficulty of the many-body problem.</p>
<p>Rahman sought to bypass these analytical bottlenecks by assuming that <strong>classical dynamics</strong> with a simple 2-body potential (Lennard-Jones) could sufficiently describe the motion of atoms in liquid argon. The goal was to generate &ldquo;experimental&rdquo; data via simulation to test theoretical models (like the Vineyard convolution approximation) and provide a microscopic understanding of diffusion.</p>
<h2 id="core-innovation-system-stability-and-the-cage-effect">Core Innovation: System Stability and the Cage Effect</h2>
<p>This paper is widely considered the birth of modern molecular dynamics for continuous potentials. Its key novelties include:</p>
<ol>
<li><strong>System Size &amp; Stability</strong>: Successfully simulating 864 particles interacting via a continuous Lennard-Jones potential with stable temperature over the full simulation duration (approximately $10^{-11}$ sec, as confirmed by Table I in the paper).</li>
<li><strong>The &ldquo;Cage Effect&rdquo;</strong>: The discovery that the velocity autocorrelation function becomes negative after a short time:
$$ \langle \textbf{v}(0) \cdot \textbf{v}(t) \rangle &lt; 0 \quad \text{for } t &gt; 0.33 \times 10^{-12} \text{ s} $$
This proved that atoms in a liquid &ldquo;rattle&rdquo; against the cage of their nearest neighbors.</li>
<li><strong>Delayed Convolution</strong>: Proposing an improvement to the Vineyard approximation for the distinct Van Hove function $G_d(r,t)$ by introducing a time-delayed convolution to account for the persistence of local structure. Instead of convolving $g(r)$ with $G_s(r,t)$ at the same time $t$, Rahman convolves at a delayed time $t&rsquo; &lt; t$, using a one-parameter function with $\tau = 1.0 \times 10^{-12}$ sec. This makes $G_d(r,t)$ decay as $t^4$ at short times (instead of $t^2$ in the Vineyard approximation) and as $t$ at long times.</li>
</ol>
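<p>The velocity autocorrelation function is straightforward to estimate from any stored trajectory. A short sketch (assuming velocities are stored as a <code>(steps, atoms, 3)</code> array; this is an illustration, not Rahman&rsquo;s original code):</p>

```python
import numpy as np

def vacf(velocities):
    """Normalized velocity autocorrelation <v(0)·v(t)> / <v(0)·v(0)>.

    velocities: array of shape (n_steps, n_atoms, 3); the estimate
    averages over atoms and over all available time origins.
    """
    n_steps = velocities.shape[0]
    c = np.empty(n_steps)
    for lag in range(n_steps):
        # dot products v(t0) . v(t0 + lag), averaged over t0 and atoms
        dots = np.sum(velocities[: n_steps - lag] * velocities[lag:], axis=2)
        c[lag] = dots.mean()
    return c / c[0]
```

<p>Applied to a liquid-argon trajectory, this estimator would be expected to dip below zero near the cage-rattling time quoted above.</p>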
<h2 id="methodology-simulating-864-argon-atoms">Methodology: Simulating 864 Argon Atoms</h2>
<p>Rahman performed a &ldquo;computer experiment&rdquo; (simulation) of <strong>Liquid Argon</strong>:</p>
<ul>
<li><strong>System</strong>: 864 particles in a cubic box of side $L=10.229\sigma$.</li>
<li><strong>Conditions</strong>: Temperature $94.4^\circ$K, Density $1.374 \text{ g cm}^{-3}$.</li>
<li><strong>Interaction</strong>: Lennard-Jones potential, truncated at $R=2.25\sigma$.</li>
<li><strong>Time Step</strong>: $\Delta t = 10^{-14}$ s (780 steps total, covering approximately $7.8 \times 10^{-12}$ s).</li>
<li><strong>Output Analysis</strong>:
<ul>
<li>Radial distribution function $g(r)$.</li>
<li>Mean square displacement $\langle r^2 \rangle$.</li>
<li>Velocity autocorrelation function $\langle v(0)\cdot v(t) \rangle$.</li>
<li>Van Hove space-time correlation functions $G_s(r,t)$ and $G_d(r,t)$.</li>
</ul>
</li>
</ul>
<h2 id="results-validation-and-non-gaussian-diffusion-analysis">Results: Validation and Non-Gaussian Diffusion Analysis</h2>
<ul>
<li><strong>Validation</strong>: The calculated pair-distribution function $g(r)$ agreed well with X-ray scattering data from Eisenstein and Gingrich (at $91.8^\circ$K). The self-diffusion constant $D = 2.43 \times 10^{-5} \text{ cm}^2 \text{ sec}^{-1}$ at $94.4^\circ$K matched the experimental value from Naghizadeh and Rice at $90^\circ$K and the same density ($1.374 \text{ g cm}^{-3}$).</li>
<li><strong>Dynamics</strong>: The velocity autocorrelation has a negative region, contradicting simple exponential decay models (Langevin). Its frequency spectrum $f(\omega)$ shows a broad maximum at $\omega \approx 0.25 (k_BT/\hbar)$, reminiscent of solid-like behavior.</li>
<li><strong>Non-Gaussian Behavior</strong>: The self-diffusion function $G_s(r,t)$ attains its maximum departure from a Gaussian shape at about $t \approx 3.0 \times 10^{-12}$ s (with $\langle r^4 \rangle$ departing from its Gaussian value by about 13%), returning to Gaussian form by $\sim 10^{-11}$ s. At that time, the rms displacement ($3.8$ Å) is close to the first-neighbor distance ($3.7$ Å). This indicates that Fickian diffusion is an asymptotic limit and does not apply at short times.</li>
<li><strong>Fourier Transform Validation</strong>: The Fourier transform of $g(r)$ has peaks at $\kappa\sigma = 6.8$, 12.5, 18.5, 24.8, closely matching the X-ray scattering peaks at $\kappa\sigma = 6.8$, 12.3, 18.4, 24.4.</li>
<li><strong>Temperature Dependence</strong>: A second simulation at $130^\circ$K and $1.16 \text{ g cm}^{-3}$ yielded $D = 5.67 \times 10^{-5} \text{ cm}^2 \text{ sec}^{-1}$, compared to the experimental value of $6.06 \times 10^{-5} \text{ cm}^2 \text{ sec}^{-1}$ from Naghizadeh and Rice at $120^\circ$K and $1.16 \text{ g cm}^{-3}$. The paper notes that both calculated values are lower than experiment by about 20%, and suggests that allowing for a softer repulsive part in the interaction potential might reduce this discrepancy.</li>
<li><strong>Vineyard Approximation</strong>: The standard Vineyard convolution approximation ($G_d \approx g * G_s$) produces a too-rapid decay of $G_d(r,t)$ with time. The delayed convolution, matching pairs of $(t&rsquo;, t)$ in units of $10^{-12}$ sec as (0.2, 0.4), (0.5, 0.8), (1.0, 1.6), (1.5, 2.3), (2.0, 2.9), (2.5, 3.5), provides a substantially better fit.</li>
<li><strong>Conclusion</strong>: Classical N-body dynamics with a truncated pair potential is a sufficient model to reproduce both the structural and dynamical properties of simple liquids.</li>
</ul>
<hr>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<h3 id="data">Data</h3>
<p>The simulation uses physical constants for Argon:</p>
<table>
  <thead>
      <tr>
          <th>Parameter</th>
          <th>Value</th>
          <th>Notes</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Particle Mass ($M$)</td>
          <td>$39.95 \times 1.6747 \times 10^{-24}$ g</td>
          <td>Mass of Argon atom</td>
      </tr>
      <tr>
          <td>Potential Depth ($\epsilon/k_B$)</td>
          <td>$120^\circ$K</td>
          <td>Lennard-Jones parameter</td>
      </tr>
      <tr>
          <td>Potential Size ($\sigma$)</td>
          <td>$3.4$ Å</td>
          <td>Lennard-Jones parameter</td>
      </tr>
      <tr>
          <td>Cutoff Radius ($R$)</td>
          <td>$2.25\sigma$</td>
          <td>Potential truncated beyond this</td>
      </tr>
      <tr>
          <td>Density ($\rho$)</td>
          <td>$1.374$ g cm$^{-3}$</td>
          <td></td>
      </tr>
      <tr>
          <td>Particle Count ($N$)</td>
          <td>864</td>
          <td></td>
      </tr>
  </tbody>
</table>
<h3 id="algorithms">Algorithms</h3>
<p>Rahman utilized a <strong>Predictor-Corrector</strong> scheme for solving the second-order differential equations of motion.</p>
<p><strong>Step Size</strong>: $\Delta t = 10^{-14}$ sec.</p>
<p><strong>The Algorithm:</strong></p>
<ol>
<li><strong>Predict</strong> positions $\bar{\xi}$ at $t + \Delta t$ based on previous steps:
$$\bar{\xi}_i^{(n+1)} = \xi_i^{(n-1)} + 2\Delta u \eta_i^{(n)}$$</li>
<li><strong>Calculate Forces</strong> (Accelerations $\alpha$) using predicted positions.</li>
<li><strong>Correct</strong> positions and velocities using the trapezoidal rule:
$$
\begin{aligned}
\eta_i^{(n+1)} &amp;= \eta_i^{(n)} + \frac{1}{2}\Delta u (\alpha_i^{(n+1)} + \alpha_i^{(n)}) \\
\xi_i^{(n+1)} &amp;= \xi_i^{(n)} + \frac{1}{2}\Delta u (\eta_i^{(n+1)} + \eta_i^{(n)})
\end{aligned}
$$</li>
</ol>
<p><em>Note: The paper compared one vs. two repetitions of the corrector step, finding that two passes improved precision slightly. The results presented in the paper were obtained using two passes.</em></p>
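<p>A minimal sketch of this scheme in Python, applied to a 1D harmonic oscillator rather than the full 864-atom argon system (single corrector pass shown; the update rule is the one above):</p>

```python
import numpy as np

def rahman_step(x, v, a, x_prev, accel, dt):
    """One predictor-corrector step of Rahman's scheme (one corrector pass).

    x, v, a  : position, velocity, acceleration at step n
    x_prev   : position at step n-1
    accel    : function mapping positions to accelerations
    """
    x_pred = x_prev + 2.0 * dt * v       # predict position at n+1
    a_new = accel(x_pred)                # forces at predicted positions
    v_new = v + 0.5 * dt * (a_new + a)   # trapezoidal velocity correction
    x_new = x + 0.5 * dt * (v_new + v)   # trapezoidal position correction
    return x_new, v_new, a_new

# Toy check on a 1D harmonic oscillator, x'' = -x, exact solution cos(t).
accel = lambda x: -x
dt = 0.01
x, v, a = 1.0, 0.0, -1.0
x_prev = np.cos(dt)                      # exact position one step back, x(-dt)
for _ in range(1000):
    x_new, v, a = rahman_step(x, v, a, x_prev, accel, dt)
    x_prev, x = x, x_new
```

<p>Over 1000 steps ($t = 10$ in these units) the trajectory stays close to $\cos(t)$ and the energy $\tfrac{1}{2}v^2 + \tfrac{1}{2}x^2$ stays near its initial value, mirroring the temperature stability Rahman reports.</p>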
<h3 id="models">Models</h3>
<p><strong>Interaction Potential</strong>: Lennard-Jones 12-6
$$V(r_{ij}) = 4\epsilon \left[ \left(\frac{\sigma}{r_{ij}}\right)^{12} - \left(\frac{\sigma}{r_{ij}}\right)^6 \right]$$</p>
<p><strong>Boundary Conditions</strong>: Periodic Boundary Conditions (PBC) in 3 dimensions. When a particle moves out of the box ($x &gt; L$), it re-enters at $x - L$.</p>
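<p>As an illustration, the truncated potential and the periodic wrapping can be written compactly (a sketch using Rahman&rsquo;s argon parameters, not production MD code):</p>

```python
import numpy as np

SIGMA = 3.4            # Angstrom
EPS = 120.0            # epsilon / k_B, in Kelvin
RCUT = 2.25 * SIGMA    # truncation radius
L = 10.229 * SIGMA     # cubic box side

def lj(r):
    """Lennard-Jones 12-6 energy (in units of k_B * K), truncated at RCUT."""
    sr6 = (SIGMA / np.asarray(r, dtype=float)) ** 6
    return np.where(np.asarray(r) < RCUT, 4.0 * EPS * (sr6 * sr6 - sr6), 0.0)

def wrap(x, box=L):
    """Re-insert coordinates that left the box: x > box re-enters at x - box."""
    return x % box

def minimum_image(rij, box=L):
    """Map a displacement vector to its nearest periodic image."""
    return rij - box * np.round(rij / box)
```

<p>The minimum at $r = 2^{1/6}\sigma$ has depth $-\epsilon$, and any separation beyond $2.25\sigma$ contributes nothing, matching the truncation used in the paper.</p>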
<h3 id="hardware">Hardware</h3>
<p>This is a historical benchmark for computational capability in 1964:</p>
<table>
  <thead>
      <tr>
          <th>Resource</th>
          <th>Specification</th>
          <th>Notes</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Computer</strong></td>
          <td>CDC 3600</td>
          <td>Control Data Corporation mainframe</td>
      </tr>
      <tr>
          <td><strong>Compute Time</strong></td>
          <td>45 seconds / cycle</td>
          <td>Per predictor-corrector cycle for 864 particles (floating point)</td>
      </tr>
      <tr>
          <td><strong>Language</strong></td>
          <td>FORTRAN + Machine Language</td>
          <td>Machine language used for the most time-consuming parts</td>
      </tr>
  </tbody>
</table>
<p><em>Modern Context: Rahman&rsquo;s system (864 Argon atoms, LJ-potential) is highly reproducible today and serves as a classic pedagogical exercise. It can be simulated in standard MD frameworks (LAMMPS, OpenMM) in fractions of a second on consumer hardware.</em></p>
<hr>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Rahman, A. (1964). Correlations in the Motion of Atoms in Liquid Argon. <em>Physical Review</em>, 136(2A), A405-A411. <a href="https://doi.org/10.1103/PhysRev.136.A405">https://doi.org/10.1103/PhysRev.136.A405</a></p>
<p><strong>Publication</strong>: Physical Review 1964</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{rahman1964correlations,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span>=<span style="color:#e6db74">{Correlations in the motion of atoms in liquid argon}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span>=<span style="color:#e6db74">{Rahman, A.}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span>=<span style="color:#e6db74">{Physical Review}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span>=<span style="color:#e6db74">{136}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">number</span>=<span style="color:#e6db74">{2A}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span>=<span style="color:#e6db74">{A405--A411}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span>=<span style="color:#e6db74">{1964}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span>=<span style="color:#e6db74">{APS}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span>=<span style="color:#e6db74">{10.1103/PhysRev.136.A405}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p><strong>Additional Resources</strong>:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Aneesur_Rahman">Aneesur Rahman - Wikipedia</a></li>
</ul>
]]></content:encoded></item><item><title>The Müller-Brown Potential: A 2D Benchmark Surface</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/muller-brown-1979/</link><pubDate>Mon, 08 Sep 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/muller-brown-1979/</guid><description>The Müller-Brown potential is a classic 2D benchmark for testing optimization algorithms and molecular dynamics methods.</description><content:encoded><![CDATA[<h2 id="overview">Overview</h2>
<p>The Müller-Brown potential is a primary benchmark system in computational chemistry: a two-dimensional analytical surface used to evaluate optimization algorithms. Introduced by Klaus Müller and Leo D. Brown in 1979 as a test system for their constrained simplex optimization algorithm, this potential energy function captures the essential topology of chemical reaction landscapes while preserving computational efficiency.</p>
<p><strong>Origin</strong>: Müller, K., &amp; Brown, L. D. (1979). Location of saddle points and minimum energy paths by a constrained simplex optimization procedure. <em>Theoretica Chimica Acta</em>, 53, 75-93. The potential is introduced in footnote 7 (p. 79) as a two-parametric model surface for testing the constrained simplex procedures.</p>
<h2 id="mathematical-definition">Mathematical Definition</h2>
<p>The Müller-Brown potential combines four two-dimensional Gaussian functions:</p>
<p>$$V(x,y) = \sum_{k=1}^{4} A_k \exp\left[a_k(x-x_k^0)^2 + b_k(x-x_k^0)(y-y_k^0) + c_k(y-y_k^0)^2\right]$$</p>
<p>Each Gaussian contributes a different &ldquo;bump&rdquo; or &ldquo;well&rdquo; to the landscape. The parameters control amplitude ($A_k$), width, orientation, and center position.</p>
<h3 id="standard-parameters">Standard Parameters</h3>
<p>The canonical parameter values that define the Müller-Brown surface are:</p>
<table>
  <thead>
      <tr>
          <th>k</th>
          <th>$A_k$</th>
          <th>$a_k$</th>
          <th>$b_k$</th>
          <th>$c_k$</th>
          <th>$x_k^0$</th>
          <th>$y_k^0$</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>1</td>
          <td>-200</td>
          <td>-1</td>
          <td>0</td>
          <td>-10</td>
          <td>1</td>
          <td>0</td>
      </tr>
      <tr>
          <td>2</td>
          <td>-100</td>
          <td>-1</td>
          <td>0</td>
          <td>-10</td>
          <td>0</td>
          <td>0.5</td>
      </tr>
      <tr>
          <td>3</td>
          <td>-170</td>
          <td>-6.5</td>
          <td>11</td>
          <td>-6.5</td>
          <td>-0.5</td>
          <td>1.5</td>
      </tr>
      <tr>
          <td>4</td>
          <td>15</td>
          <td>0.7</td>
          <td>0.6</td>
          <td>0.7</td>
          <td>-1</td>
          <td>1</td>
      </tr>
  </tbody>
</table>
<p>The first three terms have negative amplitudes (creating energy wells), while the fourth has a positive amplitude (creating a barrier). The cross-term $b_k$ in the third Gaussian creates the tilted orientation that gives the surface its characteristic curved pathways.</p>
<h3 id="analytical-gradients-forces">Analytical Gradients (Forces)</h3>
<p>Optimizing paths or running molecular dynamics on this surface requires the spatial derivatives (the forces are their negatives), which are straightforward to evaluate analytically. Defining $G_k(x,y)$ as the argument of the exponential, the partial derivatives with respect to $x$ and $y$ are:</p>
<p>$$ \frac{\partial V}{\partial x} = \sum_{k=1}^4 A_k \exp[G_k(x,y)] \cdot \left[ 2a_k(x-x_k^0) + b_k(y-y_k^0) \right] $$</p>
<p>$$ \frac{\partial V}{\partial y} = \sum_{k=1}^4 A_k \exp[G_k(x,y)] \cdot \left[ b_k(x-x_k^0) + 2c_k(y-y_k^0) \right] $$</p>
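<p>With the parameter table above, the energy and its analytical gradient fit in a few lines of NumPy:</p>

```python
import numpy as np

# Canonical parameters (Müller & Brown, 1979), in the order k = 1..4
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def mb_energy(x, y):
    """V(x, y): sum of the four Gaussian terms."""
    dx, dy = x - x0, y - y0
    return float(np.sum(A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)))

def mb_gradient(x, y):
    """Analytical (dV/dx, dV/dy), matching the expressions above."""
    dx, dy = x - x0, y - y0
    g = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
    return np.array([np.sum(g * (2 * a * dx + b * dy)),
                     np.sum(g * (b * dx + 2 * c * dy))])
```

<p>Evaluating <code>mb_energy(-0.558, 1.442)</code> recovers the reactant-minimum energy of about $-146.7$ reported in Table 1 of the paper.</p>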
<h2 id="energy-landscape">Energy Landscape</h2>
<p>This simple formula creates a surprisingly rich topography with exactly the features needed to challenge optimization algorithms:</p>
<table>
  <thead>
      <tr>
          <th><strong>Stationary Point</strong></th>
          <th><strong>Coordinates</strong></th>
          <th><strong>Energy</strong></th>
          <th><strong>Type</strong></th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>MA (Reactant)</td>
          <td>(-0.558, 1.442)</td>
          <td>-146.70</td>
          <td>Deep minimum</td>
      </tr>
      <tr>
          <td>MC (Intermediate)</td>
          <td>(-0.050, 0.467)</td>
          <td>-80.77</td>
          <td>Shallow minimum</td>
      </tr>
      <tr>
          <td>MB (Product)</td>
          <td>(0.623, 0.028)</td>
          <td>-108.17</td>
          <td>Medium minimum</td>
      </tr>
      <tr>
          <td>S1</td>
          <td>(-0.822, 0.624)</td>
          <td>-40.67</td>
          <td>First saddle point</td>
      </tr>
      <tr>
          <td>S2</td>
          <td>(0.212, 0.293)</td>
          <td>-72.25</td>
          <td>Second saddle point</td>
      </tr>
  </tbody>
</table>
<p>All values from Table 1 of Müller &amp; Brown (1979).</p>
<figure class="post-figure center ">
    <img src="/img/muller-brown/muller-brown-potential-surface.webp"
         alt="Müller-Brown Potential Energy Surface showing the three minima (dark blue regions) and two saddle points"
         title="Müller-Brown Potential Energy Surface showing the three minima (dark blue regions) and two saddle points"
         
         
         loading="lazy"
         class="post-image">
    
    <figcaption class="post-caption">The Müller-Brown potential energy surface showing the three minima (dark blue regions) and two saddle points.</figcaption>
    
</figure>

<h3 id="key-challenge-curved-reaction-pathways">Key Challenge: Curved Reaction Pathways</h3>
<p>The path from the deep reactant minimum (MA) to the product minimum (MB) follows a curved two-step pathway:</p>
<ol>
<li><strong>MA → S1 → MC</strong>: First transition over a lower barrier into an intermediate basin</li>
<li><strong>MC → S2 → MB</strong>: Second transition over a slightly higher barrier to the product</li>
</ol>
<p>This curved pathway breaks linear interpolation methods. Algorithms that draw a straight line from reactant to product miss both the intermediate minimum and the correct transition states, climbing over much higher energy regions instead.</p>
<h2 id="why-it-works-as-a-benchmark">Why It Works as a Benchmark</h2>
<p>The Müller-Brown potential has served as a computational chemistry benchmark for over four decades because of four key characteristics:</p>
<p><strong>Low dimensionality</strong>: As a 2D surface, it permits complete visualization of the landscape, clearly revealing why specific algorithms succeed or fail.</p>
<p><strong>Analytical form</strong>: Energy and gradient calculations cost virtually nothing, enabling exhaustive testing impossible with quantum mechanical surfaces.</p>
<p><strong>Non-trivial topology</strong>: The curved minimum energy path and shallow intermediate minimum challenge sophisticated methods while remaining manageable.</p>
<p><strong>Known ground truth</strong>: All minima and saddle points are precisely known, providing unambiguous success metrics.</p>
<h3 id="contrast-with-other-benchmarks">Contrast with Other Benchmarks</h3>
<p>The Müller-Brown potential probes different capabilities than other classic benchmarks. The Lennard-Jones potential, with its single energy minimum, is the standard benchmark for equilibrium properties; the Müller-Brown surface instead models reactive landscapes, where multiple minima and the barriers connecting them exercise algorithms designed to discover transition states and reaction paths.</p>
<h2 id="historical-applications">Historical Applications</h2>
<p>The potential has evolved with the field&rsquo;s changing focus:</p>
<p><strong>1980s-1990s</strong>: Testing path-finding methods like Nudged Elastic Band (NEB), which creates discrete representations of reaction pathways and optimizes them to find minimum energy paths.</p>
<p><strong>2000s-2010s</strong>: Validating Transition Path Sampling (TPS) methods that harvest statistical ensembles of reactive trajectories.</p>
<p><strong>2020s</strong>: Benchmarking machine learning models and generative approaches that learn to sample transition paths or approximate potential energy surfaces.</p>
<h2 id="modern-applications-in-machine-learning">Modern Applications in Machine Learning</h2>
<p>The rise of machine learning has given the Müller-Brown potential renewed purpose. Modern <strong>Machine Learning Interatomic Potentials (MLIPs)</strong> aim to bridge the gap between quantum mechanical accuracy and classical force field efficiency by training flexible models on expensive quantum chemistry data.</p>
<p>The Müller-Brown potential provides an ideal benchmarking solution: an exactly known potential energy surface that can generate unlimited, noise-free training data. This enables researchers to ask fundamental questions:</p>
<ul>
<li>How well does a given architecture learn complex, curved surfaces?</li>
<li>How many training points are needed for acceptable accuracy?</li>
<li>How does the model behave when extrapolating beyond training data?</li>
<li>Can it correctly identify minima and saddle points?</li>
</ul>
<p>The potential serves as a consistent benchmark for measuring the learning capacity of AI models.</p>
<h2 id="extensions-and-variants">Extensions and Variants</h2>
<h3 id="higher-dimensional-extensions">Higher-Dimensional Extensions</h3>
<p>The canonical Müller-Brown potential can be extended beyond two dimensions to create more challenging test cases:</p>
<p><strong>Harmonic constraints</strong>: Add quadratic wells in orthogonal dimensions while preserving the complex 2D landscape:</p>
<p>$$V_{5D}(x_1, x_2, x_3, x_4, x_5) = V(x_1, x_3) + \kappa(x_2^2 + x_4^2 + x_5^2)$$</p>
<p><strong>Collective variables (CVs)</strong>: Collective variables are low-dimensional coordinates that capture the most important degrees of freedom in a high-dimensional system. By defining CVs that mix multiple dimensions, the original surface can be embedded in higher-dimensional spaces. For instance, the active 2D coordinates $x$ and $y$ can be projected as linear combinations of $N$ arbitrary degrees of freedom ($q_i$):</p>
<p>$$ x = \sum_{i=1}^N w_{x,i} q_i \quad \text{and} \quad y = \sum_{i=1}^N w_{y,i} q_i $$</p>
<p>This constructs a complex, high-dimensional problem where an algorithm must learn to isolate the relevant active subspace (the CVs) before it can effectively navigate the landscape.</p>
<p>These extensions enable systematic testing of algorithm scaling with dimensionality while maintaining known ground truth in the active subspace.</p>
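<p>A sketch of the linear-CV embedding, written generically so that any 2D surface can be lifted; a simple double well stands in for the active surface here:</p>

```python
import numpy as np

def embed_linear_cvs(v2d, W):
    """Lift a 2D surface V(x, y) to N dimensions via x = W[0]·q, y = W[1]·q."""
    def vN(q):
        x, y = W @ q
        return v2d(x, y)
    return vN

# Demo with a stand-in 2D double well (the Müller-Brown surface works the same way)
v2d = lambda x, y: (x**2 - 1.0) ** 2 + y**2
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 10))     # random projection weights defining the two CVs
vN = embed_linear_cvs(v2d, W)
q = rng.normal(size=10)
assert np.isclose(vN(q), v2d(*(W @ q)))
```

<p>Only the two linear combinations of the coordinates matter; the remaining eight directions are inactive, which is exactly what a dimensionality-reduction or CV-discovery method must detect.</p>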
<h2 id="limitations">Limitations</h2>
<p>Despite its utility, the Müller-Brown potential has fundamental limitations as a proxy for physical systems:</p>
<ul>
<li><strong>Lack of Realistic Scaling</strong>: As a purely mathematical 2D/analytical model, it cannot directly simulate the complexities of high-dimensional scaling found in many-body atomic systems.</li>
<li><strong>No Entropic Effects</strong>: In real chemical systems, entropic contributions heavily influence the free-energy landscape. The Müller-Brown potential maps energy precisely but lacks the thermal/entropic complexity of solvent or macromolecular environments.</li>
<li><strong>Trivial Topology Contrasts</strong>: While non-trivial compared to single wells, its global topology remains simpler than proper ab initio potential energy surfaces, missing features like complex bifurcations, multi-state crossings, or non-adiabatic couplings.</li>
</ul>
<h2 id="implementation-considerations">Implementation Considerations</h2>
<p>Modern implementations typically focus on:</p>
<ul>
<li><strong>Vectorized calculations</strong> for batch processing</li>
<li><strong>Analytical derivatives</strong> for gradient-based methods</li>
<li><strong>JIT compilation</strong> for performance optimization</li>
<li><strong>Automatic differentiation</strong> compatibility for machine learning frameworks</li>
</ul>
<p>The analytical nature of the potential makes it ideal for testing both classical optimization methods and modern machine learning approaches.</p>
<h2 id="resources-and-visualizations">Resources and Visualizations</h2>
<ul>
<li><a href="/muller-brown-optimized">Interactive Müller-Brown Potential Energy Surface</a> - Local visualization tool</li>
<li><a href="https://www.wolframcloud.com/objects/demonstrations/TrajectoriesOnTheMullerBrownPotentialEnergySurface-source.nb">Müller-Brown Potential Visualization (Wolfram)</a> - External Wolfram demonstration</li>
<li><a href="/posts/muller-brown-in-pytorch/">Implementing the Müller-Brown Potential in PyTorch</a> - Detailed implementation guide with performance analysis</li>
</ul>
<h2 id="related-systems">Related Systems</h2>
<p>The Müller-Brown potential belongs to a family of analytical benchmark systems used in computational chemistry. Other notable examples include:</p>
<ul>
<li><strong>Lennard-Jones potential</strong>: Single-minimum benchmark for equilibrium properties</li>
<li><strong>Double-well potentials</strong>: Simple models for bistable systems</li>
<li><strong>Eckart barrier</strong>: One-dimensional tunneling benchmark</li>
<li><strong>Wolfe-Quapp potential</strong>: Higher-dimensional extension with valley-ridge inflection points</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>The Müller-Brown potential demonstrates how a well-designed benchmark can evolve with a field. Conceived under the computational constraints of the 1970s, when quantum chemical calculations were expensive, it pairs a topology that defeats naive linear-interpolation approaches with essentially instantaneous evaluation. Because of this, it remains a heavily analyzed benchmark system today.</p>
<p>It serves specific purposes in the machine learning era by providing a controlled environment for developing methods targeted at complex realistic molecular systems. Its evolution from a practical surrogate model to a machine learning benchmark demonstrates the continued relevance of foundational analytical test cases in computational science.</p>
]]></content:encoded></item><item><title>Embedded-Atom Method: Impurities and Defects in Metals</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method/</link><pubDate>Fri, 22 Aug 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method/</guid><description>Daw and Baskes's foundational 1984 paper introducing the Embedded-Atom Method (EAM), a many-body potential for metal simulations.</description><content:encoded><![CDATA[<h2 id="contribution-adaptive-many-body-potentials">Contribution: Adaptive Many-Body Potentials</h2>
<p>This is a foundational <strong>method paper</strong> that introduces a new class of semi-empirical, many-body interatomic potential: the <strong>Embedded-Atom Method (EAM)</strong>. It is designed for large-scale atomistic simulations of metallic systems, bridging the gap between computationally cheap (but physically limited) pair potentials and accurate (but expensive) quantum mechanical methods. The EAM achieves pair-potential speed while incorporating many-body physics inspired by density functional theory.</p>
<h2 id="motivation-the-geometric-limits-of-pair-potentials">Motivation: The Geometric Limits of Pair Potentials</h2>
<p>The authors sought to overcome the limitations of <strong>pair potentials</strong> (the dominant method of the time), which failed in three key areas:</p>
<ul>
<li><strong>Elastic Anisotropy:</strong> Pair potentials enforce the Cauchy relation ($C_{12} = C_{44}$), which is violated by most transition metals.</li>
<li><strong>Volume Ambiguity:</strong> Pair potentials require a volume-dependent energy term, making them impossible to use accurately on surfaces or cracks where local volume is undefined.</li>
<li><strong>Chemical Incompatibility:</strong> Pair potentials cannot model chemically active impurities like Hydrogen.</li>
</ul>
<p>First-principles quantum mechanical methods (e.g., band theory) are limited by basis-set size and periodicity requirements, making them impractical for the large systems (thousands of atoms) needed to study defects, surfaces, and mechanical properties.</p>
<p>The goal was to create a new model that bridges this gap in accuracy and computational cost.</p>
<h2 id="core-innovation-the-embedding-energy-function">Core Innovation: The Embedding Energy Function</h2>
<p>The EAM postulates that the energy of an atom is determined by the local electron density of its neighbors. The total energy is:</p>
<p>$$E_{tot} = \sum_{i} F_i(\rho_{h,i}) + \frac{1}{2}\sum_{i \neq j} \phi_{ij}(R_{ij})$$</p>
<ul>
<li><strong>$F_i(\rho_{h,i})$ (Embedding Energy):</strong> The energy required to embed atom $i$ into the background electron density $\rho$ provided by its neighbors. This term is non-linear and captures many-body effects.</li>
<li><strong>$\phi_{ij}$ (Pair Potential):</strong> A short-range electrostatic repulsion between cores.</li>
<li><strong>$\rho_{h,i}$ (Host Density):</strong> Approximated as a linear superposition of atomic densities: $\rho_{h,i} = \sum_{j \neq i} \rho^a_j(R_{ij})$.</li>
</ul>
<p>The key consequences of this formulation are:</p>
<ol>
<li><strong>Many-Body Bonding</strong>: Because $F$ is a non-linear function of the superposed neighbor densities, the model captures the many-body character of metallic bonding that no pair potential can reproduce.</li>
<li><strong>A Redefined Pair Potential</strong>: The two-body term $\phi_{ij}$ is retained, but it now models only the short-range electrostatic core-core repulsion.</li>
<li><strong>Elimination of the &ldquo;Volume&rdquo; Problem</strong>: The embedding energy depends on the local electron density, which is well-defined even at a surface or a crack tip, so the ambiguities of volume-dependent pair potentials never arise.</li>
<li><strong>Correct Elastic and Impurity Behavior</strong>: The non-linearity of $F(\rho)$ breaks the Cauchy relation $C_{12} = C_{44}$ and makes it possible to model chemically active impurities such as hydrogen.</li>
</ol>
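<p>To make the two-term energy concrete, here is a minimal numerical sketch. The functional forms (exponential atomic densities, a square-root embedding function, an exponentially screened effective charge) are toy choices for illustration only; the paper instead uses cubic splines fitted to bulk Ni/Pd data and Hartree-Fock atomic densities.</p>

```python
import numpy as np

def eam_energy(positions, A=1.0, z0=1.0, alpha=2.0, beta=2.0):
    """Toy EAM total energy: E_tot = sum_i F(rho_i) + (1/2) sum_{i != j} phi(R_ij).

    All functional forms and parameters are illustrative, not from the paper.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(r, np.inf)               # exclude self-interaction
    rho = np.exp(-beta * r).sum(axis=1)       # host density: superposition of neighbors
    F = -A * np.sqrt(rho)                     # toy embedding function F(rho)
    z = z0 * np.exp(-alpha * r)               # toy effective charge Z(r)
    phi = z * z / r                           # phi(r) = Z_i(r) Z_j(r) / r
    return F.sum() + 0.5 * phi.sum()

dimer = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(eam_energy(dimer))
```

For the dimer the sums collapse to a closed form ($-2A e^{-\beta/2} + z_0^2 e^{-2\alpha}$ at unit separation), which makes the sketch easy to verify by hand.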
<h2 id="experimental-design-robust-parameter-validation">Experimental Design: Robust Parameter Validation</h2>
<p>The authors validated EAM through a rigorous split between parameterization data and prediction tasks:</p>
<p><strong>Fitting Data (Bulk Properties Only):</strong></p>
<p>The model parameters were fitted exclusively to these experimental values for Ni and Pd:</p>
<ul>
<li>Lattice constant ($a_0$)</li>
<li>Elastic constants ($C_{11}, C_{12}, C_{44}$)</li>
<li>Sublimation energy ($E_s$)</li>
<li>Vacancy-formation energy ($E^F_{1V}$)</li>
<li>Hydrogen heat of solution (for fitting H parameters)</li>
</ul>
<p><strong>Validation Tests (No Further Fitting):</strong></p>
<p>The model was then evaluated on its ability to predict these properties without any additional parameter adjustments:</p>
<ul>
<li><strong>Surface Relaxations:</strong> Ni(110) surface contraction</li>
<li><strong>Surface Energy:</strong> Ni(100) surface energy</li>
<li><strong>Hydrogen Migration:</strong> H migration energy in Pd</li>
<li><strong>Fracture Mechanics:</strong> Hydrogen embrittlement in Ni slabs</li>
</ul>
<h2 id="results-extending-predictive-power-to-surfaces-and-defects">Results: Extending Predictive Power to Surfaces and Defects</h2>
<ol>
<li><strong>Many-Body Physics:</strong> The embedding function $F(\rho)$ successfully captures the volume-dependence of metallic cohesion, fixing the &ldquo;Cauchy discrepancy&rdquo; inherent in pair potentials.</li>
<li><strong>Surface Properties:</strong> A single set of functions, fitted only to bulk data, correctly reproduces surface relaxations within 0.1 Å of experiment across three faces (100), (110), and (111) for Ni. The Ni(100) surface energy (1550 erg/cm²) compares well with the measured crystal-vapor average (1725 erg/cm²).</li>
<li><strong>Hydrogen in Bulk:</strong> The method predicts H migration energy in Pd as 0.26 eV, matching experiment exactly. Hydride lattice expansions are also well reproduced: 4.5% for NiH (experiment: 5%) and 4% for PdH (experiment: 3.5% for PdH$_{0.6}$).</li>
<li><strong>Hydrogen on Surfaces:</strong> Calculated adsorption sites on all three Ni and Pd faces agree with experimentally determined sites. Adsorption energies on Ni surfaces are systematically about 0.25 eV too low, while on Pd surfaces the error is much smaller (about 0.05 eV too high on average).</li>
<li><strong>Fracture Mechanics:</strong> Static fracture calculations on Ni slabs demonstrate brittle fracture behavior and show that hydrogen lowers the fracture stress, providing a qualitative model of hydrogen embrittlement.</li>
</ol>
<h2 id="limitations">Limitations</h2>
<p>The authors acknowledge several limitations:</p>
<ul>
<li>The functions $F$ and $\phi$ are not uniquely determined by the empirical fitting procedure. The short-range pair potential (restricted to first neighbors in fcc metals) may not be the best choice for all crystal structures.</li>
<li>The choice of hydrogen embedding function (Puska et al. vs. Norskov&rsquo;s corrected function) remains undecided and may affect hydrogen binding energies.</li>
<li>The fracture calculations are static, and dynamical effects and plasticity play important roles in real fracture that are not captured.</li>
<li>The method has only been demonstrated for fcc metals (Ni and Pd). Extension to bcc metals and other crystal structures requires further investigation.</li>
</ul>
<h2 id="reproducibility-details">Reproducibility Details</h2>
<h3 id="algorithms">Algorithms</h3>
<p>To replicate the method, three specific algorithmic definitions are needed:</p>
<ol>
<li>
<p><strong>Atomic Density Construction</strong>: The electron density $\rho^a(r)$ is a weighted sum of Hartree-Fock $s$ and $d$ orbital densities (from Clementi &amp; Roetti tables), controlled by a parameter $N_s$ (the number of s-like electrons):
$$\rho^a(r) = N_s\rho_s^a(r) + (N-N_s)\rho_d^a(r)$$
For Ni, $N_s = 0.85$; for Pd, $N_s = 0.65$ (fitted to H solution heat).</p>
</li>
<li>
<p><strong>Pair Potential Form</strong>: The short-range pair interaction derives from an effective charge function $Z(r)$ to handle core repulsion:
$$\phi_{ij}(r) = \frac{Z_i(r)Z_j(r)}{r}$$
Splines for $Z(r)$ are provided in Table II.</p>
</li>
<li>
<p><strong>Analytic Forces</strong>: Because the embedding energy depends on neighbor densities, the force on atom $k$ is inherently many-body:
$$\vec{f}_{k} = -\sum_{j(\neq k)} \left(F'_{k}\,\rho'_{j} + F'_{j}\,\rho'_{k} + \phi'_{jk}\right)\hat{r}_{jk}$$
where primes denote derivatives evaluated at the separation $R_{jk}$ and $\hat{r}_{jk}$ is the unit vector along the bond.</p>
</li>
</ol>
<h3 id="models">Models</h3>
<p>The functions $F(\rho)$ and $\phi(r)$ are modeled using <strong>cubic splines</strong>, with parameters fitted to reproduce bulk experimental constants. The embedding function $F(\rho)$ is constrained to have a single minimum and to be linear at high densities, matching the qualitative form of the first-principles calculations by Puska et al. Energy minimization uses the <strong>conjugate gradients</strong> technique. The paper explicitly lists spline knots, coefficients, and cutoffs in Tables II and IV, making the method fully reproducible.</p>















<figure class="post-figure center ">
    <img src="/img/notes/chemistry/eam-embedding-effective-charge.webp"
         alt="Reproduction of Figures 1 and 2 from Daw &amp; Baskes (1984) showing the embedding energy and effective charge functions for Ni and Pd"
         title="Reproduction of Figures 1 and 2 from Daw &amp; Baskes (1984) showing the embedding energy and effective charge functions for Ni and Pd"
         
         
         loading="lazy"
         class="post-image">
    
    <figcaption class="post-caption"><strong>Left:</strong> Dimensionless embedding energy ($E/E_s$) vs. normalized electron density ($\rho/\bar{\rho}$). The minimum near $\rho/\bar{\rho} \approx 1.0$ drives metallic cohesion. <strong>Right:</strong> Normalized effective charge ($Z/Z_0$) vs. normalized distance ($R/a_0$). The charge drops to zero near $R/a_0 = 0.85$, ensuring short-range interactions. Reproduced from Table II spline knots.</figcaption>
    
</figure>

<h3 id="evaluation">Evaluation</h3>
<p><strong>Fitting Data (Used for Parameterization):</strong></p>
<p>Bulk experimental properties for Ni and Pd only:</p>
<ul>
<li>Lattice constant ($a_0$)</li>
<li>Elastic constants ($C_{11}, C_{12}, C_{44}$)</li>
<li>Sublimation energy ($E_s$)</li>
<li>Vacancy-formation energy ($E^F_{1V}$)</li>
<li>Hydrogen heat of solution (for fitting H parameters)</li>
</ul>
<p><strong>Validation Results (Predictions Without Further Fitting):</strong></p>
<table>
  <thead>
      <tr>
          <th>Property</th>
          <th>Predicted</th>
          <th>Experimental</th>
          <th>Agreement</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Ni(110) surface contraction</td>
          <td>-0.11 Å</td>
          <td>-0.06 to -0.10 Å</td>
          <td>Within 0.1 Å</td>
      </tr>
      <tr>
          <td>Ni(100) surface energy</td>
          <td>1550 erg/cm²</td>
          <td>1725 erg/cm² (avg.)</td>
          <td>Close</td>
      </tr>
      <tr>
          <td>H migration in Pd</td>
          <td>0.26 eV</td>
          <td>0.26 eV</td>
          <td>Exact</td>
      </tr>
      <tr>
          <td>NiH lattice expansion</td>
          <td>4.5%</td>
          <td>5%</td>
          <td>Close</td>
      </tr>
      <tr>
          <td>PdH lattice expansion</td>
          <td>4%</td>
          <td>3.5% (PdH$_{0.6}$)</td>
          <td>Close</td>
      </tr>
      <tr>
          <td>H adsorption sites (Ni, Pd)</td>
          <td>Correct on all faces</td>
          <td>Matches experiment</td>
          <td>Exact</td>
      </tr>
      <tr>
          <td>H embrittlement in Ni</td>
          <td>Qualitative model</td>
          <td>-</td>
          <td>Qualitative</td>
      </tr>
  </tbody>
</table>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Daw, M. S., &amp; Baskes, M. I. (1984). Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals. <em>Physical Review B</em>, 29(12), 6443-6453. <a href="https://doi.org/10.1103/PhysRevB.29.6443">https://doi.org/10.1103/PhysRevB.29.6443</a></p>
<p><strong>Publication</strong>: Physical Review B, 1984</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{daw1984embedded,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span>=<span style="color:#e6db74">{Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span>=<span style="color:#e6db74">{Daw, Murray S and Baskes, Mike I}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span>=<span style="color:#e6db74">{Physical Review B}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span>=<span style="color:#e6db74">{29}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">number</span>=<span style="color:#e6db74">{12}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span>=<span style="color:#e6db74">{6443--6453}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span>=<span style="color:#e6db74">{1984}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span>=<span style="color:#e6db74">{APS}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span>=<span style="color:#e6db74">{10.1103/PhysRevB.29.6443}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p><strong>Additional Resources</strong>:</p>
<ul>
<li><a href="/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-review-1993/">EAM Review (1993)</a></li>
<li><a href="/notes/chemistry/molecular-simulation/classical-methods/embedded-atom-method-voter-1994/">EAM User Guide (1994)</a></li>
<li><a href="https://www.ctcms.nist.gov/potentials/">NIST Interatomic Potentials Repository</a></li>
</ul>
]]></content:encoded></item><item><title>Umbrella Sampling: Monte Carlo Free-Energy Estimation</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/umbrella-sampling/</link><pubDate>Thu, 21 Aug 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/umbrella-sampling/</guid><description>Torrie and Valleau's 1977 paper introducing Umbrella Sampling, an importance sampling technique for Monte Carlo free-energy calculations.</description><content:encoded><![CDATA[<h2 id="a-methodological-shift-in-monte-carlo-simulations">A Methodological Shift in Monte Carlo Simulations</h2>
<p>This is a <strong>Method</strong> paper that introduces a novel computational technique for Monte Carlo simulations. It presents Umbrella Sampling, an importance sampling approach that uses non-physical distributions to calculate free energy differences in molecular systems.</p>
<h2 id="the-sampling-gap-in-phase-transitions">The Sampling Gap in Phase Transitions</h2>
<p>The paper addresses the failure of conventional Boltzmann-weighted Monte Carlo to estimate free energy differences.</p>
<ul>
<li><strong>The Problem</strong>: Free energy depends on the integral of configurations that are rare in the reference system. In a standard simulation, the relevant probability density $f_0(\Delta U^*)$ is too small to be sampled accurately by conventional Boltzmann-weighted Monte Carlo.</li>
<li><strong>Phase Transitions</strong>: Conventional &ldquo;thermodynamic integration&rdquo; fails near phase transitions because it requires a path of integration where ensemble averages can be reliably measured, which is difficult in unstable regions.</li>
</ul>
<h2 id="bridging-states-with-non-physical-distributions">Bridging States with Non-Physical Distributions</h2>
<p>The authors introduce a non-physical distribution $\pi(q^N)$ to bridge the gap between a reference system (0) and a system of interest (1).</p>
<ul>
<li><strong>Arbitrary Weights</strong>: They generate a Markov chain with a limiting distribution $\pi(q^N)$ that differs from the Boltzmann distribution of either system. This distribution is written as $\pi(q^N) = w(q^N) \exp(-U_0(q^N)/kT_0) / Z$, where $w(q^N) = W(\Delta U^*)$ is a weighting function chosen to favor configurations with values of $\Delta U^*$ important to the free-energy integral.</li>
<li><strong>Reweighting Formula</strong>: The unbiased average of any property $\theta$ is recovered via the ratio of biased averages:</li>
</ul>
<p>$$\langle\theta\rangle_{0}=\frac{\langle\theta/w\rangle_{w}}{\langle1/w\rangle_{w}}$$</p>
<ul>
<li><strong>Overlap</strong>: The method allows sampling a range of $\Delta U^*$ up to <strong>three times</strong> that of a conventional Monte Carlo experiment, enabling accurate determination of values of $f_0(\Delta U^*)$ as small as $10^{-8}$. If a single weight function cannot span the entire gap, additional overlapping umbrella-sampling experiments are carried out with different weighting functions exploring successively overlapping ranges of $\Delta U^*$.</li>
</ul>
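<p>The reweighting identity can be exercised in a toy one-dimensional setting: sample a biased distribution $\pi(x) \propto w(x)\,e^{-U(x)}$ with Metropolis moves, then recover the unbiased average of $x^2$. The potential, weight function, and sampler settings below are illustrative choices, not from the paper.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x):                       # harmonic potential with kT = 1, so <x^2>_0 = 1
    return 0.5 * x * x

def w(x):                       # broadening weight: pi(x) ~ exp(-x^2/4), twice as wide
    return np.exp(0.25 * x * x)

def log_pi(x):                  # log of the biased (non-physical) distribution
    return np.log(w(x)) - U(x)

# Metropolis sampling of the biased distribution pi
x, samples = 0.0, np.empty(200_000)
for i in range(samples.size):
    trial = x + rng.uniform(-2.0, 2.0)
    if np.log(rng.uniform()) < log_pi(trial) - log_pi(x):
        x = trial
    samples[i] = x

# Reweight: <theta>_0 = <theta/w>_w / <1/w>_w, here with theta = x^2
inv_w = 1.0 / w(samples)
estimate = (samples ** 2 * inv_w).mean() / inv_w.mean()
print(estimate)
```

Despite sampling a distribution twice as broad as the physical one, the reweighted estimate converges to the exact value $\langle x^2\rangle_0 = 1$.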
<h2 id="validation-on-lennard-jones-fluids">Validation on Lennard-Jones Fluids</h2>
<p>The authors validated Umbrella Sampling using Monte Carlo simulations of model fluids.</p>
<h3 id="experimental-setup">Experimental Setup</h3>
<ul>
<li><strong>System Specifications</strong>: The study used a <strong>Lennard-Jones (LJ)</strong> fluid and an <strong>inverse-12 &ldquo;soft-sphere&rdquo;</strong> fluid.</li>
<li><strong>System Size</strong>: Simulations were primarily performed with <strong>$N=32$ particles</strong>, with some validation runs at <strong>$N=108$ particles</strong> to check for size dependence.</li>
<li><strong>State Points</strong>: Calculations covered a wide range of densities ($N\sigma^3/V = 0.50$ to $0.85$) and temperatures ($kT/\epsilon = 0.7$ to $2.8$), including the gas-liquid coexistence region.</li>
</ul>
<h3 id="baselines">Baselines</h3>
<ul>
<li><strong>Baselines</strong>: Results were compared to thermodynamic integration data from <strong>Hansen</strong>, <strong>Levesque</strong>, and <strong>Verlet</strong>.</li>
<li><strong>Quantitative Success</strong>:
<ul>
<li><strong>Agreement</strong>: The free energy estimates agreed with pressure integration results to within statistical uncertainties (e.g., at $kT/\epsilon=1.35$, Umbrella Sampling gave -3.236 vs. Conventional -3.25).</li>
<li><strong>Precision</strong>: Free energy differences were obtained with high precision ($\pm 0.005 NkT$ for $N=108$).</li>
<li><strong>Efficiency</strong>: A single umbrella run could replace the &ldquo;numerous runs&rdquo; required for conventional $1/T$ integrations.</li>
</ul>
</li>
</ul>
<h2 id="temperature-scaling-via-reweighting">Temperature Scaling via Reweighting</h2>
<p>When the reference system has the same internal energy function as the system of interest (i.e., the same fluid at a different temperature), the free-energy expression simplifies to:</p>
<p>$$\frac{A(T)}{kT} = \frac{A(T_0)}{kT_0} - \ln \int f_0(U) \exp\left[-U\left(\frac{1}{kT} - \frac{1}{kT_0}\right)\right] dU$$</p>
<p>This is especially useful because a single determination of $f_0(U)$ over a wide energy range gives the free energy over a whole range of temperatures simultaneously. For 32 Lennard-Jones particles, only two umbrella-sampling experiments are needed to span the temperature range from the triple point ($kT/\epsilon = 0.7$) to twice the critical temperature ($kT/\epsilon = 2.8$). For 108 particles, four experiments suffice.</p>
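<p>The temperature-reweighting step can be sketched numerically. Here a hypothetical Gaussian $f_0(U)$ stands in for a sampled histogram (none of the numbers come from the paper), which also gives a closed-form answer to check the quadrature against.</p>

```python
import numpy as np

kT0 = 1.0
mu, sigma = -100.0, 5.0                      # assumed mean/spread of U at T0
U = np.linspace(mu - 8 * sigma, mu + 8 * sigma, 4001)
f0 = np.exp(-0.5 * ((U - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def shifted_free_energy(kT):
    """A(T)/kT - A(T0)/kT0 = -ln Int f0(U) exp[-U(1/kT - 1/kT0)] dU."""
    dbeta = 1.0 / kT - 1.0 / kT0
    integrand = f0 * np.exp(-U * dbeta)
    return -np.log(integrand.sum() * (U[1] - U[0]))   # simple quadrature

# For a Gaussian f0 the integral is exp(-mu*dbeta + sigma^2*dbeta^2/2),
# so the shift has the closed form mu*dbeta - sigma^2*dbeta^2/2.
print(shifted_free_energy(1.25))
```

A single tabulated $f_0(U)$ thus yields $A(T)$ at any nearby temperature by re-evaluating one integral, which is exactly why so few umbrella experiments cover the whole range.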
<h2 id="mapping-the-liquid-gas-free-energy-surface">Mapping the Liquid-Gas Free Energy Surface</h2>
<ul>
<li><strong>Methodological Utility</strong>: The method successfully mapped the free energy of the LJ fluid across the liquid-gas transition, a region where conventional methods face convergence problems.</li>
<li><strong>N-Dependence</strong>: Comparison between $N=32$ and $N=108$ showed no statistically significant size dependence for free energy differences, suggesting small systems are sufficient for these estimates.</li>
<li><strong>Comparison with Gosling-Singer Method</strong>: The paper contrasts its results with free energies derived from Gosling and Singer&rsquo;s entropy estimation technique, finding discrepancies as large as $0.4N\epsilon$ (a 20% error in the nonideal entropy), equivalent to overestimating the configurational integral of a 108-particle system by a factor of $10^{16}$.</li>
<li><strong>Generality</strong>: While demonstrated on energy ($U$), the authors note the weighting function $w$ can be any function of the coordinates, generalizing the technique beyond simple free energy differences.</li>
</ul>
<h2 id="reproducibility">Reproducibility</h2>
<p>This 1977 paper predates modern code-sharing practices, and no source code or data files are publicly available. However, the paper provides sufficient algorithmic detail for reimplementation:</p>
<ul>
<li><strong>Constructing $W$</strong>: The paper does not derive $W$ analytically. It uses a <strong>trial-and-error procedure</strong>: start with a short Boltzmann-weighted experiment, then broaden the distribution in stages through short test runs, adjusting weights to flatten the probability density $f_w(\Delta U^*)$. The paper acknowledges this requires &ldquo;interaction between the trial computer results and human judgment.&rdquo;</li>
<li><strong>Specific Weights</strong>: Table I provides the exact numerical weights used for the 32-particle soft-sphere experiment at $N\sigma^3/V = 0.85$, $kT/\epsilon = 2.74$, with values spanning from $W=1{,}500{,}000$ at the lowest energies down to $W=1.0$ at the center and back up to $W=16.0$ at the highest energies.</li>
<li><strong>Potentials</strong>: The Lennard-Jones and inverse-twelve potentials are fully specified (Eqs. 8 and 9).</li>
<li><strong>State Points</strong>: Densities and temperatures are enumerated in Tables II and III.</li>
<li><strong>Block Averaging</strong>: Errors were estimated by treating sequences of $m$ steps as independent samples, where $m$ is determined by increasing block size until no systematic trends can be detected in either the average or the standard deviation of the mean.</li>
</ul>
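<p>The block-averaging procedure can be sketched as follows, using an AR(1) series as a stand-in for correlated Monte Carlo output (all parameters are illustrative): the estimated standard error of the mean grows with block size until the blocks are long enough to be effectively independent, then plateaus.</p>

```python
import numpy as np

def block_sem(x, block):
    """Standard error of the mean from contiguous blocks of length `block`."""
    nb = len(x) // block
    means = x[: nb * block].reshape(nb, block).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(nb)

# Synthetic correlated series: AR(1) with strong step-to-step correlation
rng = np.random.default_rng(1)
phi = 0.9
noise = rng.standard_normal(200_000)
series = np.empty_like(noise)
series[0] = noise[0]
for t in range(1, len(noise)):
    series[t] = phi * series[t - 1] + noise[t]

for block in (1, 10, 100, 1000):
    print(block, block_sem(series, block))
```

The naive (block = 1) error badly underestimates the true uncertainty; the plateau value at large blocks is the honest estimate, mirroring the paper's criterion of growing $m$ until no systematic trend remains.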
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Torrie, G. M., &amp; Valleau, J. P. (1977). Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella sampling. <em>Journal of Computational Physics</em>, 23(2), 187-199. <a href="https://doi.org/10.1016/0021-9991(77)90121-8">https://doi.org/10.1016/0021-9991(77)90121-8</a></p>
<p><strong>Publication</strong>: Journal of Computational Physics, 1977</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{torrie1977nonphysical,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span>=<span style="color:#e6db74">{Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella sampling}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span>=<span style="color:#e6db74">{Torrie, Glenn M and Valleau, John P}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span>=<span style="color:#e6db74">{Journal of Computational Physics}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span>=<span style="color:#e6db74">{23}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">number</span>=<span style="color:#e6db74">{2}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span>=<span style="color:#e6db74">{187--199}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span>=<span style="color:#e6db74">{1977}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span>=<span style="color:#e6db74">{Elsevier}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">doi</span>=<span style="color:#e6db74">{10.1016/0021-9991(77)90121-8}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div>]]></content:encoded></item><item><title>Lennard-Jones on Adsorption and Diffusion on Surfaces</title><link>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/processes-of-adsorption/</link><pubDate>Sun, 17 Aug 2025 00:00:00 +0000</pubDate><guid>https://hunterheidenreich.com/notes/chemistry/molecular-simulation/classical-methods/processes-of-adsorption/</guid><description>Lennard-Jones's 1932 foundational paper introducing potential energy surface models to unify physical and chemical adsorption.</description><content:encoded><![CDATA[<h2 id="the-theoretical-foundation-of-adsorption-and-diffusion">The Theoretical Foundation of Adsorption and Diffusion</h2>
<p>This paper represents a foundational <strong>Theory</strong> contribution with dual elements of <strong>Systematization</strong>. It derives physical laws for adsorption potentials (Section 2) and diffusion kinetics (Section 4) from first principles, validating them against external experimental data (Ward, Benton). It bridges <strong>electronic structure theory</strong> (potential curves) and <strong>statistical mechanics</strong> (diffusion rates). It provides a unifying theoretical framework to explain a range of experimental observations.</p>
<h2 id="reconciling-physisorption-and-chemisorption">Reconciling Physisorption and Chemisorption</h2>
<p>The primary motivation was to reconcile conflicting experimental evidence regarding the nature of gas-solid interactions. At the time, it was observed that the same gas and solid could interact weakly at low temperatures (consistent with van der Waals forces) but exhibit strong, chemical-like bonding at higher temperatures, a process requiring significant activation energy. The paper seeks to provide a single, coherent model that can explain both &ldquo;physical adsorption&rdquo; (physisorption) and &ldquo;activated&rdquo; or &ldquo;chemical adsorption&rdquo; (chemisorption) and the transition between them.</p>
<h2 id="quantum-mechanical-potential-energy-surfaces-for-adsorption">Quantum Mechanical Potential Energy Surfaces for Adsorption</h2>
<p>The core novelty is the application of quantum mechanical potential energy surfaces to the problem of surface adsorption. The key conceptual breakthroughs are:</p>
<ol>
<li>
<p><strong>Dual Potential Energy Curves</strong>: The paper proposes that the state of the system must be described by at least two distinct potential energy curves as a function of the distance from the surface:</p>
<ul>
<li>One curve represents the interaction of the intact molecule with the surface (e.g., H₂ with a metal). This corresponds to weak, long-range van der Waals forces.</li>
<li>A second curve represents the interaction of the dissociated constituent atoms with the surface (e.g., 2H atoms with the metal). This corresponds to strong, short-range chemical bonds.</li>
</ul>
</li>
<li>
<p><strong>Activated Adsorption via Curve Crossing</strong>: The transition from the molecular (physisorbed) state to the atomic (chemisorbed) state occurs at the intersection of these two potential energy curves. For a molecule to dissociate and chemisorb, it must possess sufficient energy to reach this crossing point. This energy is identified as the <strong>energy of activation</strong>, which had been observed experimentally.</p>
</li>
<li>
<p><strong>Unified Model</strong>: This model unifies physisorption and chemisorption into a single continuous process. A molecule approaching the surface is first trapped in the shallow potential well of the physisorption curve. If it acquires enough thermal energy to overcome the activation barrier, it can transition to the much deeper potential well of the chemisorption state. This provides a clear physical picture for temperature-dependent adsorption phenomena.</p>
</li>
<li>
<p><strong>Quantum Mechanical Basis for Cohesion</strong>: To explain the nature of the chemisorption bond itself, Lennard-Jones draws on the then-recent quantum theory of metals (Sommerfeld, Bloch). In a metal, electrons are not bound to individual atoms but instead occupy shared energy states (bands) spread across the crystal. When an atom approaches the surface, local energy levels form in the gap between the bulk bands, creating sites where bonding can occur. The adsorption bond arises from the interaction between the valency electron of the approaching atom and conduction electrons of the metal, forming a closed shell analogous to a homopolar bond.</p>
</li>
</ol>
<h2 id="validating-theory-against-experimental-gas-solid-interactions">Validating Theory Against Experimental Gas-Solid Interactions</h2>
<p>This is a theoretical paper with no original experiments performed by the author. However, Lennard-Jones validates his theoretical framework against existing experimental data from other researchers:</p>
<ul>
<li><strong>Ward&rsquo;s data</strong>: Hydrogen absorption on copper, used to validate the square root time law for slow sorption kinetics (§4)</li>
<li><strong>Activated adsorption experiments</strong>: Benton and White (hydrogen on nickel), Taylor and Williamson, and Taylor and McKinney all provided isobar data showing temperature-dependent transitions between adsorption types (§3). Garner and Kingman documented three distinct adsorption regimes at different temperatures.</li>
<li><strong>van der Waals constant data</strong>: Used existing measurements of diamagnetic susceptibility to calculate predicted heats of adsorption (e.g., argon on copper yielding approximately 6000 cal/gram atom, nitrogen roughly 2500 cal/gram mol, hydrogen roughly 1300 cal/gram mol)</li>
<li><strong>KCl crystal calculations</strong>: Computed the full attractive potential field of argon above a KCl crystal lattice, accounting for the discrete ionic structure to produce detailed potential energy curves at different surface positions (§2)</li>
</ul>
<p>The validation approach involves deriving theoretical predictions from first principles and showing they match the functional form and magnitude of independently measured experimental results.</p>
<h2 id="the-lennard-jones-diagram-and-activated-adsorption">The Lennard-Jones Diagram and Activated Adsorption</h2>
<p><strong>Key Outcomes</strong>:</p>
<ul>
<li>The paper introduced the now-famous Lennard-Jones diagram for surface interactions, plotting potential energy versus distance from the surface for both molecular and dissociated atomic species. This graphical model became a cornerstone of surface science.</li>
<li>Derived the square root time law ($S \propto \sqrt{t}$) for slow sorption kinetics, validated against Ward&rsquo;s experimental data.</li>
<li>Established quantitative connection between adsorption potentials and measurable atomic properties (diamagnetic susceptibility).</li>
</ul>
<p><strong>Conclusions</strong>:</p>
<ul>
<li>The nature of adsorption is determined by the interplay between two distinct potential states (molecular and atomic).</li>
<li>&ldquo;Activated adsorption&rdquo; is the process of overcoming an energy barrier to transition from a physically adsorbed molecular state to a chemically adsorbed atomic state.</li>
<li>The model predicts that the specific geometry of the surface (i.e., the lattice spacing) and the orientation of the approaching molecule are critical, as they influence the shape of the potential energy surfaces and thus the magnitude of the activation energy.</li>
<li>The reverse process (recombination of atoms and desorption of a molecule) also requires activation energy to move from the chemisorbed state back to the molecular state.</li>
<li>This entire mechanism is proposed as a fundamental factor in heterogeneous <strong>catalysis</strong>, where the surface acts to lower the activation energy for molecular dissociation, facilitating chemical reactions.</li>
</ul>
<p><strong>Limitations</strong>:</p>
<ul>
<li>The initial &ldquo;method of images&rdquo; derivation assumes a perfectly continuous conducting surface, an approximation that breaks down at the atomic orbital level close to the surface.</li>
<li>While Lennard-Jones uses one-dimensional calculations to estimate initial potential well depths, he later qualitatively extends this to 3D &ldquo;contour tunnels&rdquo; to explain surface migration. However, these early geometric approximations lack the many-body, multi-dimensional complexity natively handled by modern Density Functional Theory (DFT) simulations.</li>
</ul>
<hr>
<h2 id="mathematical-derivations">Mathematical Derivations</h2>
<h3 id="van-der-waals-calculation-section-2">Van der Waals Calculation (Section 2)</h3>
<p>The paper derives the attractive force between a neutral atom and a metal surface using the <strong>classical method of electrical images</strong>. The key steps are:</p>
<ol>
<li><strong>Method of Images</strong>: Lennard-Jones models the metal as a continuum of perfectly mobile electric fluid (a perfectly polarisable system). When a neutral atom approaches, its instantaneous dipole moment induces image charges in the metal surface.</li>
</ol>















<figure class="post-figure center ">
    <img src="/img/notes/method-of-images-atom-surface.webp"
         alt="Diagram showing an atom with nucleus (&#43;Ne) and electrons (-e) at distance R from a conducting surface, with its electrical image reflected on the opposite side"
         title="Diagram showing an atom with nucleus (&#43;Ne) and electrons (-e) at distance R from a conducting surface, with its electrical image reflected on the opposite side"
         
         
         loading="lazy"
         class="post-image">
    
    <figcaption class="post-caption">An atom and its electrical image in a conducting surface. The nucleus (+Ne) and electrons create mirror charges across the metal plane.</figcaption>
    
</figure>

<ol start="2">
<li><strong>The Interaction Potential</strong>: The resulting potential energy $W$ of an atom at distance $R$ from the metal surface is:</li>
</ol>
<p>$$W = -\frac{e^2 \overline{r^2}}{6R^3}$$</p>
<p>where $\overline{r^2}$ is the mean square distance of electrons from the nucleus.</p>
<ol start="3">
<li><strong>Connection to Measurable Properties</strong>: This theoretical potential can be calculated using <strong>diamagnetic susceptibility</strong> ($\chi$). The interaction simplifies to:</li>
</ol>
<p>$$W = \mu R^{-3}$$</p>
<p>where $\mu = mc^2\chi/L$, with $m$ the electron mass, $c$ the speed of light, $\chi$ the diamagnetic susceptibility, and $L$ Loschmidt&rsquo;s number ($6.06 \times 10^{23}$). This connects the adsorption potential to measurable magnetic properties of the atom.</p>
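<p>The relation $\mu = mc^2\chi/L$ can be sketched numerically. The susceptibility value below is illustrative, not a number from the paper; only the $R^{-3}$ scaling and the sign are the point.</p>

```python
# Sketch: image-potential coefficient mu = m c^2 chi / L in CGS units,
# giving W(R) = -mu / R^3. The susceptibility used here is illustrative.

M_ELECTRON = 9.11e-28   # electron mass, g
C_LIGHT = 3.0e10        # speed of light, cm/s
L_NUMBER = 6.06e23      # Loschmidt's number, as used in the paper

def image_potential(chi, R_cm):
    """Attractive image potential W = -mu / R^3 with mu = m c^2 |chi| / L."""
    mu = M_ELECTRON * C_LIGHT**2 * abs(chi) / L_NUMBER
    return -mu / R_cm**3  # erg per atom

# Hypothetical molar diamagnetic susceptibility (CGS) for illustration:
chi = 19.6e-6                        # cm^3 per gram-atom
w_near = image_potential(chi, 3.0e-8)  # at R = 3 Angstrom
w_far = image_potential(chi, 6.0e-8)   # doubling R weakens W by a factor of 8
```

<p>The attraction is always negative (binding) and falls off as $R^{-3}$, much more slowly than the $R^{-6}$ law between two isolated atoms.</p>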
<ol start="4">
<li><strong>Repulsive Forces and Equilibrium</strong>: By assuming repulsive forces account for approximately 40% of the potential at equilibrium, Lennard-Jones estimates heats of adsorption. For argon on copper, this yields approximately 6000 cal per gram-atom; similar calculations give roughly 2500 cal per gram-molecule for nitrogen on copper and 1300 cal per gram-molecule for hydrogen.</li>
</ol>
<hr>
<h2 id="kinetic-theory-of-slow-sorption-section-4">Kinetic Theory of Slow Sorption (Section 4)</h2>
<p>The paper extends beyond surface phenomena to model how gas <em>enters</em> the bulk solid (absorption). This section is critical for understanding time-dependent adsorption kinetics.</p>
<h3 id="the-cracks-hypothesis">The &ldquo;Cracks&rdquo; Hypothesis</h3>
<p>Lennard-Jones proposes that &ldquo;slow sorption&rdquo; is <strong>lateral diffusion along surface cracks</strong> (fissures between microcrystal boundaries) in the solid. The outer surface presents not a uniform plane but a network of narrow, deep crevasses where gas can penetrate. This reframes the problem: the rate-limiting step is diffusion along these crack walls, explaining why sorption rates differ from predictions based on bulk diffusion coefficients.</p>
<h3 id="the-diffusion-equation">The Diffusion Equation</h3>
<p>The problem is formulated using Fick&rsquo;s second law:</p>
<p>$$\frac{\partial n}{\partial t} = D \frac{\partial^{2}n}{\partial x^{2}}$$</p>
<p>where $n$ is the concentration of adsorbed atoms, $t$ is time, $D$ is the diffusion coefficient, and $x$ is the position along the crack.</p>
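<p>This boundary-value problem (crack mouth held at the gas concentration $n_0$, interior initially empty) can be integrated with explicit finite differences. A minimal sketch, with illustrative grid sizes and $D$ not taken from the paper:</p>

```python
import numpy as np

# Sketch: explicit finite differences for dn/dt = D d2n/dx2 along a crack,
# mouth held at n0 (in contact with the gas), far end initially empty.
# D, n0, and the grid are illustrative placeholders.

D, n0 = 1.0, 1.0
Lx, Nx = 50.0, 500
dx = Lx / Nx
dt = 0.4 * dx**2 / D           # satisfies the stability bound dt <= dx^2 / (2D)

n = np.zeros(Nx + 1)
n[0] = n0                      # crack mouth: constant reservoir concentration

for _ in range(2000):
    lap = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2   # discrete second derivative
    n[1:-1] += dt * D * lap
    n[0], n[-1] = n0, 0.0      # re-impose boundary conditions

# Total gas absorbed so far (trapezoid rule over the profile)
absorbed = dx * (n.sum() - 0.5 * (n[0] + n[-1]))
```

<p>For times short enough that the far end stays empty, the computed uptake tracks the semi-infinite-crack result $S = 2n_0\sqrt{Dt/\pi}$ derived below in the paper.</p>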
<h3 id="derivation-of-the-diffusion-coefficient">Derivation of the Diffusion Coefficient</h3>
<p>The diffusion coefficient is derived from kinetic theory:</p>
<p>$$D = \frac{\bar{c}^2 \tau^2}{2\tau^*}$$</p>
<p>where:</p>
<ul>
<li>$\bar{c}$ is the mean lateral velocity of mobile atoms parallel to the surface</li>
<li>$\tau$ is the time an atom spends in the mobile (activated) state</li>
<li>$\tau^*$ is the interval between activation events</li>
</ul>
<p>Atoms are &ldquo;activated&rdquo; to a mobile state with energy $E_0$, after which they can migrate along the surface.</p>
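<p>The expression for $D$ is a one-dimensional random walk in disguise: each activation produces a flight of length $\bar{c}\tau$, and flights recur every $\tau^*$. A small sketch with illustrative numbers:</p>

```python
# Sketch of the kinetic-theory diffusion coefficient D = c^2 tau^2 / (2 tau*):
# a random walk whose step length is l = c * tau, repeated every tau*.
# All input values are illustrative, not fitted quantities from the paper.

def diffusion_coefficient(c_bar, tau, tau_star):
    """D = (c*tau)^2 / (2*tau*) for a 1D walk of step l = c*tau."""
    step = c_bar * tau
    return step**2 / (2.0 * tau_star)

D1 = diffusion_coefficient(1.0e4, 1.0e-12, 1.0e-9)  # cm/s, s, s
D2 = diffusion_coefficient(1.0e4, 2.0e-12, 1.0e-9)  # doubling tau gives 4x D
```

<p>Because $\tau$ enters squared while $\tau^*$ enters linearly, longer mobile flights speed up diffusion far more than more frequent activations do.</p>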
<h3 id="the-square-root-law">The Square Root Law</h3>
<p>Solving the diffusion equation for a semi-infinite crack yields the total amount of gas absorbed $S$ as a function of time:</p>
<p>$$S = 2n_0 \sqrt{\frac{Dt}{\pi}}$$</p>
<p>This predicts that <strong>absorption scales with the square root of time</strong>:</p>
<p>$$S \propto \sqrt{t}$$</p>
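<p>The closed form is trivial to evaluate; the distinctive signature is that quadrupling the time doubles the uptake. A sketch with placeholder values for $n_0$ and $D$:</p>

```python
import math

# Sketch: total uptake S(t) = 2 n0 sqrt(D t / pi) for a semi-infinite crack.
# n0 and D below are illustrative placeholders.

def uptake(n0, D, t):
    """Gas absorbed per unit crack width after time t."""
    return 2.0 * n0 * math.sqrt(D * t / math.pi)

s1 = uptake(1.0, 1.0e-5, 100.0)
s4 = uptake(1.0, 1.0e-5, 400.0)  # 4x the time -> 2x the uptake
```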
<h3 id="experimental-validation">Experimental Validation</h3>
<p>Lennard-Jones validates this derivation by re-analyzing Ward&rsquo;s experimental data on the Copper/Hydrogen system. Plotting the absorbed quantity against $\sqrt{t}$ produces straight lines, confirming the theoretical prediction. From the slope of the $\log_{10}(S^2/q^2t)$ vs. $1/T$ plot, Ward determined an activation energy of 14,100 cal per gram-molecule for the surface diffusion process.</p>
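<p>The slope analysis can be sketched with synthetic data. The points below are generated from an assumed Arrhenius law with the activation energy quoted above; they are not Ward&rsquo;s measurements, and the prefactor is arbitrary.</p>

```python
import math

# Sketch: recovering an activation energy from the slope of
# log10(rate) vs 1/T. Data are synthetic, generated from a known E;
# the prefactor (7.0) and temperatures are arbitrary illustrations.

R_CAL = 1.987       # gas constant, cal / (mol K)
E_TRUE = 14100.0    # cal per gram-molecule, the value quoted above

temps = [500.0, 550.0, 600.0, 650.0]
xs = [1.0 / T for T in temps]
# Arrhenius form: log10(rate) = log10(A) - E / (2.303 R T)
ys = [math.log10(7.0) - E_TRUE / (2.303 * R_CAL * T) for T in temps]

# Least-squares slope of y against x
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
E_fit = -slope * 2.303 * R_CAL   # activation energy recovered from the slope
```

<p>Because the synthetic points lie exactly on the Arrhenius line, the fitted slope returns the input energy to machine precision; real data scatter would dominate the uncertainty.</p>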
<hr>
<h2 id="surface-topography-and-3d-contours">Surface Topography and 3D Contours</h2>
<p>The derivations above treat adsorption as a one-dimensional process (distance from the surface). The paper explicitly expands this to three dimensions to explain surface migration.</p>
<h3 id="potential-tunnels">Potential &ldquo;Tunnels&rdquo;</h3>
<p>Lennard-Jones models the surface potential as <strong>3D contour surfaces</strong> resembling &ldquo;underground caverns&rdquo; or tunnels. The potential energy landscape above a crystalline surface has periodic minima and saddle points.</p>
<h3 id="surface-migration">Surface Migration</h3>
<p>Atoms migrate along &ldquo;tunnels&rdquo; of low potential energy between surface atoms. The activation energy for surface diffusion corresponds to the barrier height between adjacent potential wells on the surface. This geometric picture explains:</p>
<ul>
<li>Why certain crystallographic orientations are more reactive</li>
<li>The temperature dependence of surface diffusion rates</li>
<li>The role of surface defects in catalysis</li>
</ul>
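<p>The temperature dependence implied by this barrier picture can be sketched with a Boltzmann hopping rate between adjacent wells. The barrier height and attempt frequency below are illustrative, not parameters from the paper:</p>

```python
import math

# Sketch: Arrhenius-style hopping between neighboring surface wells.
# The barrier height and attempt frequency are illustrative values.

R_CAL = 1.987  # gas constant, cal / (mol K)

def hop_rate(nu, E_barrier, T):
    """Rate of crossing the saddle point between adjacent potential wells."""
    return nu * math.exp(-E_barrier / (R_CAL * T))

k_300 = hop_rate(1.0e13, 5000.0, 300.0)
k_600 = hop_rate(1.0e13, 5000.0, 600.0)  # migration speeds up steeply with T
```

<p>Doubling the temperature here raises the hop rate by well over an order of magnitude, which is the qualitative temperature dependence of surface diffusion the contour picture explains.</p>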
<h2 id="reproducibility">Reproducibility</h2>
<p>This is a 1932 theoretical paper with no associated code, datasets, or models. The mathematical derivations are fully presented in the text and can be followed from first principles. The experimental data referenced (Ward&rsquo;s copper/hydrogen measurements, Benton and White&rsquo;s nickel/hydrogen isobars) are cited from independently published sources. No computational artifacts exist.</p>
<ul>
<li><strong>Status</strong>: Closed (theoretical paper, no reproducibility artifacts)</li>
<li><strong>Hardware</strong>: N/A (analytical derivations only)</li>
</ul>
<h2 id="paper-information">Paper Information</h2>
<p><strong>Citation</strong>: Lennard-Jones, J. E. (1932). Processes of Adsorption and Diffusion on Solid Surfaces. <em>Transactions of the Faraday Society</em>, 28, 333-359. <a href="https://doi.org/10.1039/tf9322800333">https://doi.org/10.1039/tf9322800333</a></p>
<p><strong>Publication</strong>: Transactions of the Faraday Society, 1932</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bibtex" data-lang="bibtex"><span style="display:flex;"><span><span style="color:#a6e22e">@article</span>{lennardjones1932processes,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">title</span>=<span style="color:#e6db74">{Processes of adsorption and diffusion on solid surfaces}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">author</span>=<span style="color:#e6db74">{Lennard-Jones, John Edward}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">journal</span>=<span style="color:#e6db74">{Transactions of the Faraday Society}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">volume</span>=<span style="color:#e6db74">{28}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pages</span>=<span style="color:#e6db74">{333--359}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">year</span>=<span style="color:#e6db74">{1932}</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">publisher</span>=<span style="color:#e6db74">{Royal Society of Chemistry}</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div>]]></content:encoded></item></channel></rss>