Production-grade automation for LAMMPS molecular dynamics simulations with robust error handling, state management, and analysis pipelines.
- Type-Safe Configuration: Pydantic models ensure type safety and validation
- Pre-built Equilibration Protocols: Annealing, Compression (Larsen's 21-step), and Additional protocols with automatic restart linking
- Convergence Analysis: Automated energy, density, temperature, and pressure convergence checking with block averaging
- Structural Analysis: Freud integration for RDF, structure factor, bond order, and clustering analysis
- Flexible Execution: Support for local and SLURM schedulers
- Error Detection: Automated log parsing and error classification
- Auto-Recovery: Intelligent restart from checkpoints for timeout errors
- Provenance Tracking: Immutable manifests ensure reproducibility
- Integrated Analysis: MDAnalysis + Freud for trajectory analysis
- Workflow Management: Signac integration for multi-stage protocol orchestration
# Clone repository
git clone https://github.com/yourusername/automd.git
cd automd
# Install package
pip install -e .

Requirements:

- Python 3.10+
- Pydantic 2.0+
- Jinja2 3.1+
- Signac 2023.0+
- MDAnalysis 3.0+
- Freud 3.0+
- NumPy 1.24+
For multi-stage equilibration protocols, see Section 1.5: Using Preset Protocols.
from automd import LammpsConfig
# Create configuration
config = LammpsConfig(
    units="real",
    atom_style="full",
    timestep=1.0,        # fs
    run_steps=100000,
    thermo_freq=1000,
    ensemble="nvt",
    temp_start=300.0,
    temp_end=300.0,
    tdamp=100.0,
    pair_style="lj/cut 10.0",
    pair_coeffs=["* * 1.0 1.0 2.5"],
    dump_freq=1000,
    data_file="system.data",
)

AutoMD provides pre-configured equilibration protocols for common workflows. These protocols handle multi-stage simulations with automatic restart file linking and convergence checking.
Quick Example: 3-Stage Annealing
from pathlib import Path
from automd.workflow import WorkflowProject
from automd.presets import run_annealing_protocol
from automd.engine.scheduler import LocalScheduler
# Setup project
scheduler = LocalScheduler(lmp_binary="lmp")
project = WorkflowProject("workspace/annealing_demo", scheduler)
# Run annealing protocol (3 stages: packing → annealing → sampling)
results = run_annealing_protocol(
    project=project,
    base_dir=Path("input"),      # Directory containing system.data
    target_temp=300.0,           # Final temperature (K)
    max_temp=600.0,              # Annealing temperature (K)
    target_press=1.0,            # Target pressure (atm)
    monitor=True,                # Wait for completion
    check_convergence=True,      # Check final stage convergence
)

# Check results
if results["convergence"]["converged"]:
    print("✓ Annealing converged!")
else:
    print("✗ Convergence check failed")

Available Protocols
| Protocol | Stages | Description |
|---|---|---|
| Annealing | 3 | Temperature ramp from max_temp to target_temp for enhanced sampling |
| Compression | 23 | Larsen's method: 10× compression + hold + 10× decompression for dense systems |
| Additional | 1 | Flexible single-stage equilibration (NPT or NVT) |
For detailed protocol documentation, see Section 6: Preset Protocols.
For complete working examples, see the examples/ directory.
from pathlib import Path
from automd.engine.scheduler import LocalScheduler
from automd.engine.runner import JobRunner
# Setup scheduler and runner
scheduler = LocalScheduler(lmp_binary="lmp_serial")
runner = JobRunner(scheduler)
# Initialize job
job_dir = Path("simulation/job_001")
manifest = runner.initialize_job(config, job_dir)
# Submit and monitor
job_id = runner.submit_job(job_dir, manifest)
final_state = runner.monitor_job(job_dir)
print(f"Job finished: {final_state}")

from automd.engine.scheduler import SlurmScheduler
# Setup SLURM scheduler
scheduler = SlurmScheduler(
    partition="gpu",
    nodes=2,
    ntasks=32,
    walltime="24:00:00",
    modules=["lammps/2023", "cuda"],
)
runner = JobRunner(scheduler)

# Same initialization and submission as local
manifest = runner.initialize_job(config, job_dir)
job_id = runner.submit_job(job_dir, manifest)

from automd.analysis.trajectory import analyze_trajectory
from automd.analysis.order import run_structural_analysis
# Load universe
import MDAnalysis as mda
u = mda.Universe("system.data", "trajectory.dump")
# Run standard analysis
results = analyze_trajectory(
    job_dir,
    trajectory_file="trajectory.dump",
    analyses=["rdf", "msd", "density"],
)

# Run structural analysis with Freud
structure_results = run_structural_analysis(
    u,
    analyses=["rdf", "structure_factor", "clusters"],
)

from automd.workflow import WorkflowProject
# Initialize project
project = WorkflowProject(
    workspace_root="workspace",
    scheduler=SlurmScheduler(partition="normal"),
)

# Create job
job = project.create_job(
    config=config,
    job_params={"temperature": 300, "density": 0.8},
    type_map={1: "C", 2: "H", 3: "O"},
    mass_map={1: 12.01, 2: 1.008, 3: 15.999},
)
# Run workflow
project.run_simulation(job)
project.run_analysis(job, submit_as_job=True)
# Check status
stats = project.get_statistics()
print(f"Job statistics: {stats}")

AutoMD provides pre-configured equilibration protocols for common molecular dynamics workflows. These protocols automate multi-stage simulations with automatic restart file linking, convergence checking, and error recovery.
Preset protocols offer a high-level interface for complex multi-stage simulations:
- Automatic stage management: Each stage creates a Signac job with proper dependencies
- Restart file linking: Output from previous stage becomes input for next stage
- Convergence checking: Optional automated convergence analysis on final stage
- Error recovery: Automatic restart from checkpoints on timeout
- Flexible configuration: Type-safe Pydantic config models
Protocol Architecture
┌─────────────────────────────────────────────────────────────┐
│ EquilibrationProtocol (Base) │
│ • build_stage_configs() - Abstract method │
│ • run_protocol() - Execute all stages │
│ • get_protocol_jobs() - Query stage jobs │
└────────────────┬────────────────────────────────────────────┘
│
┌────────────┼────────────┬────────────────────┐
│ │ │ │
┌───▼──────┐ ┌───▼───────┐ ┌───▼──────┐ ┌──────────────┐
│Annealing │ │Compression│ │Additional│ │ Your Custom  │
│Protocol  │ │ Protocol  │ │Protocol  │ │   Protocol   │
│(3 stages)│ │(23 stages)│ │(1 stage) │ │              │
└──────────┘ └───────────┘ └──────────┘ └──────────────┘
When to Use Presets
Use preset protocols when:
- Running standard equilibration workflows
- You need automatic restart linking between stages
- You want built-in convergence checking
- Reproducing published protocols (e.g., Larsen's compression)
Use basic LammpsConfig when:
- Running single-stage production simulations
- You need complete control over every LAMMPS command
- Your workflow doesn't fit the preset structure
Purpose: Temperature-annealing equilibration for removing initial configuration artifacts and exploring configuration space at elevated temperatures.
Stages
| Stage | Name | Ensemble | Description |
|---|---|---|---|
| 1 | Packing | NPT | Equilibrate density at target temperature |
| 2 | Annealing | NVT | Ramp temperature from max_temp to target_temp |
| 3 | Sampling | NPT | Production sampling at target conditions |
Usage
Option 1: Convenience function
from automd.presets import run_annealing_protocol
results = run_annealing_protocol(
    project=project,
    base_dir=Path("input"),
    target_temp=300.0,
    max_temp=600.0,
    target_press=1.0,
    packing_steps=100000,
    annealing_steps=200000,
    sampling_steps=100000,
)

Option 2: Protocol class directly
from automd.presets import AnnealingConfig, AnnealingProtocol
config = AnnealingConfig(
    target_temp=300.0,
    max_temp=600.0,
    target_press=1.0,
    packing_steps=100000,
    annealing_steps=200000,
    sampling_steps=100000,
)
protocol = AnnealingProtocol(
    project=project,
    base_dir=Path("input"),
    config=config,
    type_map={1: "C", 2: "H"},
    mass_map={1: 12.01, 2: 1.008},
)
results = protocol.run_protocol(monitor=True)

Output Structure
workspace/
├── input/
│ └── system.data
└── [signac workspace]/
└── [job_id]/
├── stage=packing/
│ ├── in.lammps
│ ├── log.lammps
│ ├── trajectory.dump
│ └── restart.stage_2
├── stage=annealing/
│ ├── in.lammps (reads restart from stage=packing)
│ ├── log.lammps
│ └── restart.stage_3
└── stage=sampling/
├── in.lammps (reads restart from stage=annealing)
├── log.lammps
└── trajectory.dump
Purpose: Enhance sampling through Larsen's 21-step compression/decompression cycle (10 compression stages, a hold, and 10 decompression stages), bracketed by packing and sampling stages for 23 stages in total. Useful for dense systems, overcoming energy barriers, and improving equilibration.
Stages (23 total)
| Stage Group | Stages | Description |
|---|---|---|
| Packing | 1 | Initial equilibration at target pressure |
| Compression | 2-11 | Increment pressure from P_target to P_max |
| Hold | 12 | Maintain at maximum pressure |
| Decompression | 13-22 | Decrement pressure from P_max to P_target |
| Sampling | 23 | Final production at target conditions |
Pressure schedule example (target_press=1.0 atm, press_ratio=100):
Packing: 1.0 atm
Compression: 10.9 → 20.8 → 30.7 → ... → 90.1 → 100.0 atm
Hold: 100.0 atm
Decompression: 90.1 → 80.2 → 70.3 → ... → 10.9 → 1.0 atm
Sampling: 1.0 atm
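The schedule is a plain linear ramp in pressure. A short standalone sketch (the function name is illustrative, not AutoMD API) that generates the 23 stage pressures:

```python
def pressure_schedule(target_press: float = 1.0, press_ratio: float = 100.0,
                      n_ramp: int = 10) -> list[float]:
    """Linear compression/decompression pressure schedule (illustrative)."""
    p_max = target_press * press_ratio
    step = (p_max - target_press) / n_ramp
    compression = [target_press + step * i for i in range(1, n_ramp + 1)]
    decompression = compression[-2::-1] + [target_press]
    # packing + 10 compression + hold + 10 decompression + sampling = 23 stages
    return [target_press] + compression + [p_max] + decompression + [target_press]

schedule = pressure_schedule()
```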
Usage
from automd.presets import run_compression_protocol
results = run_compression_protocol(
    project=project,
    base_dir=Path("input"),
    max_temp=600.0,              # High temperature during compression
    target_temp=300.0,           # Final sampling temperature
    target_press=1.0,            # Target pressure (atm)
    press_ratio=100.0,           # P_max / P_target (here P_max = 100 atm)
    compression_steps=50000,     # Steps per compression stage
    decompression_steps=50000,
)

When to Use
- Dense systems: Glasses, polymers, confined liquids
- Barrier crossing: Need to escape local minima
- Enhanced sampling: Explore configuration space more thoroughly
- Published protocols: Reproduce Larsen's method from literature
Purpose: Flexible single-stage equilibration for extended sampling or custom conditions.
Usage
from automd.presets import run_additional_protocol
# NPT additional equilibration
results = run_additional_protocol(
    project=project,
    base_dir=Path("input"),
    ensemble="npt",
    temperature=350.0,
    pressure=1.0,
    duration_steps=200000,
)

# NVT additional equilibration
results = run_additional_protocol(
    project=project,
    base_dir=Path("input"),
    ensemble="nvt",
    temperature=350.0,
    duration_steps=200000,
)

When to Use
- After initial equilibration needs more time
- Sampling at different temperature/pressure conditions
- Between protocol stages for additional relaxation
To adapt a preset protocol for your specific force field or system, extend the protocol class and override create_lammps_config():
Example: Water-Specific Compression Protocol
from automd.presets.config import CompressionConfig
from automd.presets.protocols.compression import CompressionProtocol
from automd.core.config import LammpsConfig
class WaterCompressionProtocol(CompressionProtocol):
    """SPC/E water compression protocol with custom force field."""

    def create_lammps_config(self, temp_start, temp_end, ensemble,
                             run_steps, press_start=None, press_end=None,
                             timestep=1.0, restart_file=None, data_file=None):
        # Get base config from parent
        config = super().create_lammps_config(
            temp_start, temp_end, ensemble, run_steps,
            press_start, press_end, timestep, restart_file, data_file
        )
        # Override force field for SPC/E water
        config.pair_style = "lj/cut/coul/cut 10.0"
        config.pair_coeffs = [
            "1 1 0.15535 3.166  # O-O LJ",
            "2 2 0.0 0.0        # H-H (no LJ)",
        ]
        config.bond_style = "harmonic"
        config.bond_coeffs = ["1 1000.0 1.0  # O-H bond"]
        config.angle_style = "harmonic"
        config.angle_coeffs = ["1 100.0 109.47  # H-O-H angle"]
        return config

# Use custom protocol
protocol = WaterCompressionProtocol(project, base_dir, config)
results = protocol.run_protocol()

For a complete working example, see examples/water_compression_preset_example.py.
AnnealingConfig Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| target_temp | float | 300.0 | Final target temperature (K) |
| max_temp | float | 600.0 | Maximum annealing temperature (K) |
| target_press | float | 1.0 | Target pressure (atm) |
| packing_steps | int | 100000 | Steps for packing stage |
| annealing_steps | int | 200000 | Steps for annealing stage |
| sampling_steps | int | 100000 | Steps for sampling stage |
| timestep | float | 1.0 | Integration timestep (fs) |
| thermo_freq | int | 1000 | Thermo output frequency |
| dump_freq | int | 1000 | Trajectory output frequency |
CompressionConfig Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| max_temp | float | 600.0 | Compression temperature (K) |
| target_temp | float | 300.0 | Target temperature (K) |
| target_press | float | 1.0 | Target pressure (atm) |
| press_ratio | float | 100.0 | P_max / P_target |
| packing_steps | int | 100000 | Steps for packing stage |
| compression_steps | int | 50000 | Steps per compression stage |
| decompression_steps | int | 50000 | Steps per decompression stage |
| sampling_steps | int | 100000 | Steps for sampling stage |
AdditionalConfig Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| ensemble | str | "npt" | Thermodynamic ensemble (npt/nvt) |
| temperature | float | 300.0 | Equilibration temperature (K) |
| pressure | float | 1.0 | Target pressure (atm, npt only) |
| duration_steps | int | 100000 | Number of equilibration steps |
AutoMD provides automated convergence checking for LAMMPS simulations using statistical analysis of log file data.
Features
- Energy convergence: Standard deviation check in final trajectory window
- Density convergence: NPT ensemble volume equilibration
- Temperature stability: Thermostat performance validation
- Pressure stability: Barostat performance validation
- Block averaging: Statistical error estimation
- Equilibration point detection: Cumulative mean test
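The energy and density checks amount to a windowed standard-deviation test on the tail of a time series. A standalone sketch of the idea, normalizing by the mean (AutoMD's actual thresholds and implementation may be defined differently):

```python
from statistics import mean, pstdev

def tail_converged(series, threshold, window=0.2):
    """True if the relative std. dev. over the final `window` fraction is below threshold."""
    tail = list(series)[int(len(series) * (1 - window)):]
    return pstdev(tail) / abs(mean(tail)) < threshold

flat = [1.0 + 1e-6 * (i % 3) for i in range(1000)]   # equilibrated signal
drifting = [1.0 + i / 999 for i in range(1000)]      # still drifting
```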
Automatic checking with protocols
results = run_annealing_protocol(
    project=project,
    base_dir=Path("input"),
    check_convergence=True,      # Enable automatic checking
)

# Check convergence results
if results["convergence"]["converged"]:
    print("✓ All convergence criteria passed")
else:
    conv = results["convergence"]
    print(f"Energy converged: {conv['energy_converged']}")
    print(f"Density converged: {conv['density_converged']}")
    print(f"Temperature stable: {conv['temp_stable']}")
    print(f"Pressure stable: {conv['pressure_stable']}")

Manual convergence checking
from automd.analysis.convergence import check_convergence
results = check_convergence(
    log_file="path/to/log.lammps",
    energy_threshold=0.0005,     # Max std deviation
    density_threshold=0.001,     # Max std deviation
    temp_tolerance=5.0,          # ±K from target
    pressure_tolerance=0.1,      # ±atm from target
    window=0.2,                  # Check last 20% of trajectory
)

Block averaging for error estimation
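The statistic itself fits in a few lines of standard-library Python; an illustrative sketch of what the checker computes (AutoMD's implementation may differ in detail):

```python
from statistics import mean, stdev

def block_average(series, n_blocks=5):
    """Split a series into equal blocks; return (block_means, standard_error)."""
    size = len(series) // n_blocks
    means = [mean(series[i * size:(i + 1) * size]) for i in range(n_blocks)]
    # Standard error of the mean from block-to-block scatter
    return means, stdev(means) / n_blocks ** 0.5

means, err = block_average(list(range(100)), n_blocks=5)
```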
from automd.analysis.convergence import ConvergenceChecker
checker = ConvergenceChecker("log.lammps")
# Calculate block averages (divide trajectory into 5 blocks)
block_means, block_stds = checker.calculate_block_averages(
    property="TotEng",
    n_blocks=5,
)

# Statistical error = std(block_means) / sqrt(n_blocks)
std_error = block_means.std() / (len(block_means) ** 0.5)
print(f"Energy error: ±{std_error:.6f}")

AutoMD integrates Freud for high-performance structural analysis of trajectories.
Available Analyses
| Analysis | Description | Output |
|---|---|---|
| RDF | Radial distribution function | r, g(r) arrays |
| Structure Factor | Static structure factor S(k) | k, S(k) arrays |
| Local Density | Particle local densities | Per-particle density array |
| Bond Order | Steinhardt Q_l bond order parameters | Per-particle Q_l values |
| Clustering | Cluster detection and properties | Cluster sizes, indices, counts |
Basic structural analysis
import MDAnalysis as mda
from automd.analysis.order import run_structural_analysis
# Load trajectory
u = mda.Universe("system.data", "trajectory.dump")
# Run analyses
results = run_structural_analysis(
    u,
    analyses=["rdf", "structure_factor", "clusters"],
    output_dir="analysis_results",
)

# Access results
r = results["rdf"]["r"]
gr = results["rdf"]["gr"]
peak_distance = r[gr.argmax()]
print(f"RDF peak at: {peak_distance:.3f} Å")

Type-specific RDF
from automd.analysis.order import StructureAnalyzer
analyzer = StructureAnalyzer(u)
# RDF between specific atom types
r, gr = analyzer.compute_rdf_between_types(
    type1="O",
    type2="H",
    r_max=5.0,
    bins=50,
)

AutoMD includes sophisticated error detection and automatic recovery based on error classification.
Error Types
| Error Type | Causes | Auto-Recovery | Action |
|---|---|---|---|
| EXPLOSION | NaN/Inf, lost atoms | ✗ Never | Manual intervention |
| TOPOLOGY | Missing bonds/angles | ✗ Never | Fix data file |
| TIMEOUT | Walltime limit reached | ✓ Always | Restart from checkpoint |
| MEMORY | Out of memory | ✗ Never | Reduce system size |
| CONVERGENCE | Minimization failed | △ Conditional | Adjust parameters |
Automatic Restart on Timeout
When a job times out, AutoMD automatically:
- Detects timeout from log file and SLURM exit code
- Creates new configuration reading from restart file
- Resubmits job with remaining timesteps
- Continues monitoring
from automd.engine.runner import JobRunner
runner = JobRunner(scheduler)
# Auto-recovery is enabled by default for monitoring
final_state = runner.monitor_job(
    job_dir="path/to/job",
    auto_recover=True,     # Enable auto-restart on timeout
    max_restarts=3,        # Maximum restart attempts
)

AutoMD uses Signac for sophisticated multi-job workflow management.
Key Features
- Statepoint-based job organization: Jobs identified by parameter sets
- Automatic dependency tracking: Jobs linked via restart files
- Linked views: Navigate jobs via human-readable directory structure
- Protocol execution: Multi-stage protocols with automatic stage management
Protocol Stage Management
When running multi-stage protocols, AutoMD:
- Creates Signac jobs for each stage with `stage_index` and `stage` parameters
- Links restart files: `restart_N` from stage N becomes input for stage N+1
- Executes stages sequentially with failure detection
- Creates linked view for easy navigation
Querying Protocol Jobs
from automd.workflow import WorkflowProject
project = WorkflowProject("workspace")
# Get all jobs for a protocol
annealing_jobs = project.get_protocol_jobs("annealing")
# Jobs are sorted by stage_index
for job in annealing_jobs:
    stage = job.document.get("stage")
    state = job.document.get("state")
    print(f"Stage: {stage}, State: {state}")

Job Statistics
# Get project-level statistics
stats = project.get_statistics()
print(f"Completed: {stats['completed']}")
print(f"Failed: {stats['failed']}")
print(f"Running: {stats['running']}")

AutoMD uses a robust state machine to track job lifecycle:
INITIALIZED → SUBMITTED → RUNNING → COMPLETED
↘ FAILED
↘ STALLED → SUBMITTED (auto-restart)
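The diagram's transitions can be captured in a few lines. A minimal illustrative sketch (AutoMD's real state machine also handles persistence and validation):

```python
from enum import Enum, auto

class JobState(Enum):
    INITIALIZED = auto()
    SUBMITTED = auto()
    RUNNING = auto()
    COMPLETED = auto()
    FAILED = auto()
    STALLED = auto()

# Allowed transitions from the diagram; STALLED -> SUBMITTED is the auto-restart edge
TRANSITIONS = {
    JobState.INITIALIZED: {JobState.SUBMITTED},
    JobState.SUBMITTED: {JobState.RUNNING},
    JobState.RUNNING: {JobState.COMPLETED, JobState.FAILED, JobState.STALLED},
    JobState.STALLED: {JobState.SUBMITTED},
}

def can_transition(src: JobState, dst: JobState) -> bool:
    return dst in TRANSITIONS.get(src, set())
```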
| State | Description | Auto-Recovery |
|---|---|---|
| INITIALIZED | Job created, inputs generated | N/A |
| SUBMITTED | Queued in scheduler | N/A |
| RUNNING | Actively computing | N/A |
| COMPLETED | Clean exit | N/A |
| FAILED | Crashed (explosion, topology error) | ✗ Never |
| STALLED | Walltime hit | ✓ Yes |
Multi-stage protocols add stage-level tracking:
Protocol Start
↓
Stage 1: INITIALIZED → SUBMITTED → RUNNING → COMPLETED
↓
Stage 2: INITIALIZED → SUBMITTED → RUNNING → COMPLETED
↓
...
↓
Stage N: INITIALIZED → SUBMITTED → RUNNING → COMPLETED
↓
Protocol Complete + Convergence Check
When using preset protocols, additional context is tracked in each job:
# Protocol job document contains:
{
    "protocol": "annealing",            # Protocol name
    "stage": "packing",                 # Stage name
    "stage_index": 1,                   # Stage order
    "state": "COMPLETED",               # Stage state
    "restart_file": "restart.stage_2",  # Output for next stage
}

This enables tracking protocol progress and automatic recovery of individual stages.
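Given stage documents of this shape, recovery amounts to finding the first stage that has not completed. An illustrative sketch over plain dicts (not the Signac API):

```python
def first_incomplete_stage(stage_docs):
    """Return the lowest-stage_index document whose state is not COMPLETED."""
    for doc in sorted(stage_docs, key=lambda d: d["stage_index"]):
        if doc["state"] != "COMPLETED":
            return doc
    return None  # all stages done

docs = [
    {"stage": "packing", "stage_index": 1, "state": "COMPLETED"},
    {"stage": "annealing", "stage_index": 2, "state": "STALLED"},
    {"stage": "sampling", "stage_index": 3, "state": "INITIALIZED"},
]
```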
AutoMD classifies errors into 5 categories based on log file analysis:
Indicators: NaN, Inf, lost atoms, invalid atom IDs
Example log output:
ERROR: Lost atoms: original 1000 current 998
Action: Never auto-restart. Requires manual intervention:
- Check force field parameters
- Reduce timestep
- Verify initial configuration
- Check for high-energy overlaps
Indicators: Missing bonds/angles, invalid topology
Example log output:
ERROR: Bond atoms missing on line 52
Action: Never auto-restart. Fix data file topology.
Indicators: Walltime limit reached, job cancellation
Example SLURM output:
slurmstepd: error: Job STEP_CANCELLED at step=123456
Action: Always auto-restart from checkpoint:
# Automatic restart behavior
final_state = runner.monitor_job(job_dir, auto_recover=True)

Restart configuration:
- Reads `restart.file` from the previous run
- Continues for remaining timesteps
- Preserves thermo and dump output
Indicators: Out of memory errors
Action: Never auto-restart. Requires system changes:
- Reduce number of atoms
- Decrease neighbor list skin
- Run on node with more memory
Indicators: Minimization didn't converge
Example log output:
Minimization did not converge
Action: Conditional auto-restart with adjusted parameters:
config.minimize_maxiter = 1000  # Increase iterations

Register custom callbacks for state changes:
def status_callback(state: JobState, message: str):
    """Called on every state transition."""
    if state == JobState.FAILED:
        print(f"Job failed: {message}")
        # Send notification, log to database, etc.
    elif state == JobState.COMPLETED:
        print("Job completed successfully")

runner.monitor_job(job_dir, callback=status_callback)

Every job stores an immutable manifest.json:
{
    "version": "1.0.0",
    "config": {
        "units": "real",
        "timestep": 1.0,
        ...
    },
    "type_map": {"1": "C", "2": "H", "3": "O"},
    "mass_map": {"1": 12.01, "2": 1.008, "3": 15.999},
    "created_at": "2025-01-03T12:00:00",
    "history": [...]
}

This ensures reproducibility even if the code changes over time.
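One way to detect mutation of such a manifest is to fingerprint its canonical JSON form. A sketch of the idea (the function name is illustrative, not AutoMD API):

```python
import hashlib
import json

def manifest_fingerprint(manifest: dict) -> str:
    """SHA-256 over canonical (sorted-key, compact) JSON."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

m = {"version": "1.0.0", "config": {"units": "real", "timestep": 1.0}}
```

Because keys are sorted before hashing, the fingerprint is independent of dict insertion order, while any change to a value produces a different digest.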
Analysis can be run as separate SLURM jobs with automatic dependencies:
# Analysis automatically waits for simulation completion
project.run_analysis(job, submit_as_job=True)
# Equivalent to:
# sbatch --dependency=afterok:<sim_job_id> submit_analysis.sh

The examples/ directory contains complete, runnable examples demonstrating AutoMD features.
Available Examples
| Example | Description |
|---|---|
| basic_usage.py | Local simulation, SLURM workflow, Signac parameter sweeps, trajectory analysis |
| water_compression_preset_example.py | Full 23-stage compression protocol using preset infrastructure |
| water_compression_example.py | Manual multi-stage protocol (for comparison with preset version) |
| generate_water_data.py | Utility for creating SPC/E water data files |
Example Structure
examples/
├── README.md # Detailed example documentation
├── basic_usage.py # Basic and intermediate examples
├── water_compression_preset_example.py # Advanced preset protocol example
├── water_compression_example.py # Manual protocol example
├── generate_water_data.py # Water data file generator
└── workspace/ # Generated by running examples
Getting Started with Examples
- See `examples/README.md` for detailed documentation of each example
- Run basic examples first: `python examples/basic_usage.py`
- Try advanced protocols: `python examples/water_compression_preset_example.py`
- Adapt examples for your system by modifying force field parameters
Adapting Examples
To adapt examples for your system:
- Change force field: Modify `pair_style` and `pair_coeffs`
- Update type/mass maps: Match your atom types
- Adjust parameters: Temperature, pressure, timestep
- Replace data file: Use your system.data
For customization examples, see Section 6.5: Customizing Protocols.
automd/
├── src/automd/
│ ├── core/ # Configuration and state management
│ ├── templates/ # Jinja2 templates for LAMMPS/SLURM
│ ├── engine/ # Execution engine and watchdog
│ ├── analysis/ # MDAnalysis and Freud wrappers
│ └── workflow/ # Signac orchestration
├── tests/ # Unit tests
└── examples/ # Example scripts
See LammpsConfig for all available parameters:
- `units`: LAMMPS unit system (real, metal, lj, etc.)
- `atom_style`: Atom style (full, atomic, etc.)
- `timestep`: Integration timestep (fs)
- `run_steps`: Number of timesteps
- `ensemble`: Thermodynamic ensemble (nvt, npt, nve)
- `temp_start` / `temp_end`: Temperature range (K)
- `press`: Target pressure (atm)
- `pair_style`: Pair style and parameters
- `pair_coeffs`: Pair coefficient commands
- `dump_freq`: Trajectory output frequency
- `restart_freq`: Restart file frequency
MIT
Contributions welcome! Please read our contributing guidelines.
If you use AutoMD in your research, please cite:
[TODO: Add citation information]
AutoMD provides support for Kremer-Grest (KG) coarse-grained polymer simulations, a standard model for studying polymer dynamics and entanglement.
The Kremer-Grest model is a widely-used coarse-grained polymer model with:
- FENE bonds for chain connectivity: V(r) = -0.5 * K * R0^2 * ln(1 - (r/R0)^2)
- WCA potential for excluded volume: Purely repulsive LJ (cutoff at 2^(1/6)*sigma)
- Standard parameters: K=30, R0=1.5, epsilon=1.0, sigma=1.0
- LJ units: Reduced units (σ=1, ε=1, m=1)
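Both interactions are simple enough to write down and sanity-check in reduced units. The following standalone sketch (not AutoMD code) confirms, for example, that the shifted WCA potential vanishes at its 2^(1/6)σ cutoff:

```python
import math

def wca(r, epsilon=1.0, sigma=1.0):
    """Purely repulsive LJ: shifted and truncated at r_c = 2^(1/6) * sigma."""
    r_c = 2 ** (1 / 6) * sigma
    if r >= r_c:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6) + epsilon  # shift so V(r_c) = 0

def fene(r, K=30.0, R0=1.5):
    """Attractive FENE part: V(r) = -0.5 * K * R0^2 * ln(1 - (r/R0)^2)."""
    return -0.5 * K * R0 ** 2 * math.log(1 - (r / R0) ** 2)
```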
Quick Example
from automd import LammpsConfig
from automd.analysis.polymer import analyze_polymer_trajectory
# Configure Kremer-Grest force field
config = LammpsConfig(
    # LJ units for Kremer-Grest
    units="lj",
    atom_style="molecular",
    # WCA potential (purely repulsive LJ)
    pair_style="lj/cut 1.122",       # 2^(1/6) ≈ 1.122
    pair_coeffs=["* * 1.0 1.0"],     # epsilon=1.0, sigma=1.0
    # FENE bonds
    bond_style="fene",
    bond_coeffs=["1 30.0 1.5"],      # K=30, R0=1.5
    # Special bonds (exclude 1-2 non-bonded interactions)
    special_bonds="lj 0.0 0.0 0.0",
    # Simulation parameters
    timestep=0.005,
    temp_start=1.0,                  # LJ units
    temp_end=1.0,
    ensemble="nvt",
    run_steps=100000,
    data_file="polymer.data",
)

# ... run simulation ...

# Analyze polymer properties
analysis = analyze_polymer_trajectory(
    job_dir=Path("workspace/polymer_demo/job1"),
    trajectory_file="trajectory.dump",
    chain_length=10,
)
print(f"Rg: {analysis['Rg']['mean']:.3f} ± {analysis['Rg']['std']:.3f} σ")
print(f"Ree: {analysis['Ree']['mean']:.3f} ± {analysis['Ree']['std']:.3f} σ")

Kremer-Grest Parameters
| Parameter | Description | Standard Value |
|---|---|---|
| `pair_style` | WCA (purely repulsive LJ) | `"lj/cut 1.122"` |
| `pair_coeffs` | epsilon, sigma | `"* * 1.0 1.0"` |
| `bond_style` | FENE bonds | `"fene"` |
| `bond_coeffs` | K, R0 | `"1 30.0 1.5"` |
| `special_bonds` | Exclude 1-2 interactions | `"lj 0.0 0.0 0.0"` |
| `units` | Reduced units | `"lj"` |
Why special_bonds?
The special_bonds lj 0.0 0.0 0.0 command zeroes the non-bonded LJ (WCA) interaction for 1-2, 1-3, and 1-4 bonded neighbors, so beads directly connected by FENE bonds do not also see the pair potential. This is critical for:
- Preventing double-counting of bonded interactions
- Ensuring proper chain statistics
- Matching standard Kremer-Grest simulations
AutoMD provides basic polymer conformation analysis:
Available Analyses
| Metric | Description | Reference |
|---|---|---|
| Rg | Radius of gyration distribution | Rg^2 = (1/N) * Σ(r_i - r_cm)^2 |
| Ree | End-to-end distance distribution | Ree = \|r_N - r_1\| |
| Ree/Rg ratio | Chain conformation metric | Ideal: √6 ≈ 2.45 |
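The √6 value can be verified numerically with a quick ensemble of ideal random-walk chains (standalone sketch, independent of AutoMD):

```python
import random

def chain_stats(n_bonds=200, n_chains=300, seed=7):
    """Monte Carlo estimate of sqrt(<Ree^2>/<Rg^2>) for ideal (random-walk) chains."""
    rng = random.Random(seed)
    ree2_sum = rg2_sum = 0.0
    for _ in range(n_chains):
        # Build one random-walk chain with Gaussian bond vectors
        pos = [(0.0, 0.0, 0.0)]
        for _ in range(n_bonds):
            x, y, z = pos[-1]
            pos.append((x + rng.gauss(0, 1), y + rng.gauss(0, 1), z + rng.gauss(0, 1)))
        cx = sum(p[0] for p in pos) / len(pos)
        cy = sum(p[1] for p in pos) / len(pos)
        cz = sum(p[2] for p in pos) / len(pos)
        rg2_sum += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
                       for p in pos) / len(pos)
        ree2_sum += sum((a - b) ** 2 for a, b in zip(pos[-1], pos[0]))
    return (ree2_sum / rg2_sum) ** 0.5

ratio = chain_stats()  # expected to be close to sqrt(6) ≈ 2.45
```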
Using PolymerAnalyzer
from automd.analysis.polymer import PolymerAnalyzer
import MDAnalysis as mda
# Load trajectory
u = mda.Universe("polymer.data", "trajectory.dump", format="LAMMPSDUMP")
# Analyze
analyzer = PolymerAnalyzer(u, chain_length=10)
results = analyzer.analyze_conformations()
# Access results
print(f"Rg: {results['Rg']['mean']:.3f} ± {results['Rg']['std']:.3f}")
print(f"Ree: {results['Ree']['mean']:.3f} ± {results['Ree']['std']:.3f}")
print(f"Ree/Rg: {results['Ree_Rg_ratio']['mean']:.3f}")
print(f"Ideal chain (Ree/Rg): {results['Ree_Rg_ratio']['theoretical_ideal']:.3f}")

Validation
Check your results against known values:
- Ideal chains: Ree/Rg ≈ √6 ≈ 2.45
- Self-avoiding walks: Ree/Rg ≈ 2.7-3.0
- Literature: Kremer & Grest, J. Chem. Phys. 92, 5057 (1990)
While AutoMD doesn't include polymer-specific preset protocols, you can easily create custom protocols using the existing infrastructure:
from automd.presets.base import EquilibrationProtocol
from automd.presets.config import ProtocolConfig
from automd.core.config import LammpsConfig
class PolymerConfig(ProtocolConfig):
    """Configuration for polymer simulations."""
    units: str = "lj"
    timestep: float = 0.005
    special_bonds: str = "lj 0.0 0.0 0.0"

class PolymerProtocol(EquilibrationProtocol):
    """Custom polymer equilibration protocol."""

    def __init__(self, project, base_dir, config):
        super().__init__(project, base_dir, config)
        # Set KG force field parameters
        self.kg_params = {
            "units": "lj",
            "pair_style": "lj/cut 1.122",
            "pair_coeffs": ["* * 1.0 1.0"],
            "bond_style": "fene",
            "bond_coeffs": ["1 30.0 1.5"],
            "special_bonds": "lj 0.0 0.0 0.0",
        }

    def get_protocol_name(self) -> str:
        return "polymer_equilibration"

    def build_stage_configs(self):
        """Define multi-stage polymer equilibration."""
        # Apply KG parameters to each stage
        ...

For a complete working example, see examples/polymer_example.py.
- Kremer, K. & Grest, G. S. "Dynamics of entangled linear polymer melts: A molecular-dynamics simulation." J. Chem. Phys. 92, 5057 (1990).