How to use Amber

Software    Version    Cluster
Amber       22         Dardel-GPU

Amber is the collective name for a suite of programs that allow users to carry out molecular dynamics simulations, particularly on biomolecules. None of the individual programs carries this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.

The term Amber is also used to refer to the empirical force fields that are implemented in Amber. It should be recognized, however, that the code and the force fields are separate: several other computer packages have implemented the Amber force fields, and other force fields can be used with the Amber programs.

In order to use the GPU-enabled Amber 22 installation, you need to load its module:

$ ml amber/22-cpeGNU-22.06-ambertools-22-gpu

This adds the directory containing all Amber executables to your PATH variable and sets the required library paths.
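
Once the module is loaded, a quick way to check that the executables are on your PATH (the binary names here are taken from the list below):

$ which sander
$ which pmemd.hip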

Many of the Amber subprograms come in serial, MPI-parallel, OpenMP and other variants, including the following (a short interactive example follows the list):

  • sander, sander.MPI, sander.OMP, sander.LES, sander.LES.MPI

  • pmemd, pmemd.hip.MPI, pmemd.hip

  • cpptraj, cpptraj.MPI, cpptraj.OMP
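
As a small illustration, the OpenMP build of cpptraj can be run interactively once the module is loaded. The topology and trajectory names below are placeholders (they match the files used in the batch script further down), and the thread count is set through OMP_NUM_THREADS:

$ export OMP_NUM_THREADS=8
$ cpptraj.OMP -p prm.prmtop -y traj.mdcrd -x traj.nc

Here cpptraj simply converts the trajectory to NetCDF format, inferred from the .nc extension of the output file.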

Here is an example batch script for a parallel execution of pmemd.hip.MPI:

#!/bin/bash

# Name of your allocation
#SBATCH -A XXXX-XX-XX
# Name of the job
#SBATCH -J myjob
# Partition
#SBATCH -p gpu
# Maximum wallclock time of job
#SBATCH -t 02:00:00

# Total number of nodes
#SBATCH --nodes=2
# Number of MPI processes per node
#SBATCH --ntasks-per-node=16

ml rocm/5.0.2
ml PDC/22.06
ml amber/22-cpeGNU-22.06-ambertools-22-gpu

# We run on two nodes with 16 MPI tasks per node
srun pmemd.hip.MPI -O -i input.in -o output.out -p prm.prmtop -c coords.rst -r restart.rst -x traj.mdcrd
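
The pmemd.hip.MPI command above reads its MD settings from input.in. The sketch below is only an illustration of what such a file might contain (restart of an equilibrated system, 2 fs time step with SHAKE, Langevin thermostat at 300 K, constant pressure); every value must be adapted to your own system:

Production MD, sketch only - adjust all settings to your system
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=500000, dt=0.002,
  ntt=3, gamma_ln=2.0, temp0=300.0, ig=-1,
  ntp=1, pres0=1.0, taup=2.0,
  ntc=2, ntf=2, cut=8.0,
  ntpr=5000, ntwx=5000, ntwr=50000,
 /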

For running subprograms that rely on the Python interface to Amber, for example MMPBSA.py, one needs to load the appropriate Anaconda Python distribution and activate the mpi4py conda environment. Here is an example batch script for a parallel execution of MMPBSA.py.MPI:

#!/bin/bash

# Name of your allocation
#SBATCH -A XXXX-XX-XX
# Name of the job
#SBATCH -J myjob
# Maximum wallclock time of job
#SBATCH -t 02:00:00
# Partition
#SBATCH -p gpu
# Total number of nodes
#SBATCH --nodes=2
# Number of MPI processes per node
#SBATCH --ntasks-per-node=32

ml rocm/5.0.2
ml PDC/22.06
ml amber/22-cpeGNU-22.06-ambertools-22-gpu

# MM/GBSA calculation on a previously generated trajectory
srun MMPBSA.py.MPI -O -i MMPBSA.in -o MMPBSA.dat -sp solvated.prmtop -cp complex.prmtop -rp receptor.prmtop -lp ligand.prmtop -y trajectory.nc >& MMPBSA.log
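
The MMPBSA.py.MPI command above is controlled by the file MMPBSA.in. A minimal MM/GBSA control file might look like the sketch below; the frame range, interval and GB model (igb) are placeholders that should be chosen for your own study:

Sample MM/GBSA input, sketch only
&general
  startframe=1, endframe=100, interval=1,
  verbose=1,
/
&gb
  igb=5, saltcon=0.150,
/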

Disclaimer

PDC takes no responsibility for the correctness of results produced with the binaries. Always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.

How to build Amber