How to use GROMACS
| Software | Version | Cluster |
|---|---|---|
| GROMACS | 2023.2-cpeGNU-22.06 | Dardel |
GROMACS is highly tuned for efficient use of HPC resources. Special assembly kernels make its core compute engine one of the fastest MD simulation programs available.
To use this module, load it with

```bash
ml PDC/22.06
ml gromacs/2023.2-cpeGNU-22.06
```
Preprocessing the input files (molecular topology, initial coordinates, and mdrun parameters) to create a portable run input (.tpr) file can be done in a batch job with

```bash
srun -n 1 gmx_mpi grompp -c conf.gro -p topol.top -f grompp.mdp
```
GROMACS also contains a large number of other pre- and post-processing tools. A list of available commands can be seen with

```bash
srun -n 1 gmx_mpi help commands
```
This module provides four main versions of the GROMACS suite:

- `gmx` : the MD engine binary without MPI, but with OpenMP threading. Useful when GROMACS is used for preprocessing or for running analysis tools on a compute node.
- `gmx_mpi` : the MD engine binary with MPI support. This is the version researchers will use most of the time.
- `gmx_d` : same as `gmx` above, but in double precision.
- `gmx_mpi_d` : same as `gmx_mpi` above, but in double precision.
All tools from the GROMACS suite can be launched using any of the above versions. Please note that they should be launched on the compute node(s).
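As a minimal sketch of running a serial analysis tool on a compute node with the non-MPI binary (the input file `ener.edr` and output file `energy.xvg` are illustrative names, not part of this module's documentation):

```bash
# Load the module environment first
ml PDC/22.06
ml gromacs/2023.2-cpeGNU-22.06

# Run the (serial, OpenMP-threaded) gmx binary on one task;
# gmx energy extracts energy terms from an .edr file
srun -n 1 gmx energy -f ener.edr -o energy.xvg
```

The same pattern works for the other analysis commands listed by `gmx_mpi help commands`.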
Remember to always prefix the actual GROMACS command with srun in your scripts! Here is an example script that requests 2 nodes:
```bash
#!/bin/bash
#SBATCH -J my_gmx_job
#SBATCH -A snicYYYY-X-XX
#SBATCH -p main
#SBATCH -t 01:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128

ml PDC/22.06
ml gromacs/2023.2-cpeGNU-22.06

# Pure MPI run: one thread per rank
export OMP_NUM_THREADS=1

# Preprocess on a single rank, then run mdrun on all ranks
srun -n 1 gmx_mpi grompp -c conf.gro -p topol.top -f grompp.mdp
srun gmx_mpi mdrun -s topol.tpr -deffnm gmx_md
```
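For some systems a hybrid MPI/OpenMP run performs better than pure MPI. The following sketch shows the relevant changes to the script above; the rank and thread counts (32 ranks x 4 threads = 128 cores per node) are illustrative, not a tuned recommendation, and should be benchmarked for your own system:

```bash
# Hybrid MPI/OpenMP variant: fewer MPI ranks, several OpenMP threads per rank
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=4

# Each MPI rank spawns 4 OpenMP threads
export OMP_NUM_THREADS=4
srun gmx_mpi mdrun -ntomp 4 -s topol.tpr -deffnm gmx_md
```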
Disclaimer
PDC takes no responsibility for the correctness of results produced with the binaries. Always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.