How to use GROMACS


GROMACS is highly tuned for efficient use of HPC resources. Special assembly kernels make its core compute engine one of the fastest MD simulation programs.

To use this module, you need to run

ml PDC/22.06
ml gromacs/2023.2-cpeGNU-22.06-gpu-without-heffte

Preprocessing the input files (molecular topology, initial coordinates, and mdrun parameters) into a portable run input (.tpr) file can be done in a batch job with

srun -n 1 gmx grompp -c conf.gro -p -f grompp.mdp
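When options are left implicit as above, grompp falls back on its default file names. An equivalent, fully explicit invocation (the file names below are GROMACS' documented defaults, not files specific to this guide) would be:

```shell
# Same preprocessing step with the default file names spelled out:
# -p topol.top is the default topology, -o topol.tpr the default output.
srun -n 1 gmx grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
```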

GROMACS also contains a large number of other pre- and post-processing tools. A list of the available commands can be displayed with

srun -n 1 gmx help commands

This module provides two versions of the GROMACS suite:

  • gmx : The MD engine binary without MPI, but with OpenMP threading and offloading to GPU via OpenSYCL.

  • gmx_mpi : The MD engine binary with MPI, with OpenMP threading, and offloading to GPU via OpenSYCL.

All tools from the GROMACS suite can be launched using any of the above versions. Please note that they should be launched on the compute node(s).
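For quick interactive testing, the tools can also be run on a compute node from an interactive allocation. A minimal sketch, assuming the gpu partition and a 30-minute limit shown here (adapt both to your allocation):

```shell
# Request an interactive allocation on one GPU node (partition and
# time limit are example values), then launch a tool via srun:
salloc -N 1 -t 00:30:00 -p gpu
srun -n 1 gmx help commands
```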

Remember to always put srun in front of the actual GROMACS command in your scripts! Here is an example script that requests one GPU node:


#!/bin/bash
#SBATCH -J my_gmx_job
#SBATCH -p gpu
#SBATCH -t 01:00:00

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

ml PDC/22.06
ml gromacs/2023.2-cpeGNU-22.06-gpu-without-heffte


srun -n 1 gmx_mpi grompp -c conf.gro -p -f grompp.mdp
srun gmx_mpi mdrun -s topol.tpr -deffnm gmx_md
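If you later scale the job to several MPI ranks, it is good practice to set mdrun's OpenMP thread count explicitly so it matches the cores reserved per rank. A sketch under assumed numbers (8 ranks with 8 threads each is illustrative, not a tuned recommendation for any particular node type):

```shell
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=8

# 8 MPI ranks, each running 8 OpenMP threads (illustrative values);
# -ntomp tells mdrun explicitly how many threads to start per rank.
export OMP_NUM_THREADS=8
srun -n 8 gmx_mpi mdrun -s topol.tpr -ntomp ${OMP_NUM_THREADS} -deffnm gmx_md
```

The optimal rank/thread split depends on the system size and the hardware; benchmark a few combinations before production runs.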

Information on how to run GROMACS on AMD GPU nodes of an HPE Cray EX cluster can be found in How to run GROMACS efficiently on the LUMI supercomputer.


PDC takes no responsibility for the correctness of results produced with the binaries. Always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.

How to build GROMACS