How to use GROMACS

Software    Version                  Cluster
GROMACS     2023-cpeGNU-22.06-gpu    Dardel-GPU

GROMACS is highly tuned for efficient use of HPC resources. Special assembly kernels make its core compute engine one of the fastest MD simulation programs.

To use this module, you need to load

ml PDC/22.06
ml gromacs/2023-cpeGNU-22.06-gpu
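
If you are unsure which GROMACS modules are installed, Lmod can list them (shown as a sketch; the exact module names on the system may differ):

ml avail gromacs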

Preprocessing the input files (molecular topology, initial coordinates and run parameters) to create a portable run input (.tpr) file can be done in a batch job with

srun -n 1 gmx grompp -c conf.gro -p topol.top -f grompp.mdp

GROMACS also contains a large number of other pre- and post-processing tools. A list of the available commands can be obtained with

srun -n 1 gmx help commands
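
As an illustration of a post-processing tool (the file names here are hypothetical), gmx energy can extract energy terms from the energy (.edr) file written by mdrun into an .xvg file; it prompts interactively for which terms to write:

srun -n 1 gmx energy -f gmx_md.edr -o energy.xvg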

This module provides one main version of the GROMACS suite:

  • gmx : The MD engine binary without MPI, but with OpenMP threading and GPU offloading via OpenSYCL (see the example below).

All tools from the GROMACS suite can be launched using this gmx binary. Please note that they should be launched on the compute node(s).
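
As a minimal sketch (the .tpr file name and thread count are placeholders, and the exact set of supported offload targets depends on the build), OpenMP threading and GPU offloading can be controlled explicitly on the mdrun command line:

export OMP_NUM_THREADS=16
srun -n 1 gmx mdrun -s topol.tpr -ntomp 16 -nb gpu -pme gpu

Here -nb gpu and -pme gpu request offloading of the short-range non-bonded and PME work to the GPU; gmx mdrun -h lists all available options.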

Remember to always put srun in front of the actual GROMACS command in your scripts! Here is an example script that requests one GPU node:

#!/bin/bash

#SBATCH -J my_gmx_job
#SBATCH -A snicYYYY-X-XX
#SBATCH -p gpu
#SBATCH -t 01:00:00

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

ml PDC/22.06
ml gromacs/2023-cpeGNU-22.06-gpu

# Set the number of OpenMP threads per task
export OMP_NUM_THREADS=1

# Preprocess the input files into a portable run input (.tpr) file
srun -n 1 gmx grompp -c conf.gro -p topol.top -f grompp.mdp
# Run the simulation
srun gmx mdrun -s topol.tpr -deffnm gmx_md
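
Assuming the script is saved as run_gmx.sh (the file name is arbitrary), it is submitted with sbatch and the job can be monitored with squeue:

sbatch run_gmx.sh
squeue -u $USER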

Information on how to run GROMACS on AMD GPU nodes of an HPE Cray EX cluster can be found in How to run GROMACS efficiently on the LUMI supercomputer.

Disclaimer

PDC takes no responsibility for the correctness of results produced with the binaries. Always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.

How to build GROMACS