Job scripts (Beskow)

Note

The maximum duration of a job on Beskow is 24 hours.

Job examples (Beskow)

Below is a job script example for an MPI program. Examples for other programs can be found on the software pages.

Example 1:

#!/bin/bash -l
# The -l above is required to get the full environment with modules

# Set the allocation to be charged for this job
# not required if you have set a default allocation
#SBATCH -A 201X-X-XX

# The name of the script is myjob
#SBATCH -J myjob

# 10 hours wall-clock time will be given to this job
#SBATCH -t 10:00:00

# Number of nodes
#SBATCH --nodes=4
# Number of MPI processes per node
#SBATCH --ntasks-per-node=32

# Run the executable named myexe
# and write the output into my_output_file
srun -n 128 ./myexe > my_output_file
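The -n value given to srun should equal nodes × ntasks-per-node. A quick sanity check of the numbers used above (run separately, not part of the job script):

```shell
# Values taken from the #SBATCH directives in Example 1
NODES=4
NTASKS_PER_NODE=32

# Total number of MPI tasks that srun should launch
TOTAL_TASKS=$((NODES * NTASKS_PER_NODE))
echo "$TOTAL_TASKS"   # prints 128
```

If the numbers disagree, srun may fail to launch or leave cores idle, so it is worth checking before submitting.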

Below is another example, for a hybrid MPI+OpenMP program. It places 4 MPI processes on each compute node, with 8 OpenMP threads per process.

Example 2:

#!/bin/bash -l
# The -l above is required to get the full environment with modules

# Set the allocation to be charged for this job
# not required if you have set a default allocation
#SBATCH -A 201X-X-XX

# The name of the script is myjob
#SBATCH -J myjob

# 10 hours wall-clock time will be given to this job
#SBATCH -t 10:00:00

# Number of nodes
#SBATCH --nodes=256
# Total number of MPI tasks
#SBATCH -n 1024

# Number of MPI tasks per node
#SBATCH --ntasks-per-node=4

# Number of cores per MPI task, hosting the OpenMP threads
#SBATCH -c 8

export OMP_NUM_THREADS=8

# Run the executable named myexe
# and write the output into my_output_file
srun ./myexe > my_output_file
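The hybrid settings above must be mutually consistent: the total task count divided by the node count gives the tasks per node, and tasks per node times threads per task should not exceed the cores on a node. A quick check of the numbers in Example 2 (run separately, not part of the job script):

```shell
# Values taken from the #SBATCH directives in Example 2
NODES=256
TOTAL_TASKS=1024
THREADS_PER_TASK=8

# Derived layout per compute node
TASKS_PER_NODE=$((TOTAL_TASKS / NODES))                 # 4 MPI tasks per node
CORES_PER_NODE=$((TASKS_PER_NODE * THREADS_PER_TASK))   # 32 cores used per node
echo "$TASKS_PER_NODE $CORES_PER_NODE"   # prints: 4 32
```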

Suppose you have 100 jobs residing in folders data0, data1, …, data99. Below is an example of a job array, in which 100 separate jobs are executed in one shot in the corresponding directories. Job arrays are useful to manage large numbers of jobs which run on the same number of nodes and take roughly the same time to complete. For more information see Job arrays.

Example 3:

#!/bin/bash -l
# The -l above is required to get the full environment with modules

# Set the allocation to be charged for this job
# not required if you have set a default allocation
#SBATCH -A 201X-X-XX

# The name of the script is myjob
#SBATCH -J myjobarray

# 10 hours wall-clock time will be given to this job
#SBATCH -t 10:00:00

# Number of nodes used for each individual job
#SBATCH --nodes=1

# Indices of individual jobs in the job array
#SBATCH -a 0-99

# Fetch one directory from the array based on the task ID
# Note: index starts from 0
CURRENT_DIR=data${SLURM_ARRAY_TASK_ID}
echo "Running simulation in $CURRENT_DIR"

# Go to job folder
cd $CURRENT_DIR
echo "Simulation in $CURRENT_DIR" > result

# Run individual job
srun -n 32 ./myexe > my_output_file
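The mapping from array index to directory can be tried out locally before submitting. The sketch below simulates, outside of SLURM, how SLURM_ARRAY_TASK_ID selects a directory name for a few of the indices in the 0-99 range:

```shell
# Simulate the index-to-directory mapping used in the job array
# script above; in a real job, SLURM sets SLURM_ARRAY_TASK_ID itself.
for SLURM_ARRAY_TASK_ID in 0 1 99; do
  CURRENT_DIR=data${SLURM_ARRAY_TASK_ID}
  echo "$CURRENT_DIR"   # prints data0, data1, data99
done
```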

Node types

On Beskow there are two types of compute node, called Haswell and Broadwell after their CPU type; they also differ in other hardware. See https://www.pdc.kth.se/hpc-services/computing-systems for details.

When you submit a job on Beskow, SLURM does not by default distinguish between the node types: your job may run on either type, but it will never use both types within the same job.

You can, however, request a specific node type in cases where it matters, for example for scaling analyses or hardware benchmarking.

If you would like to run your job exclusively on Haswell nodes, add the following to your job script:

#SBATCH -C Haswell

If you would instead like to run your job exclusively on Broadwell nodes, please contact PDC.

Note

It is advisable to set these options in #SBATCH directives rather than as command-line parameters to srun, as this makes your scripts clearer.