PDC Summer School

This page contains useful information for the PDC summer school.

Account and login

Please see instructions for acquiring a PDC account and logging in to PDC clusters in the General instructions for PDC courses.

Virtual machine

All participants will receive a USB memory stick with an Ubuntu virtual machine (VM) image which can be used to log in to PDC. To use the VM, you will need to install VirtualBox. The image can also be downloaded here.

Note that the VM can also be used for the Software Engineering and Singularity lessons, so we recommend that all participants install VirtualBox.

Compiling code

Beskow and Tegner provide different compiler environments, which you will use to compile code for the lab exercises. Detailed information can be found at Software Development; a summary follows below.

Beskow uses compiler wrappers so that source code is compiled with whichever compiler environment is currently loaded. By default the PrgEnv-cray module is loaded when you log in, but you can switch to the Intel compiler suite with

$ module swap PrgEnv-cray PrgEnv-intel

and to GNU compilers with

$ module swap PrgEnv-cray PrgEnv-gnu

Regardless of which compiler suite is loaded, you use the same wrapper commands ftn (Fortran), cc (C) and CC (C++) to compile.
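
To check which programming environment is currently active, you can list the loaded modules or inspect the PE_ENV variable that the PrgEnv modules set:

$ module list 2>&1 | grep PrgEnv
$ echo $PE_ENV    # prints CRAY, INTEL or GNU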

Compiling serial and MPI code (the wrappers link MPI automatically, so no extra MPI flags are needed):

# Fortran
ftn [flags] source.f90
# C
cc [flags] source.c
# C++
CC [flags] source.cpp

Compiling OpenMP code:

# Intel
ftn -qopenmp source.f90
cc -qopenmp source.c
CC -qopenmp source.cpp
# Cray (OpenMP is enabled by default; -h omp makes it explicit)
ftn -h omp source.f90
cc -h omp source.c
CC -h omp source.cpp
# GNU
ftn -fopenmp source.f90
cc -fopenmp source.c
CC -fopenmp source.cpp
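
The number of threads an OpenMP binary uses is controlled at run time with the standard OMP_NUM_THREADS environment variable; set it before launching the program (see Running jobs below):

$ export OMP_NUM_THREADS=8    # example: run with 8 threads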

Tegner does not use compiler wrappers, so you must load the appropriate modules and link to any libraries yourself.
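
The standard module commands show what is available and what a given module provides, for example:

$ module avail pgi          # list matching modules
$ module show i-compilers   # see what paths and variables a module sets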

Below are examples of how to compile serial, MPI, OpenMP and CUDA code on Tegner.

Serial code:

# GNU
gfortran -o hello hello.f
gcc -o hello hello.c
g++ -o hello hello.cpp
# Intel
module add i-compilers
ifort -FR -o hello hello.f
icc -o hello hello.c
icpc -o hello hello.cpp
# Portland
module add pgi
pgf90 -fast -o hello hello.f
pgcc -fast -o hello hello.c
pgc++ -fast -o hello hello.cpp

OpenMP/MPI code:

# GNU+OpenMPI
module add gcc/5.1 openmpi/1.8-gcc-5.1
mpif90 -ffree-form -fopenmp -o hello_mpi hello_mpi.f
mpicc -fopenmp -o hello_mpi hello_mpi.c
mpic++ -fopenmp -o hello_mpi hello_mpi.cpp
# Intel+IntelMPI
module add i-compilers intelmpi
mpiifort -qopenmp -o hello_mpi hello_mpi.f90
mpiicc -qopenmp -o hello_mpi hello_mpi.c
mpiicpc -qopenmp -o hello_mpi hello_mpi.cpp
# Portland (OpenMP only; -mp enables OpenMP)
module add pgi
pgf90 -mp -fast -o hello hello.f
pgcc -mp -fast -o hello hello.c
pgc++ -mp -fast -o hello hello.cpp

CUDA code:

# CUDA
module add cuda/8.0
nvcc -arch=sm_37 -O2 hello.cu -o hello.x
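
The -arch=sm_37 flag targets compute capability 3.7 (Tesla K80-class GPUs). After loading the module, you can verify that the toolkit is on your path before compiling:

$ nvcc --version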

Running jobs

After logging in, you are on the login node of the cluster (Beskow or Tegner). The login node is a shared resource and should not be used for running parallel jobs; instead, use the SLURM scheduling system either to run interactive jobs or to submit batch jobs to the queue (see How to Run Jobs). You can, however, use the login node to edit files and compile code.

To request compute resources interactively or via a batch job, you need to specify an allocation ID, a reservation ID, how much time your job needs and how many nodes it should use. To request an interactive node for 1 hour, run the command

$ salloc -A edu19.summer --reservation <reservation-name> -N 1 -t 1:0:0

When the interactive compute node is allocated to you, a new shell session is started (in the same terminal). How to run a parallel job on the interactive node differs between Beskow and Tegner. On Beskow, type:

$ srun -n 32 ./my_executable

On Tegner, type either:

$ mpirun -np 24 ./my_executable

or:

$ srun -n 24 ./my_executable
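
Inside the interactive shell, SLURM environment variables describe the allocation, and exiting the shell releases the nodes:

$ echo $SLURM_JOB_ID $SLURM_NNODES   # job ID and number of allocated nodes
$ exit                               # leave the session and release the allocation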

To run a batch job, you first need to prepare a submit script. In the simplest case, it can look like this (keep only the launch line that matches the cluster you are using):

#!/bin/bash -l
#SBATCH -A edu19.summer
#SBATCH --reservation=<reservation-name>
#SBATCH -J name-of-my-job
#SBATCH -t 1:00:00
#SBATCH -N 2

# on Beskow (32 cores per node, so 64 ranks on 2 nodes):
srun -n 64 ./my_executable > my_output_file 2>&1

# on Tegner (24 cores per node, so 48 ranks on 2 nodes):
mpirun -np 48 ./my_executable > my_output_file 2>&1
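
If you prefer, SLURM can capture the job's output itself; adding an output directive to the script header replaces the shell redirection above (an optional variation):

#SBATCH -o my_output_file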

To submit the job to the scheduler, type

$ sbatch my_submit_script.bash

You can then monitor the job using

$ squeue -u <your-username>

and if you need to cancel the job, find its job-ID using the squeue command and type

$ scancel <job-ID>

File transfer

Files are transferred with scp, run on your local machine. To copy a file from PDC to your local machine:

$ scp <username>@beskow.pdc.kth.se:<filename> .

and to copy a file from your local machine to PDC:

$ scp <filename> <username>@beskow.pdc.kth.se:~/
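
For larger transfers, or to resume an interrupted one, rsync can be used in the same way (a sketch; <path> is a placeholder):

$ rsync -av <username>@beskow.pdc.kth.se:<path> .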

More information is available at Using SCP/RSYNC.

Lab exercises

Quick Reference Guide

Check out our quick reference guide to using PDC resources.

Introduction to PDC

Lecture slides can be found here.

Hands-on exercises can be found on this page.

Software engineering

All course material, including exercises and installation/preparation steps to be done before the summer school starts, can be found on this page.

Singularity

Lecture slides can be found here.

OpenMP

All exercises can be found in this GitHub repository. You can either clone it to your computer or browse the exercises on GitHub.
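
To clone a repository (this also applies to the CUDA and MPI repositories below), use git with the URL from the link above:

$ git clone <repository-url>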

CUDA

All exercises can be found in this GitHub repository. You can either clone it to your computer or browse the exercises on GitHub.

Instructions for the two sets of exercises:

  • Lab 1 in C and guidelines for solving the lab in Fortran.

  • Lab 2 in C (and use the same guidelines as above for solving it in Fortran).

MPI

All exercises can be found in this GitHub repository. You can either clone it to your computer or browse the exercises on GitHub.

Performance Engineering

TBD