A quick guide to the PDC environment

Kerberos

To get access to PDC’s computers you must have a valid Kerberos ticket. If your old ticket has expired or you are connecting for the first time:

# get a 7-day forwardable ticket
$ kinit --forwardable -l 7d <user>@NADA.KTH.SE

You can also edit your krb5.conf file (see the PDC web pages) and then:

$ kinit -f <user>@NADA.KTH.SE # get a ticket with a simpler command

$ kpasswd <user>@NADA.KTH.SE # change password (run from local machine)
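As an illustration, a minimal sketch of what the relevant part of krb5.conf might look like (the realm name comes from the commands above; see the PDC web pages for the authoritative settings):

[libdefaults]
    default_realm = NADA.KTH.SE
    forwardable = true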

Other useful commands:

$ klist     # list available tickets
$ kdestroy  # destroy tickets

SSH

Use Secure Shell (SSH) to connect to PDC’s computers

# connecting without an edited ~/.ssh/config
$ ssh -o GSSAPIKeyExchange=yes -o GSSAPIAuthentication=yes \
      -o GSSAPIDelegateCredentials=yes <user>@machine.pdc.kth.se

# connecting with an edited ~/.ssh/config (see the PDC web pages) and X11 forwarding enabled
$ ssh -Y <user>@machine.pdc.kth.se

# connecting with an edited ~/.ssh/config (see the PDC web pages) without X11 forwarding
$ ssh <user>@machine.pdc.kth.se
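A minimal sketch of what the edited ~/.ssh/config might contain (the option names are the same as in the first ssh command above; see the PDC web pages for the recommended file):

# ~/.ssh/config
Host *.pdc.kth.se
    GSSAPIAuthentication yes
    GSSAPIKeyExchange yes
    GSSAPIDelegateCredentials yes

With this in place, the plain ssh commands above pick up your Kerberos ticket automatically.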

Copying data

Use Secure Copy Protocol (SCP) to transfer files between your local computer and PDC’s computers

# Copy file from local to remote
$ scp localDir/fileToCopy <user>@machine.pdc.kth.se:/cfs/klemming/nobackup/u/user/

# Copy directory with its content from remote to local
$ scp -r <user>@machine.pdc.kth.se:/afs/pdc.kth.se/home/u/user/dirToCopy localDir/
$ scp -r <user>@machine.pdc.kth.se:~/dirToCopy .

KTH Ubuntu machines

On KTH Ubuntu machines, use the following special commands instead of kinit, klist, ssh, etc.

$ pdc-kinit
$ pdc-klist
$ pdc-kdestroy
$ pdc-kpasswd
$ pdc-ssh
$ pdc-scp
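A short usage sketch, assuming the wrappers take the same arguments as the standard commands (check the KTH Ubuntu documentation if in doubt):

$ pdc-kinit -f <user>@NADA.KTH.SE     # get a forwardable ticket
$ pdc-ssh <user>@machine.pdc.kth.se   # connect to a PDC machine
$ pdc-klist                           # list available tickets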

Unix shell

Use the Tab key for faster input: start typing the first letters of a command/file/directory name and press Tab to complete the entry.

$ pwd             # print working directory
$ cd tmp/talks/   # change directory
$ cd ..           # go one level up
$ ls -l           # list content of directory

# back to home directory
$ cd $HOME
$ cd ~
$ cd
$ cd /afs/pdc.kth.se/home/u/user

$ mkdir test_dir      # create directory named "test_dir"

# call different text editors
$ nano draft.txt
$ emacs draft.txt
$ vim draft.txt       # this is "improved" vi

$ cp draft.txt backup.txt         # copy file
$ cp -r dir_to_copy copied_dir    # recursively copy directory

$ mv draft.txt draft_2.txt        # move/rename file
$ mv results backup               # move/rename directory
$ mv results ..                   # move directory one level up

# remove file - use with caution!
# From the Unix point of view, deleting is forever!
$ rm draft.txt

# remove directory and all its contents
$ rm -r results

$ tar -xvf file.tar                            # extract tar file
$ tar -cvf output.tar /dir1 /dir2 file1 file2  # create tar file
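A few more standard tar options that are often useful (not PDC-specific):

$ tar -tvf file.tar                    # list contents without extracting
$ tar -czvf output.tar.gz dir1 dir2    # create a gzip-compressed tar file
$ tar -xzvf output.tar.gz              # extract a gzip-compressed tar file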

File systems at PDC

AFS (Andrew File System): available on the login node only on some clusters. ATTENTION: please avoid running software from AFS!

/afs/pdc.kth.se/home/u/user

Lustre (Linux cluster file system)

/cfs/klemming/nobackup/u/user
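Since software should not be run from AFS, a typical step before running is to copy input files and executables from your AFS home directory to Lustre, for example (my_project is a hypothetical directory name; replace u/user with your own initial and username):

$ cp -r /afs/pdc.kth.se/home/u/user/my_project /cfs/klemming/nobackup/u/user/
$ cd /cfs/klemming/nobackup/u/user/my_project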

Working with modules

$ module avail        # list all available modules

# show information about a module
$ module show fftw
$ module show fftw/3.3.0.4

# load a module
$ module load fftw
$ module load fftw/3.3.0.4

$ module list         # list currently loaded modules

$ module swap PrgEnv-cray PrgEnv-intel    # swap modules
$ module unload fftw/3.3.0.4              # unload module
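Putting these together, a typical sketch of preparing the environment before compiling (module names as in the examples above):

$ module swap PrgEnv-cray PrgEnv-intel   # switch to the Intel compiler set
$ module load fftw/3.3.0.4               # load the FFTW library
$ module list                            # check what is now loaded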

Compiling code on Beskow

Compilers and libraries

Available compiler sets: Cray, GNU and Intel

# Change compiler (e.g. cray to intel)
$ module swap PrgEnv-cray PrgEnv-intel

Module cray-libsci: BLAS, LAPACK, BLACS, and SCALAPACK

Module cray-mpich: MPI

On Cray, compile using the compiler wrappers ftn, cc and CC:

# Compiler wrappers for sequential and MPI codes
$ ftn [flags, e.g. -o executable] source.f90
$ cc  [flags, e.g. -o executable] source.c
$ CC  [flags, e.g. -o executable] source.cpp

Advice for portability: protect MPI code with the preprocessor by wrapping it between #ifdef ENABLE_MPI and #endif in your code. With many compilers the preprocessor runs only for specific file suffixes (e.g. .F90 rather than .f90); check the compiler manual for the flag that forces preprocessing (e.g. for ifort use -fpp):

$ ftn -fpp -DENABLE_MPI source.f90
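For example, a minimal sketch of what the guarded code might look like (hello.f90 is a hypothetical file name); compiled with -DENABLE_MPI the MPI calls are included, without it they are skipped:

$ cat hello.f90
program hello
#ifdef ENABLE_MPI
  use mpi
  integer :: ierr
  call MPI_Init(ierr)
#endif
  print *, 'hello'
#ifdef ENABLE_MPI
  call MPI_Finalize(ierr)
#endif
end program hello

$ ftn -fpp -DENABLE_MPI hello.f90 -o hello.x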

Compiling OpenMP code on Beskow

The Cray compiler wrappers automatically link BLAS, LAPACK, BLACS, and ScaLAPACK (from cray-libsci)

# Intel
$ ftn -openmp source.F90      # Fortran
$ cc  -openmp source.c        # C
$ CC  -openmp source.cpp      # C++

# Cray (OpenMP is enabled by default in the Cray compiler; use -h noomp to turn it off)
$ ftn source.F90               # Fortran
$ cc  source.c                 # C
$ CC  source.cpp               # C++

# GNU
$ ftn -fopenmp source.F90     # Fortran
$ cc  -fopenmp source.c       # C
$ CC  -fopenmp source.cpp     # C++

Running executables on Beskow

Login node

  • After logging in to machine.pdc.kth.se you are on the machine’s login node, in your home directory /afs/pdc.kth.se/home/u/user.

  • A login node has access to the AFS (/afs/pdc.kth.se) and Lustre (/cfs/klemming) file systems and to the compute nodes.

  • A login node is for submitting jobs, editing files, and compiling small programs.

  • Users do not have direct access to the compute nodes.

  • The login node initializes jobs on the compute nodes for you.

  • Never run your program on the login node: it will disturb other users (always use salloc and srun!).

Two options for running executables on Beskow

1. SLURM queue system

SLURM user commands:

# to book a dedicated node (modify the date on the reservation!)
$ salloc -t <1:30:00> -N <nodes> -A summer-2017 --reservation=summer-2017-08-15 [script/command]

$ exit    # log out (release the interactive allocation)

# submit your script
$ sbatch <script>

# show your running jobs
$ squeue -u <username>

# to remove a submitted job
$ scancel <jobid>

Example of batch script for Beskow:

#!/bin/bash -l
#SBATCH -J myjob

# Specify the account and the reservation (check the date!)
#SBATCH -A summer-2017
#SBATCH --reservation=summer-2017-08-15

# wall-clock time to be given to this job
#SBATCH -t 1:30:00

# Number of nodes
#SBATCH --nodes=1

# Number of tasks per node
#SBATCH --ntasks-per-node=32

# Run program
srun -n 32 ./hello_mpi
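Assuming the script above is saved as job.sh, it can be submitted and monitored with the SLURM commands listed earlier; by default the output ends up in a file named slurm-<jobid>.out:

$ sbatch job.sh            # submit the script
$ squeue -u <username>     # check its status
$ cat slurm-<jobid>.out    # inspect the output once the job has finished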

2. Run interactively

Running OpenMP code on Beskow:

# You have to book a node before using srun -n ..., otherwise it will fail to execute!
$ salloc -t <1:30:00> -N <nodes> -A summer-2017 --reservation=summer-2017-08-15 [script/command]
# set environment variable - number of threads
$ export OMP_NUM_THREADS=6

# run OpenMP program
$ srun -n 1 ./example.x

Running MPI code on Beskow:

# You have to book a node before using srun -n ..., otherwise it will fail to execute!
$ salloc -t <1:30:00> -N <nodes> -A summer-2017 --reservation=summer-2017-08-15 [script/command]

# Run program on n cores
$ srun -n <cores> ./example.x

Compiling and running on Tegner

GNU Compilers

# optional - add module
$ module add gcc/4.9.2

# Compile serial jobs
$ gfortran -o hello hello.f
$ gcc      -o hello hello.c
$ g++      -o hello hello.cpp

# Compile MPI jobs
$ module add gcc/4.9.2 openmpi/1.8-gcc-4.9
$ mpif90 -fopenmp -o hello_mpi hello_mpi.f
$ mpicc  -fopenmp -o hello_mpi hello_mpi.c
$ mpicxx -fopenmp -o hello_mpi hello_mpi.cpp

Intel Compilers

# Compile serial jobs
$ module add i-compilers
$ ifort -o hello hello.f
$ icc   -o hello hello.c
$ icpc  -o hello hello.cpp

# Compile MPI jobs
$ module add i-compilers intelmpi
$ mpiifort -openmp -o hello_mpi hello_mpi.f
$ mpiicc   -openmp -o hello_mpi hello_mpi.c
$ mpiicpc  -openmp -o hello_mpi hello_mpi.cpp

Running on Tegner

# For interactive usage, first book a node, then launch with mpirun, e.g.
$ salloc -t 1:30:00 -N 1 -A summer-2017
$ mpirun -np 24 ./hello_mpi

# Submit a script to dedicated nodes
$ sbatch ./job.sh

Compiling and running CUDA code on Tegner

# CUDA programs are compiled with the nvcc command.
# Use the -arch flag to generate code for the GPU hardware on the nodes.
$ module add cuda
$ nvcc -arch=sm_37 -O2 hello.cu -o hello.x

# To run the program, we need to select nodes that have GPU cards.
# We can pass the selection either as an argument to salloc (interactive use):
$ salloc -N 1 -t 00:10:00 --ntasks-per-node=1 --gres=gpu:K420:1
$ ./hello.x

# or as an SBATCH option inside a batch script:
#SBATCH --gres=gpu:K80:2
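Putting the pieces together, a minimal sketch of a batch script requesting GPU nodes (account, time and the K80 resource string taken from the examples above; adjust the account/reservation and resources to your own job):

#!/bin/bash -l
#SBATCH -J gpu_job
#SBATCH -A summer-2017
#SBATCH -t 00:10:00
#SBATCH --nodes=1
#SBATCH --gres=gpu:K80:2

# make the CUDA runtime available
module add cuda

# run the CUDA program compiled as above
./hello.x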

Example of batch script for Tegner

#!/bin/bash -l

#SBATCH -J myjob

# Specify the account and the reservation (check the date!)
#SBATCH -A summer-2017
#SBATCH --reservation=summer-2017-08-15

# wall-clock time to be given to this job
#SBATCH -t 1:30:00

# Number of nodes
#SBATCH --nodes=1

# Number of tasks per node
# set to 24 to disable hyperthreading
#SBATCH --ntasks-per-node=24

# load intel compiler and mpi
module load i-compilers intelmpi

# Run program
mpirun -n 24 ./hello_mpi