
General instructions

Instructions and hints common to all Summer School labs and exercises.

Where to run

Most Summer School exercises will be run on PDC's Cray XC40 system Beskow.

The only exception is the GPU exercises, which will be run on Tegner.

How to login

To access PDC's clusters you will use the Ubuntu computers in the computer rooms Orange and Gul.

  • Log in on any of the computers in Orange or Gul using your PDC or CSC username and password.
  • Make sure that you have received Kerberos tickets and AFS tokens on your local computer by typing:
    klist -Tf
  • Log in to a Beskow login node:
    ssh -Y beskow.pdc.kth.se
  • Wait for the prompt to change, which signals that you have a window on Beskow. Check that your login was successful by typing (in that window):
    klist -Tf

More about the environment on Beskow

The Cray automatically loads several modules at login.
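These preloaded modules can be inspected and changed with the module command. A hedged sketch (the exact module names depend on the installed programming environments; PrgEnv-gnu is assumed to be available):

```shell
# List the modules that are currently loaded at login
# (on the Cray this includes a default programming environment)
module list

# See which other modules are available
module avail

# Swap the default Cray compiler environment for the GNU one
# (assumption: PrgEnv-gnu is installed on Beskow)
module swap PrgEnv-cray PrgEnv-gnu
```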

Running OpenMP programs on Beskow

To start an OpenMP program on the Beskow compute nodes you should first set the OMP_NUM_THREADS environment variable, for example

export OMP_NUM_THREADS=32

You then need to use the aprun command to launch the executable; if you do not use aprun, the program will run on the login node, which could crash it if too many people do this.

To run interactively, obtain an allocation using

salloc -N 1 -t 4:00:00 -A summer-2016

which will book a single node for interactive use. The -t option specifies how long the node is booked (4 hours here), and -A specifies the time allocation to charge. See the salloc man page for more information.

To launch the program on the compute nodes then use

aprun -n 1 -d 32 -cc none ./example.x

The -d and -cc options are important: without them all the OpenMP threads will run on a single core, instead of across multiple cores, and you will not see any parallel speedup. The number after -d should be the number of threads you want to use, and should agree with the OMP_NUM_THREADS environment variable set above.

If all the compute nodes are busy then the salloc command will wait until nodes are free.
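Putting the OpenMP steps together, a complete interactive session might look like the following sketch (the thread count and program name are examples; these commands only work on Beskow itself):

```shell
# Book one node for four hours on the summer-2016 allocation
salloc -N 1 -t 4:00:00 -A summer-2016

# Use all 32 cores of the node
export OMP_NUM_THREADS=32

# Launch on the compute node; -d matches OMP_NUM_THREADS and
# -cc none lets the threads spread across the cores
aprun -n 1 -d 32 -cc none ./example.x
```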

Running MPI programs on Beskow

MPI programs are started in a similar way to OpenMP programs: again you must use

salloc -N 2 -t 4:00:00 -A summer-2016

to obtain an allocation, then use the aprun command to launch the job on the compute nodes.

aprun -n 32 ./example.x

In this example we will start 32 MPI tasks (there are 32 cores per node on the Beskow nodes).
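Since the salloc command above books two nodes, a run that uses both of them could look like this sketch (assumption: aprun's -N option sets the number of MPI tasks per node, as on other Cray XC systems):

```shell
# 64 MPI tasks in total, 32 per node, filling both booked nodes
aprun -n 64 -N 32 ./example.x
```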

If you do not use aprun and instead try to start your program directly on the login node, you will get an error similar to:

Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(408): Initialization failed
MPID_Init(123).......: channel initialization failed
MPID_Init(461).......:  PMI2 init failed: 1