Data Management

This section gives you information about PDC’s storage solutions. Working with PDC can involve transferring data back and forth between your local machine and PDC resources, or between different systems at PDC.

PDC offers two storage systems, AFS and CFS (Lustre), and using PDC efficiently requires knowing when to use which. However, Dardel mainly uses Lustre and has only limited access to AFS (presently only from the login node, to make it easier to transfer data from AFS to Dardel).

If you have a SNIC Swestore allocation, please check the File transfer section for how to transfer files to/from Swestore.

Where to store my data

As the speed of CPU computations keeps increasing, the relatively slow rate of input/output (I/O), or data-access, operations can create bottlenecks and cause programs to slow down significantly. It is therefore very important to pay attention to how your programs do I/O and access data, as this can have a huge impact on the run time of your jobs. Here you will find a quick guide to storing data, ideal if you have just started to use PDC resources.

What are AFS and Lustre?

The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. The Lustre system is a parallel file system optimized for handling data from many clients at the same time.

Why is there more than one file system?

The AFS and Lustre file systems offer different and often complementary functionality. AFS allows a huge number of computers all over the world to share files with each other. Users can define custom groups and control access rights effortlessly, which makes it suitable for project groups. However, AFS has a small storage volume and slow access speed, making it unsuitable for direct access by computational processes. In contrast, Lustre is accessible only from PDC systems, but provides large storage and is highly optimized for fast access by computational processes.


Where do I find it?

You can find AFS at /afs/
You can find Lustre at /cfs/klemming

Before running your processes

  • All files for Beskow computations must go on Lustre
  • Big data files for Tegner computations should be put on Lustre
  • Small data files for Tegner computations can be put on AFS, but should preferably be put on Lustre

Things to remember when using all types of files

  • Minimize I/O operations: larger input/output (I/O) operations are more efficient than small ones – if possible aggregate reads/writes into larger blocks.
  • Avoid creating too many files – post-processing a large number of files can be very hard on the file system.
  • Avoid creating directories with very large numbers of files – instead create directory hierarchies, which also improves interactiveness.
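The points above can be sketched in the shell. The file names below are hypothetical; the pattern is what matters: bundle many small files into one archive, and group files into a hierarchy instead of one flat directory.

```shell
# Hypothetical example: create a few small result files for illustration.
mkdir -p results
for i in 1 2 3; do
    echo "result $i" > "results/out_$i.dat"
done

# A single tar archive is far gentler on the file system than keeping
# (and later post-processing) thousands of small files.
tar -czf results.tar.gz results

# Prefer a directory hierarchy over one flat directory with very many
# entries, e.g. grouping output by run number:
mkdir -p output/run_001 output/run_002
```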

Things to remember when using Lustre

  • Avoid all unnecessary metadata operations – once a file is opened, do as much as possible before closing it again. Do not check the existence of files or stat() files too often.

  • Open files as read-only if possible – read-only files require less locking and therefore put less load on the file system.

  • Avoid using ls with flags like -l, -F, or --color, as these require ls to stat() every file to determine its type, which puts an unnecessary load on the file system. Use such flags only when the extra information is really needed, and do not enable them by default.
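A minimal sketch of these habits, using throwaway file names for illustration only:

```shell
# Plain ls only reads the directory listing; ls -l must stat() every entry.
mkdir -p demo && touch demo/a.txt demo/b.txt

ls demo          # cheap: directory listing only
# ls -l demo     # expensive on Lustre: one stat() per file -- use sparingly

# Do as much as possible per open: write several records through one
# redirection block instead of reopening the file once per line.
{
    echo "record 1"
    echo "record 2"
} >> demo/log.txt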

    Comparison summary of AFS and Lustre

    File system        AFS                                      Lustre
    -----------------  ---------------------------------------  ----------------------------------------
    Suggested usage    1. small files that need backup          1. large files
                       2. unsuited for files accessed           2. program code
                          by computations                       3. files accessed by computations
    Location           /afs/                                    /cfs/klemming
    Storage size       default 5 GB in the home directory       total 5 PB shared by all users on
                       on Beskow and Tegner                     Beskow and Tegner; 12 PB on Dardel
    File access speed  slow                                     fast
    File access        1. own implementation of access          1. supports standard POSIX ACLs
                          control lists (ACLs)
                       2. users can define their own groups
                       3. access permissions per directory
                          (not per file)
    Secure access      uses Kerberos for authentication
    Backup             files in the home directory are          files are not backed up, except home
                       backed up                                directories on Dardel
    Typical areas      1. user home directories                 1. scratch, suitable for temporary
                       2. project volumes (backup optional)        storage
                       3. installation/configuration of the     2. nobackup area, suitable for
                          PDC environment                          analysis results
                       4. source code packages                  3. own developed program code

After running your processes

  • After performing computations at PDC, please move important data files to your own departmental storage system or to the national storage system provided by SNIC (Swestore). Remember that space on Lustre is limited and NOT backed up; however, home directories on Dardel (which reside in Lustre) are backed up.
  • Smaller data files can be moved to AFS on Beskow and Tegner

SNIC environmental variables

To make it easier for users to find the different folders, SNIC provides a number of environment variables that indicate where data should be stored. On Beskow and Dardel the module snic-env is loaded by default, but on Tegner it must be loaded explicitly:

module add snic-env

Table of the environmental variables

Name           Function                                  Value on Beskow and Tegner  Value on Dardel
SNIC_BACKUP    where important data are backed up        your AFS home directory     your klemming home directory
SNIC_NOBACKUP  folder for large data, not backed up      /cfs/klemming/nobackup      /cfs/klemming/projects or
                                                                                     /cfs/klemming/nobackup
SNIC_RESOURCE  name of the cluster you are logged into   Beskow or Tegner            Dardel
SNIC_SITE      name of the site                          PDC                         PDC
SNIC_TMP       scratch folder for storing temporary data /cfs/klemming/scratch       /cfs/klemming/scratch
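A sketch of how these variables might be used in a job script. The fallback values after `:-` are for illustration outside PDC only; on the clusters the snic-env module sets the real values, and `result.dat` is a hypothetical output file.

```shell
# Stage temporary files on the scratch area during a job.
workdir="${SNIC_TMP:-/tmp}/myjob_$$"
resultdir="${SNIC_NOBACKUP:-$HOME/nobackup}"

mkdir -p "$workdir" "$resultdir"

# ... run the computation with its temporary files in $workdir ...
echo "analysis result" > "$workdir/result.dat"

# Keep results on the no-backup area and clean up the scratch space.
mv "$workdir/result.dat" "$resultdir/"
rm -r "$workdir"
```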


Swestore is a large scale storage system for live research data provided by SNIC. It requires a separate allocation to use. Part of the Swestore system is hosted at PDC.

File transfer

We recommend the following methods for transferring files to and from PDC:

  1. scp/rsync: With Secure Copy (scp) and rsync you can copy files between your local machine and the PDC systems. Both use SSH for data transfer, and thus the same authentication as for logging in.

  2. AFS client: With an AFS client on your local machine, transferring files between PDC and your local computer is as easy as drag-and-drop, or using a cp command.

  3. Swestore (dCache): If you have a SNIC Swestore (dCache) allocation, please see here for how to transfer files to/from it.

  4. KTH OneDrive (rclone): Use rclone to transfer data between PDC and the KTH OneDrive cloud storage.

Nodes for file operations

At PDC we have a number of transfer nodes set up. These nodes are dedicated to large file transfers, as well as to extensive file operations involving large amounts of data or many files. It is important that you use these nodes for such operations, so as not to overload the login node.

Dedicated transfer nodes for large file transfers will be set up on Dardel. In the meantime, please use the login node for file transfers.

Type                           Usage
Login node (Dardel)            Submitting jobs and small file transfers; until the
                               transfer nodes are set up, also large transfers and
                               operations on the file system
Login node (Beskow/Tegner)     Submitting jobs and small file transfers
Transfer node (Beskow/Tegner)  Large transfers and operations on the file system