
Editorial

The upcoming summer holidays are an opportunity to reflect on a busy period that has brought many challenges, but also many good results, various examples of which are covered in this newsletter.

The most prominent challenge for PDC is the continuing work on getting Dardel in place in its final configuration. The year started with the inauguration of the first phase (Dardel Inauguration). The event was attended by over a hundred people, which shows the strong interest in the system. During this initial phase, a partition of CPU-only nodes and a large-scale parallel file system were deployed. These rapidly became a major workhorse for many research projects, from large-scale projects using many nodes to small-scale projects using just a few cores. The good news is that both this compute partition and the capacity of the parallel file system will be significantly extended towards the end of the year. The new hardware, which is funded by SNIC, was ordered in May.

Before the new extension is installed, the second phase of Dardel will be put in place (Dardel Second Phase on the Way). It is based on innovative new hardware, namely AMD’s MI250X graphics processing units (GPUs). Their impressive performance has only recently been disclosed publicly: each GPU delivers almost 50 TFLOPS, and there will be four such GPUs per node. To put this into perspective, just two of Dardel’s 56 new GPU nodes would outperform the whole Lindgren system, which PDC operated ten years ago. This new technology has made it possible to realise the world’s first exascale system, Frontier, at Oak Ridge National Laboratory in the USA. This system, together with its smaller sibling LUMI, made it to the top of the recent Top500 and Green500 lists. The Top500 list ranks systems by their floating-point throughput when executing the LINPACK benchmark programme, while the Green500 list ranks systems by their power efficiency.
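The comparison with Lindgren can be checked with some back-of-the-envelope arithmetic. The per-GPU, per-node, and node-count figures below come from the text above; Lindgren’s peak performance is an assumed figure included purely for illustration.

```python
# Back-of-the-envelope comparison of Dardel's GPU partition with Lindgren.
# Figures from the text: ~50 TFLOPS per MI250X GPU, 4 GPUs per node, 56 nodes.
# Lindgren's peak (~305 TFLOPS) is an assumed figure, not taken from the text.

TFLOPS_PER_GPU = 50
GPUS_PER_NODE = 4
NUM_GPU_NODES = 56
LINDGREN_PEAK_TFLOPS = 305  # assumed peak of the retired Lindgren system

per_node = TFLOPS_PER_GPU * GPUS_PER_NODE        # 200 TFLOPS per node
two_nodes = 2 * per_node                         # 400 TFLOPS: more than Lindgren
full_partition = NUM_GPU_NODES * per_node        # 11,200 TFLOPS = 11.2 PFLOPS

print(f"Two GPU nodes: {two_nodes} TFLOPS (vs ~{LINDGREN_PEAK_TFLOPS} TFLOPS)")
print(f"Full GPU partition: {full_partition / 1000:.1f} PFLOPS")
```

Even on these rough numbers, a single pair of new nodes exceeds the peak of the decade-old system, and the full 56-node partition is well into the petaflops range.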

This power efficiency only translates into energy efficiency for real-life applications if the hardware is exploited efficiently by the software. Therefore, work on applications and getting them ready for the GPU phase of Dardel and for LUMI is of utmost importance. In this context, PDC can report on two highlights: a new computational fluid dynamics code (Neko) and porting the molecular dynamics code GROMACS (Preparing GROMACS for Heterogeneous Exascale HPC Systems), both of which demonstrate the usability of Dardel's GPU partition for high-performance computing (HPC) simulations. However, this new partition will also be very interesting for machine learning workflows. We are working to ensure that the necessary software is available as soon as the hardware is (Boosting AI/ML Research on Dardel).

HPC applications should not only make the best possible use of the underlying hardware resources but also be user-friendly and versatile. A best-practice example is the VeloxChem software (VeloxChem Workshop). While this software is being prepared for Dardel’s powerful GPUs, its features are also being improved on a smaller scale as it becomes a central element of a new electronic book called eChem (eChem: Computational Chemistry from Laptop to HPC). This makes the application particularly suitable for learning and exploring quantum chemistry problems interactively. This is just one example of PDC’s efforts to support training and education in the context of HPC. We are also very happy to be part of the third phase of CodeRefinery, a very successful training programme that is partially funded by the Nordic e-Infrastructure Collaboration, NeIC (CodeRefinery Update), and, of course, we are continuing our regular introductory training courses to help researchers get started on Dardel (Introduction to PDC Systems Workshop).

The Dardel HPC system at PDC will soon be augmented by the Dardel Cloud, which will provide a virtualised computing infrastructure as well as a cloud storage system (Dardel Cloud). With this resource, PDC will expand its service portfolio to serve more researchers and make data sharing easier.

While PDC’s prime focus is on enabling computational research in Sweden, we continue doing this within a rapidly evolving European context. The PRACE implementation projects have, in the past, offered many opportunities to work with European partners and to contribute to the European HPC ecosystem. While PRACE as an organisation continues to exist, the implementation projects are coming to an end, and thus the role of PRACE is changing (PRACE: Change of Pace). At the same time, the EuroHPC Joint Undertaking activities are gaining momentum. This includes the procurement of pre-exascale systems like LUMI, for which we encourage Swedish researchers to apply for computing resources (LUMI Inauguration & Applying to Use LUMI), as well as the development of the European network of competence centres (EuroCC Lithuania visits Stockholm). Many of PDC’s efforts in support of application domains also remain European (Update on BioExcel and PerMed CoEs).

It is not just the European HPC landscape that is changing; significant changes are on the way in the Swedish landscape too. After twenty years, SNIC will be replaced by a new organisation (SNIC Transition News). Providing services for Swedish research based on high-end HPC resources will remain a national effort, but the way this is organised is changing. Decisions on the details are still being made and will be reported in the next edition of this newsletter.

The summer months will hopefully provide the recreation needed to address these and other challenges!

Dirk Pleiter, Director PDC