Earlier projects

TREX

The TREX Center of Excellence (CoE) in high-performance computing (HPC) for quantum chemistry brought together European researchers, HPC stakeholders, and small and medium-sized enterprises (SMEs) to develop and apply high-performance software solutions for quantum mechanical simulations at the exascale. The ultimate goal of the project was to develop a set of flagship quantum Monte Carlo codes able to take full advantage of exascale computers.

To achieve this goal, TREX's main focus was the development of a user-friendly, open-source software suite in the domain of stochastic quantum chemistry simulations, integrating the TREX community codes within an interoperable, high-performance platform. The aim was to greatly enhance the tools available to the research community for the design of new materials and the understanding of the fundamental properties of matter. In parallel, TREX worked on showcases to leverage this methodology for commercial applications, and on developing and implementing software components and services that make it easier for commercial operators and research communities to use HPC resources for these applications.

Focus CoE

FocusCoE was a support platform that helped the European HPC Centres of Excellence (CoEs) to effectively fulfil their roles in the development of extreme-scale software applications for addressing scientific, industrial or societal challenges. FocusCoE coordinated strategic directions and collaboration, and provided support services relating to both academic and industrial outreach and to the promotion of CoE services.
The FocusCoE objectives were:

  1. To create a platform, the EU HPC CoE General Assembly, which allows all HPC CoEs to collectively define an overriding strategy and collaborative implementation for interactions with, and contributions to, the EU HPC Ecosystem;
  2. To support the HPC CoEs to achieve enhanced interaction with industry, and SMEs in particular, through concerted outreach and business development actions;
  3. To instigate concerted action on training by and for the complete set of HPC CoEs, providing a consolidating vehicle for user training offered by the CoEs and by PRACE (through the PRACE Advanced Training Centres, PATCs), and providing cross-area training to the CoEs (e.g. on sustainable business development); and
  4. To promote, in a concerted way, the capabilities of and services offered by the HPC CoEs, and to develop the EU HPC CoE “brand”, raising awareness among stakeholders and both academic and industrial users.

EOSC-Hub

The EOSC-hub brought together multiple service providers to create the Hub: a single contact point for European researchers and innovators to discover, access, use and reuse a broad spectrum of resources for advanced data-driven research.

For researchers, this meant broader access to services supporting their scientific discovery and collaboration across disciplinary and geographical boundaries.

The project mobilised providers from the EGI Federation, EUDAT CDI, INDIGO-DataCloud and other major European research infrastructures to deliver a common catalogue of research data, services and software for research.

EUDAT

EUDAT's vision is to enable European researchers (from any research discipline) to preserve, find, access and process data in a trusted environment that is part of a European Collaborative Data Infrastructure (CDI). This CDI is a network of collaborating, cooperating data centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe’s largest scientific data centres.

EUDAT offers common data services, supporting multiple research communities as well as individuals, through a geographically distributed, resilient network of 35 European organisations (see EUDAT Partners). These shared services and storage resources are distributed across 15 European nations and data is stored alongside some of Europe’s most powerful supercomputers.

ExaFLOW

The main goal of ExaFLOW was to address key algorithmic challenges in CFD (Computational Fluid Dynamics) to enable simulation at exascale, guided by a number of use cases of industrial relevance, and to provide open-source pilot implementations. Thus, driven by problems of practical engineering interest, the project focused on important simulation aspects, including:

  • error control and adaptive mesh refinement in complex computational domains,
  • resilience and fault tolerance in complex simulations,
  • solver efficiency via mixed discontinuous and continuous Galerkin methods and appropriate optimised preconditioners,
  • heterogeneous modelling to allow for different solution algorithms in different domain zones,
  • evaluation of energy efficiency in solver design, and
  • parallel input/output and in-situ compression for extreme data.

INTERTWinE

The INTERTWinE project addressed the problem of programming model design and implementation for exascale systems. The project worked on a number of different APIs and identified combinations of them that are of interest to application developers and that raise interoperability issues. A set of applications and kernels was ported to the various API combinations, providing concrete instances that motivated the project's work on specifications and runtime implementations, served as test beds for new ideas in runtime implementations, and supplied example code for developers.

MaX

MaX was a user-focused, problem-oriented European Centre of Excellence. It supported developers and end users of advanced applications in the field of materials, and worked at the frontiers of both current and future high-performance computing (HPC) technologies to facilitate the evolution of HPC for materials research and innovation. MaX created an ecosystem of capabilities, ambitious applications, data workflows and analysis, as well as user-oriented services. In addition, MaX developed advanced programming models, novel algorithms, domain-specific libraries, in-memory data management, software/hardware co-design and technology-transfer actions to assist the transition to exascale in the materials domain.

AllScale

AllScale aimed to boost the productivity of parallel application development, as well as the portability and runtime efficiency of the resulting applications. The project was also intended to reduce energy needs, and thus improve the resource utilisation efficiency of small- to extreme-scale parallel systems. The outcomes were validated with the following AllScale pilot applications, which relate to space weather simulation, environmental hazards and fluid dynamics:

  • iPIC3D - implicit Particle-in-Cell code for Space Weather Applications
  • AMDADOS - Adaptive Meshing and Data Assimilation for the Deepwater Horizon Oil Spill
  • Fine/Open - Large Industrial unsteady CFD simulations

These applications were provided by the project's consortium partners from small and medium-sized enterprises (SMEs), industry and academia.

SAGE

SAGE was a European Horizon 2020 funded research project with 10 highly respected partners, led by Seagate. A multi-disciplinary, collaborative approach was essential to understand and address the needs of storage systems for the data-intensive applications and use cases of the future. The SAGE project aimed to redefine data storage for the next decade, providing the depth of capabilities needed for the exascale HPC era alongside the breadth required by future ‘Big Data’ applications.

SNIC Cloud

The goal of the SNIC Cloud Infrastructure project was to create a sustainable and generic SNIC Cloud infrastructure (Infrastructure as a Service, IaaS) to form a basis for SNIC Cloud projects and to provide the necessary structure for running project-specific Platform as a Service (PaaS) services. The SNIC Cloud project was run by PDC, HPC2N, C3SE and UPPMAX.

EGEE/EGI

The objective of Enabling Grids for E-Science in Europe (EGEE) and the European Grid Initiative (EGI) was to create and maintain a pan-European Grid infrastructure in collaboration with National Grid Initiatives (NGIs) in order to guarantee the long-term availability of a generic e-infrastructure for all European research communities and their international collaborators. PDC led the EGEE security coordination group.

EPiGRAM

EPiGRAM was an EC-funded FP7 project about exascale computing. The aim of the EPiGRAM project was to prepare the Message Passing and PGAS (Partitioned Global Address Space) programming models for exascale systems by fundamentally addressing their main limitations at the time. The concepts that were developed were tested on, and guided by, two applications in the engineering and space weather domains, chosen from the suite of codes in the EC exascale projects running at the time.

ScalaLife

ScalaLife was an EU-FP7 funded project whose goals were to improve the scalability and performance of several important life science applications. The legacy of the project is the ScalaLife Competence Center for Computational Bio-molecular Research (www.scalalife.eu), which provides long-term expert support with efficient HPC usage and code optimization of selected codes.

CRESTA

CRESTA brought together four of Europe’s leading supercomputing centres with one of the world’s major equipment vendors, two of Europe’s leading programming tools providers and six application and problem owners to explore how the exaflop challenge could be met. The project had two integrated strands: one focused on enabling a key set of co-design applications for exascale, and the other focused on building and exploring appropriate systemware for exascale platforms. The six co-design vehicles represented an exceptional group of applications used by European academia and industry to solve critical grand challenge issues, including biomolecular systems, fusion energy, the virtual physiological human, numerical weather prediction and engineering.

Heat Re-use with Chemistry Building

As a part of PDC's ongoing work to reduce our environmental footprint, we have been engaging in heat re-use projects. Previously, excess heat from the Cray XE6 supercomputer Lindgren was used to heat a building used by the School of Chemical Science and Engineering on the KTH campus.

SWEGRID

SWEGRID was a computational grid consisting of 600 computers organised as six 100-node clusters, with one cluster at each of six sites throughout Sweden. The project commenced in 2003 and was funded by the KA Wallenberg Foundation and SNIC.

GRDI2020

GRDI2020 (Towards a 10-Year Vision for Global Research Data Infrastructures) was a Europe-driven initiative proposed as a Coordination Action under the Seventh Framework Programme (FP7), funded by the GÉANT and e-Infrastructure unit of DG-INFSO of the European Commission.

ECEE

ECEE (Enabling Clouds for eScience) was a collaboration in which the cloud projects NEON, BalticCloud, NGS, GRNET cloud, SARA cloud, UCM (OpenNebula), StratusLab, VENUS-C, SEECCI and CESGA worked together to find out as much as possible, as quickly as possible, about how clouds could help their users in their daily eScience work.

Venus-C

VENUS-C (Virtual multidisciplinary EnviroNments USing Cloud infrastructures) aimed to develop and deploy a cloud computing service for research and industry communities in Europe by offering an industrial-quality, service-oriented platform based on virtualisation technologies.

DEISA

The Distributed European Infrastructure for Supercomputing Applications (DEISA) was a consortium of leading national supercomputing centres that aimed to foster pan-European, world-leading computational science research.

Uni-Verse

Uni-Verse was an EC-funded project based on an open-source, IP-based platform for multi-user, interactive, distributed, high-quality 3D graphics and audio for home, public and personal use. The project developed several tools for Verse, including high-quality 3D audio and acoustic simulation.

The PDC VR-CUBE

The PDC VR-CUBE was a fully immersive visualization environment - it was the first Cave-like device in the world with projection on all six sides of the cube. The PDC VR-CUBE also had an advanced 3D sound system.

OMII-Europe

OMII-Europe (Open Middleware Infrastructure Institute for Europe) aimed to bring together the best technologies from Europe and elsewhere and make them available in an easily usable and supported form to scientists across the European Research Area (ERA). It was funded by the European Commission.

NextGRID

The goal of NextGRID was to develop an architecture for the Next Generation Grid. This project was funded by the European Commission. PDC's contribution to the NextGRID project involved investigating trust federation and mapping mechanisms for managing aggregated security and trust relationships in dynamic virtual organizations belonging to the next generation of Grids.

NEON

The aim of the Northern Europe Cloud Computing project NEON was to review the promises and summarize the overall benefits that cloud computing could provide to the Nordic eScience community.

ICEAGE

ICEAGE stood for the International Collaboration to Extend and Advance Grid Education.

BalticGrid and BalticGrid-II

The Baltic Grid was an EU Integrated Infrastructure Initiative to build and operate a Grid infrastructure in the Baltic states and Belarus. The project was coordinated by KTH PDC; it started on the 1st of November 2005 and ended in April 2010.