HPC at the ICP¶
Why do we need HPC at the ICP?¶
Research at the ICP is largely built around large-scale simulations, with a focus on soft matter physics, energy materials, active matter and fluid dynamics. Multiscale modeling is often necessary to adequately capture physical phenomena that span different time and length scales; it typically couples particle-based and lattice-based algorithms, each resolving a different scale. Some of these algorithms can leverage GPU accelerators for lattice-based problems and machine learning, while others benefit from large-memory compute nodes for large-scale data analysis or simulations involving billions of particles.
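To make the coupling idea concrete, here is a minimal sketch of a particle-lattice coupling, assuming a simple friction (drag) force F = -gamma * (v_particle - u_fluid) and trilinear interpolation of the fluid velocity at the particle position; all names and parameters are illustrative, not taken from any ICP code.

```python
import numpy as np

def interpolate_velocity(u_lattice, pos, spacing):
    """Trilinear interpolation of a lattice velocity field at position `pos`."""
    idx = pos / spacing
    lo = np.floor(idx).astype(int)
    frac = idx - lo
    n = np.array(u_lattice.shape[:3])
    u = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - frac[0]) if dx == 0 else frac[0]) * \
                    ((1 - frac[1]) if dy == 0 else frac[1]) * \
                    ((1 - frac[2]) if dz == 0 else frac[2])
                node = (lo + [dx, dy, dz]) % n  # periodic boundaries
                u += w * u_lattice[tuple(node)]
    return u

# Toy setup (illustrative values): an 8x8x8 lattice carrying a uniform
# flow in the x-direction, and one particle initially at rest.
u_lattice = np.zeros((8, 8, 8, 3))
u_lattice[..., 0] = 1.0
pos, vel = np.zeros(3), np.zeros(3)
gamma, mass, dt = 0.5, 1.0, 0.01

# The drag force relaxes the particle toward the local fluid velocity.
for _ in range(1000):
    force = -gamma * (vel - interpolate_velocity(u_lattice, pos, spacing=1.0))
    vel += force / mass * dt
    pos += vel * dt

print(vel)  # approaches [1, 0, 0], the fluid velocity
```

In production codes, the lattice field would itself be evolved (for instance by a lattice Boltzmann scheme) and the friction force would be applied back onto the fluid so that momentum is conserved.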
Which HPC expertise do we have?¶
The ICP manages its own cluster, Ant, for high-performance parallel computing [1]. Ant is also used to benchmark and improve the scalability of parallel algorithms, and is complemented by a fleet of GPU-equipped servers exclusively dedicated to software testing. In addition, the ICP has access to the SimTech cluster and to bwHPC resources (bwForCluster, bwUniCluster 2.0). Through the PRACE program, the ICP can apply for computing time at any European HPC facility and currently has development access to the petascale Vega supercomputer.
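To illustrate what such a scalability benchmark measures, the following sketch computes speedup and parallel efficiency from strong-scaling wall-clock times; the timings are invented for illustration, not measurements from Ant.

```python
# Hypothetical strong-scaling data: wall-clock times (in seconds) for the
# same fixed-size problem on increasing core counts.
timings = {1: 1920.0, 2: 985.0, 4: 510.0, 8: 270.0, 16: 150.0, 32: 95.0}

t_serial = timings[1]
for cores, t in sorted(timings.items()):
    speedup = t_serial / t          # how much faster than one core
    efficiency = speedup / cores    # fraction of ideal linear scaling
    print(f"{cores:>3} cores: speedup {speedup:5.2f}, "
          f"parallel efficiency {efficiency:6.1%}")
```

Ideal strong scaling would keep the efficiency at 100% for every core count; the drop-off at higher core counts typically points to communication overhead or load imbalance, which is exactly what such benchmarks help diagnose.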
The University of Stuttgart is an active player in the HPC field: it is a shareholder of the bwHPC initiative [2] and hosts both the HLRS supercomputing center and the IPVS. The ICP is a member of the Center of Excellence MultiXscale [3] [4]. All ICP PIs are project members of the Cluster of Excellence SimTech [5], and Christian Holm was a member of the SFB 716 [6]. The ICP, the University of Stuttgart and SimTech have agreements to cover the participation fees and travel costs of their staff members and students attending EuroHPC training events, in an effort to foster continuous learning in HPC. The ICP employs an HPC-RSE (Jean-Noël Grad), the SFB 1313 a FAIR-RSE (Hamza Oukili), SimTech an RDM-RSE (Sarbani Roy), and IntCDC an RDM-RSE (Matthias Braun). Their role is to assist domain scientists in leveraging highly parallel computing environments, writing quality-assured and future-proof software, libraries and scripts, and making simulation data archivable, findable and re-usable in compliance with the requirements of funding agencies and academic institutions.
HPC-driven research poses unique challenges in terms of software engineering, energy efficiency, software quality assurance, scientific reproducibility and data management. We actively participate in community discussions on these topics and disseminate their outcomes to domain scientists of the University of Stuttgart through regular meetings and seminars. In addition, SimTech organizes the SIGDIUS Seminars, a monthly event where software engineers, data stewards and domain scientists discuss policies, infrastructure and tools. IntCDC organizes a Software Carpentry workshop every semester [14]. The IPVS offers an RSE course, Simulation Software Engineering, every winter semester. The HLRS, SimTech and the University of Stuttgart are founding members of the str-RSE chapter of the German Research Software Engineers association, and manage a rich portfolio of highly extensible research software funded by software-engineering grants [7] [8] [9] [10] [11] [12] [13].
What HPC facilities do we have access to?¶
University clusters
Ant cluster at the ICP
Bee cluster at the ICP
Ehlers cluster at SimTech
Vulcan cluster at the HLRS
bwForCluster at the bwHPC
bwUniCluster at the bwHPC
HPC centers