bwForCluster

Overview

bwForCluster is a collection of federated Tier-3 regional clusters with NVIDIA GPUs, available to the University of Stuttgart for research and teaching. For development and small jobs, consider using bwUniCluster instead: it requires less paperwork and offers much more diverse hardware.

Each cluster has its own set of intended uses.

The complete list of DFG subject areas for each cluster is available in bwHPC Domains.

Partitions and nodes

For details, refer to:

See the cluster status page for outage notifications.

Filesystems

Access

For University of Stuttgart personnel, applications are processed by the HLRS. Follow the instructions outlined in the HLRS page bwForCluster Access. You need to communicate your personal information and write a short abstract of your research or teaching project. Once your application is approved, you will need to register an account at KIT and fill out a questionnaire. The review phase takes a few working days.

Be advised that entitlements are time-limited: one year for students, or the contract end date for academic staff. No reminder is sent before TIK revokes the entitlement. Students must request an extension before the cutoff date. Academic staff whose contract is renewed must request an extension before the end date of the old contract (mention the new contract end date in the e-mail). To check your entitlements, log into bwIDM, open the “Shibboleth” tab and look for http://bwidm.de/entitlement/bwForCluster.

Afterwards, create an account on the bwHPC service by following the instructions in Registration/bwForCluster. Two-factor authentication (2FA) is required, and SMS is not an option. If you don’t have a YubiKey or a device capable of managing software tokens, you can use the KeePassXC software instead (see TOTP).

Each cluster can be used free of charge; however, compute time is tracked in a Rechenvorhaben (RV, German for “compute project”). To reduce the administrative burden on both the ICP and bwHPC sides, PIs have applied for group RVs. Please add yourself as a co-worker to the RV that best aligns with your research plans:

  • Christian Holm: Helix for soft matter (“Equilibrium properties and dynamics of hydrogels”, see RV project description)

  • Alexander Schlaich: JUSTUS 2 for quantum chemistry (expired)

  • Maria Fyta: JUSTUS 1 for quantum chemistry (expired)

Get the RV acronym and password from the PI to register as a co-worker. You will get a confirmation e-mail with the RV acronym and a link to the service registration page, where you will set up your password and 2FA. The PI responsible for the RV will be notified of your registration by e-mail.

Once access is granted, refer to the bwForCluster user documentation. See also Using bwForCluster for building software and submitting jobs.

RV registration form

An RV is only valid for one year and must be renewed by the PI. Once an RV is approved, the PI receives an acronym and a password that group members can then use when registering on the cluster. To apply for an RV, use the bwForCluster RV registration page. The PI must fill out a form with the following fields:

Compute Project (RV)

  • RV Title: project title

  • RV Description (2000 characters): description of the intended work by the PI and all co-workers

  • Scientific Field:

    • choose “Natural Sciences” as the main field

    • choose the subfield based on the cluster:

      • Helix: e.g. “Statistical Physics, Soft Matter, Biological Physics, Nonlinear Dynamics”

      • JUSTUS2: e.g. “Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas”

  • Additionally assigned subject fields: optional keywords, such as machine-learned potentials, lattice-Boltzmann, etc.

  • Field of activity: select “research”

  • Type of activity: select “production”

  • Parallel Paradigm: check all boxes that apply to your software

    • Sequential: software can only use 1 CPU core per job, no GPU

    • Parallel (distributed memory): for MPI-parallel software

    • Parallel (shared memory): for OpenMP-capable software

    • Parallel (accelerators): for GPU-capable software

  • Programming language: when building software from sources, select the relevant languages, e.g. C, C++, Fortran, CUDA, Python

  • Numerical methods: e.g. density functional theory, molecular dynamics, Monte Carlo, lattice-Boltzmann, Finite element method, etc.

  • Software packages: e.g. ESPResSo, GROMACS, NAMD, TensorFlow, etc.

Requested resources (for a limited period of one year)

  • Required CPU hours for 12 months: budget ~300,000 core hours per person per year, rounded up to the nearest million

  • Planned maximum number of parallel used CPU cores per job

  • Estimated maximal memory requirements per CPU core (in GB): 2 GB for typical usage. Make sure the hardware has enough RAM available; if not, factor that in when choosing the maximum number of parallel CPU cores. For instance, if the hardware provides 2 GB per core and your jobs need 4 GB per core, you must use only half the CPU cores on each node while reserving all of them, because Slurm cannot oversubscribe RAM.

  • Maximum persistent disk space for this RV (in GB)

  • Estimated maximal temporary disk space per job (in GB)
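The arithmetic behind the CPU-hour and memory fields above can be sketched as follows. This is an illustration only: the 300,000 core hours per person comes from the budgeting rule above, while the 64-core node with 2 GB per core is a hypothetical example, not the specification of any actual cluster.

```python
import math

def cpu_hour_budget(n_people, hours_per_person=300_000):
    """Core-hour request: per-person yearly budget, rounded up to the
    nearest million as suggested above."""
    raw = n_people * hours_per_person
    return math.ceil(raw / 1_000_000) * 1_000_000

def usable_cores_per_node(cores_per_node, ram_per_core_gb, needed_gb_per_core):
    """Cores actually usable per node when the per-core memory need exceeds
    the hardware's RAM-per-core ratio. The remaining cores must stay idle
    (but reserved), because Slurm cannot oversubscribe RAM."""
    total_ram_gb = cores_per_node * ram_per_core_gb
    return min(cores_per_node, total_ram_gb // needed_gb_per_core)

# A 5-person group -> 1,500,000 core hours, rounded up to 2,000,000.
print(cpu_hour_budget(5))  # 2000000

# Hypothetical 64-core node with 2 GB/core, but the job needs 4 GB/core:
# only 32 cores per node can be used.
print(usable_cores_per_node(64, 2, 4))  # 32
```

The second function gives the value to enter as the planned maximum number of parallel CPU cores per job when your memory requirement, not the core count, is the binding constraint.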

Own Personal Data (RV responsible)

  • Institute: write “Institute for Computational Physics”

  • Other fields: automatically filled out from your TIK account

Obligations

Use of the clusters must be acknowledged in scientific publications. Citation details of these publications must be communicated to the bwHPC-S5 project (publications@bwhpc.de).

For details, refer to:

The PI of the RV will have to contribute to DFG reports and possibly to DFG proposals too. For details, refer to Registration/bwForCluster/RV.

Publications