bwUniCluster¶
Overview¶
bwUniCluster 2.0 is a Tier-3, heterogeneous regional cluster with NVIDIA GPUs, available to the University of Stuttgart for general-purpose computing and teaching. For research projects, consider using bwForCluster instead, which offers more resources.
See the cluster status page for outage notifications.
See the bwUniCluster2.0/Hardware and Architecture page for more information.
| Compute node | Nodes | Sockets | Cores | Clock speed | RAM | Local SSD | Bus | Accelerators | VRAM | Interconnect |
|---|---|---|---|---|---|---|---|---|---|---|
| Thin | 200 + 60 | 2 | 40 | 2.1 GHz | 96‒192 GB | 0.96 TB | SATA | - | - | 100 Gbps |
| HPC | 260 | 2 | 40 | 2.1 GHz | 96 GB | 0.96 TB | SATA | - | - | 100 Gbps |
| IceLake | 272 | 2 | 64 | 2.6 GHz | 256 GB | 1.8 TB | NVMe | - | - | 200 Gbps |
| Fat | 6 | 4 | 80 | 2.1 GHz | 3 TB | 4.8 TB | NVMe | - | - | IB HDR |
| GPU x4 | 14 | 2 | 40 | 2.1 GHz | 384 GB | 3.2 TB | NVMe | 4x V100 | 32 GB | IB HDR |
| GPU x8 | 10 | 2 | 40 | 2.6 GHz | 768 GB | 15 TB | NVMe | 8x V100 | 32 GB | IB HDR |
| IceLake GPU x4 | 15 | 2 | 64 | 2.5 GHz | 512 GB | 6.4 TB | NVMe | 4x A100 / H100 | 80/94 GB | 200 Gbps |
| Login | 4 | 2 | 40 | 2.6 GHz | 384 GB | | | | | 100 Gbps |
Partitions and nodes¶
This cluster uses queues instead of partitions. The dev_* queues are only used for development, i.e. debugging or performance optimization.
| queue | node | default resources | minimum resources | maximum resources |
|---|---|---|---|---|
| dev_single | thin | time=10, mem-per-cpu=1125mb | | time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2); 6 nodes are reserved for this queue |
| single | thin | time=30, mem-per-cpu=1125mb | | time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2) |
| dev_multiple | hpc | time=10, mem-per-cpu=1125mb | nodes=2 | time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2); 8 nodes are reserved for this queue |
| multiple | hpc | time=30, mem-per-cpu=1125mb | nodes=2 | time=72:00:00, nodes=80, mem=90000mb, ntasks-per-node=40, (threads-per-core=2) |
| dev_multiple_il | IceLake | time=10, mem-per-cpu=1950mb | nodes=2 | time=30, nodes=8, mem=249600mb, ntasks-per-node=64, (threads-per-core=2); 8 nodes are reserved for this queue |
| multiple_il | IceLake | time=10, mem-per-cpu=1950mb | nodes=2 | time=72:00:00, nodes=80, mem=249600mb, ntasks-per-node=64, (threads-per-core=2) |
| dev_gpu_4_a100 | IceLake + A100 | time=10, mem-per-gpu=127500mb, cpus-per-gpu=16 | | time=30, nodes=1, mem=510000mb, ntasks-per-node=64, (threads-per-core=2) |
| gpu_4_a100 | IceLake + A100 | time=10, mem-per-gpu=127500mb, cpus-per-gpu=16 | | time=48:00:00, nodes=9, mem=510000mb, ntasks-per-node=64, (threads-per-core=2) |
| gpu_4_h100 | IceLake + H100 | time=10, mem-per-gpu=127500mb, cpus-per-gpu=16 | | time=48:00:00, nodes=5, mem=510000mb, ntasks-per-node=64, (threads-per-core=2) |
| fat | fat | time=10, mem-per-cpu=18750mb | mem=180001mb | time=72:00:00, nodes=1, mem=3000000mb, ntasks-per-node=80, (threads-per-core=2) |
| dev_gpu_4 | gpu4 | time=10, mem-per-gpu=94000mb, cpus-per-gpu=10 | | time=30, nodes=1, mem=376000mb, ntasks-per-node=40, (threads-per-core=2); 1 node is reserved for this queue |
| gpu_4 | gpu4 | time=10, mem-per-gpu=94000mb, cpus-per-gpu=10 | | time=48:00:00, nodes=14, mem=376000mb, ntasks-per-node=40, (threads-per-core=2) |
| gpu_8 | gpu8 | time=10, mem-per-gpu=94000mb, cpus-per-gpu=10 | | time=48:00:00, nodes=10, mem=752000mb, ntasks-per-node=40, (threads-per-core=2) |
Source: bwUniCluster2.0/Batch Queues.
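The queue limits above translate directly into Slurm batch directives. As an illustration, here is a minimal sketch of a two-node MPI job on the HPC nodes; the queue name, module name, and executable are assumptions and should be checked against the bwUniCluster2.0 batch queue documentation and `module avail`.

```bash
#!/bin/bash
# Minimal sketch of a two-node MPI job on the HPC nodes.
# Queue name, module name, and executable are assumptions.
#SBATCH --partition=multiple        # assumed production queue of the HPC nodes
#SBATCH --nodes=2                   # this queue requires at least 2 nodes
#SBATCH --ntasks-per-node=40        # at most one task per physical core
#SBATCH --time=24:00:00             # wall time, up to 72:00:00 on this queue
#SBATCH --mem=90000mb               # per-node memory limit of the HPC nodes
#SBATCH --job-name=mpi_example

module load mpi/openmpi             # assumed module name; check `module avail`

# srun inherits the allocation defined by the #SBATCH directives above
srun ./my_mpi_program
```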
Thin nodes¶
There are 260 nodes, each equipped with two Intel Xeon Gold 6230 CPUs (20 cores, 40 threads, 2.1 GHz, 125 W), 96 GB or 192 GB of RAM, and an InfiniBand HDR interconnect (100 Gbps, blocking). These nodes have no GPUs.
HPC nodes¶
There are 260 nodes, each equipped with two Intel Xeon Gold 6230 CPUs (20 cores, 40 threads, 2.1 GHz, 125 W), 96 GB of RAM, and an InfiniBand HDR interconnect (100 Gbps). These nodes have no GPUs.
IceLake nodes¶
There are 272 nodes, each equipped with two Intel Xeon Platinum 8358 CPUs (32 cores, 64 threads, 2.6 GHz, 250 W), 256 GB of RAM, and an InfiniBand HDR interconnect (200 Gbps). These nodes have no GPUs.
Fat nodes¶
There are 6 nodes, each equipped with four Intel Xeon Gold 6230 CPUs (20 cores, 40 threads, 2.1 GHz, 125 W), 3 TB of RAM, and an InfiniBand HDR interconnect. These nodes have no GPUs.
GPU x4 nodes¶
There are 14 nodes, each equipped with two Intel Xeon Gold 6230 CPUs (20 cores, 40 threads, 2.1 GHz, 125 W), 384 GB of RAM, and an InfiniBand HDR interconnect. Each node has 4 NVIDIA Tesla V100 SXM2 GPUs (300 GB/s NVLink, 300 W, 32 GB HBM2).
GPU x8 nodes¶
There are 10 nodes, each equipped with two Intel Xeon Gold 6248 CPUs (20 cores, 40 threads, 2.6 GHz, 150 W), 768 GB of RAM, and an InfiniBand HDR interconnect. Each node has 8 NVIDIA Tesla V100 SXM2 GPUs (300 GB/s NVLink, 300 W, 32 GB HBM2).
IceLake GPU x4 nodes¶
There are 15 nodes, each equipped with two Intel Xeon Platinum 8358 CPUs (32 cores, 64 threads, 2.5 GHz, 250 W), 512 GB of RAM, and an InfiniBand HDR interconnect (200 Gbps). Each node has either 4 NVIDIA A100 GPUs (80 GB HBM2e) or 4 NVIDIA H100 GPUs (94 GB HBM).
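On the GPU nodes, accelerators must be requested explicitly in addition to CPUs and memory. The following is a minimal sketch of a single-node job using all four A100 GPUs; the queue name, CUDA module, and executable are assumptions and should be verified against the batch queue table above and the bwUniCluster2.0 documentation.

```bash
#!/bin/bash
# Minimal sketch of a 4-GPU job on the IceLake A100 nodes.
# Queue name, module name, and executable are assumptions.
#SBATCH --partition=gpu_4_a100     # assumed queue name for the A100 nodes
#SBATCH --nodes=1
#SBATCH --gres=gpu:4               # request all four GPUs of one node
#SBATCH --cpus-per-gpu=16          # queue default listed above
#SBATCH --mem-per-gpu=127500mb     # queue default listed above
#SBATCH --time=12:00:00            # up to 48:00:00 on this queue

module load devel/cuda             # assumed module name; check `module avail`

nvidia-smi                         # print the GPUs allocated to this job
srun ./my_gpu_program
```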
Login nodes¶
There are 4 nodes, each equipped with 2 CPUs (20 cores, 40 threads, 2.6 GHz, 150 W), 384 GB of RAM, and an InfiniBand HDR interconnect (100 Gbps, blocking).
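The hardware listed above can also be inspected from a login node with standard Slurm tools; a short sketch (the node name is a placeholder):

```bash
# Summary of all queues/partitions: node count, time limit, CPUs, memory, GPUs
sinfo -o "%P %D %l %c %m %G"

# Detailed hardware of a single node (sockets, cores, memory, GRES, features);
# replace <nodename> with a node name reported by sinfo
scontrol show node <nodename>
```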
Filesystems¶
Access¶
For University of Stuttgart personnel, applications are processed by the HLRS. Follow the instructions outlined in the HLRS page bwUniCluster Access. You need to provide your personal information and write a short abstract of your research or teaching project. Once your application is approved, you will need to register an account at KIT and fill out a questionnaire. The review phase takes a few working days.
Be advised that entitlements are time-limited: 1 year for students, or the contract end date for academic staff. No reminder will be sent before entitlements are revoked by TIK. Students need to ask for an extension before the cutoff date. Academic staff whose contract gets renewed need to ask for an extension before the end date of the old contract (in the e-mail, mention the new contract end date). To check your entitlements, log into bwIDM, open the “Shibboleth” tab and look for http://bwidm.de/entitlement/bwUniCluster.
Afterwards, create an account on the bwHPC service by following the instructions in Registration/bwUniCluster. Two-factor authentication (2FA) is required, and SMS is not an option. If you don’t have a YubiKey or a device capable of managing software tokens, you can use the KeePassXC software instead (see TOTP).
Once access is granted, refer to the bwUniCluster2.0 user documentation. See also Using bwUniCluster for building software and submitting jobs.
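For orientation, a first login and a quick look around might look like the sketch below; the login host name is an assumption and should be taken from the bwUniCluster2.0 user documentation, and the prompts reflect the mandatory 2FA.

```bash
# Hedged sketch of a first login; the host name is an assumption -- use the
# address given in the bwUniCluster2.0 user documentation.
ssh <username>@uc2.scc.kit.edu
# You will be prompted for your one-time password (2FA) and your service password.

# Once logged in:
module avail    # list the installed software modules
sinfo -s        # compact overview of the batch queues and node availability
```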
Obligations¶
Use of the cluster must be acknowledged in scientific publications. Citation details of these publications must be communicated to the bwHPC-S5 project (publications@bwhpc.de). For details, refer to bwUniCluster2.0/Acknowledgement.
Publications¶
[Kuron et al., 2019]: ICP ESPResSo simulations on bwUniCluster
[Zeman et al., 2021]: ICP GROMACS simulations on Hazel Hen with support from bwHPC