The platform brings together 4448 cores, 26 GPUs and 34.2 TB of memory, for a computing power of around 400 TFlops. It notably comprises 72 thin nodes, 16 big nodes, 6 bi-GPU nodes, 3 quad-GPU nodes and 4 tri-FPGA nodes. A visualisation node is also available through Open OnDemand.
Access to the cluster is organised into partitions, some of which have usage limitations.
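This page does not fix the scheduling details; assuming the scheduler is Slurm (the usual context for "partitions"), a job targets a partition with the --partition option of sbatch. The sketch below is illustrative only: the partition name "thin" and the resource values are hypothetical placeholders, not the platform's actual configuration.

```python
# Minimal sketch of submitting a batch job to a partition, assuming Slurm;
# the partition name and resource values below are hypothetical.
import subprocess

JOB_SCRIPT = """\
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=thin
#SBATCH --ntasks=28
#SBATCH --mem-per-cpu=4G
#SBATCH --time=01:00:00

srun hostname
"""
# --partition=thin is a hypothetical name: list the real partitions with sinfo.
# --ntasks=28 and --mem-per-cpu=4G mirror a 2017 thin node (28 cores, 4 GB/core).

def submit(script: str) -> str:
    """Pipe a batch script to sbatch on stdin and return its reply."""
    result = subprocess.run(
        ["sbatch"], input=script, text=True,
        capture_output=True, check=True,
    )
    return result.stdout.strip()  # Slurm answers "Submitted batch job <id>"

if __name__ == "__main__":
    print(submit(JOB_SCRIPT))
```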
Compute nodes
CPU
- 48 thin nodes (2017) – dual-processor Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz servers, 128 GB of memory -> 28 cores and 4 GB/core
- 24 thin nodes (2022) – dual-processor AMD EPYC 7513 @ 2.60GHz servers, 256 GB of memory -> 64 cores and 4000 MB/core
- 12 big nodes (2017) – dual-processor Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz servers, 512 GB of memory -> 28 cores and 16 GB/core
- 4 big nodes (2022) – dual-processor AMD EPYC 7513 @ 2.60GHz servers, 1 TB of memory -> 64 cores and 16000 MB/core
GPU
- 6 bi-GPU nodes (2019) – dual-processor Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz servers, 384 GB of memory and 2 Tesla V100-32G GPU cards -> 40 cores, 2 Tesla V100 GPUs and 9 GB/core
- 2 quad-GPU nodes (2019) – dual-processor Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz servers, 768 GB of memory and 4 Tesla V100 SXM2-32G NVLink GPU cards -> 40 cores, 4 Tesla V100 GPUs and 18 GB/core
- 1 quad-GPU node (2022) – dual-processor AMD EPYC 7513 @ 2.60GHz server, 1 TB of memory and 4 NVIDIA A100-40G GPU cards -> 64 cores, 4 A100 GPUs and 16000 MB/core
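How these GPUs are reserved depends on the scheduler configuration; assuming Slurm with a standard GPU GRES (for example an #SBATCH --gres=gpu:2 directive), a job can verify which cards it was actually granted. The check below is a minimal sketch and assumes NVIDIA's nvidia-smi tool is available on the node:

```python
# Minimal sketch for use inside a GPU job: list the cards the job can see,
# assuming nvidia-smi is on the node's PATH.
import subprocess

def visible_gpus() -> list[str]:
    """Return one "name, total memory" line per visible GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for gpu in visible_gpus():
        print(gpu)
```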
FPGA
- 4 tri-FPGA nodes (2022) – dual-processor AMD EPYC 7502 @ 2.50GHz servers, 1 TB of memory and 3 Xilinx U280 FPGA boards -> 64 cores, 3 FPGA cards and 16000 MB/core
Visualisation
- 1 visualisation node (2019) – dual-processor Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz server, 192 GB of memory and 1 Tesla P40 GPU card -> 36 cores, 1 Tesla P40 GPU and 192 GB of RAM
Hosting
The platform hosts:
- 8 medium nodes (40 cores and 384 GB of RAM) as part of the ARTISTIC project
- 1 GPU node (16 cores, 32 GB of RAM and 1 Tesla T4 GPU) for the LAMFA laboratory
Storage
- /users (512 TB): user home directories and results
- /scratch (292 TB): fast BeeGFS distributed file system for running computations (see the staging sketch below)
- Nextcloud Recherche (300 TB): research data storage
- Backup (600 TB)
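Because /scratch is the fast file system meant for running computations while /users holds home directories and results, a common pattern is to stage input data to /scratch, compute there, and copy results back. Only the /users and /scratch mount points come from the list above; the per-user scratch layout and the file names in this sketch are assumptions:

```python
# Minimal staging sketch: /users and /scratch are the mounts listed above;
# the per-user scratch layout and the file names are assumptions.
import getpass
import os
import shutil
from pathlib import Path

def compute(infile: Path, outfile: Path) -> None:
    """Placeholder for the real computation: here it just copies the input."""
    outfile.write_bytes(infile.read_bytes())

user = getpass.getuser()
home = Path("/users") / user
# One scratch directory per job; SLURM_JOB_ID is set inside a batch job.
scratch = Path("/scratch") / user / os.environ.get("SLURM_JOB_ID", "interactive")

scratch.mkdir(parents=True, exist_ok=True)
shutil.copy2(home / "input.dat", scratch / "input.dat")      # stage in
compute(scratch / "input.dat", scratch / "results.dat")      # run on BeeGFS
shutil.copy2(scratch / "results.dat", home / "results.dat")  # stage out
shutil.rmtree(scratch)                                       # free scratch space
```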
Recent developments
Strong support from Amiens Métropole has enabled the platform to be upgraded, and maintenance coverage for its “critical” equipment has just been extended. The 24 additional thin nodes, 4 big nodes, 1 quad-GPU node and 4 tri-FPGA nodes (about 180 TFlops) are mainly intended for submitted projects in the energy and medical fields.