Description

The platform brings together 6660 cores, 42 GPUs and 49.6 TB of memory, for a total computing power of around 1 PFlops. It notably comprises 78 thin nodes, 16 big nodes, 10 bi-GPU nodes, 3 quad-GPU nodes and 1 octo-GPU node, plus hybrid nodes equipped with FPGA cards. A visualisation node is also available through OpenOnDemand.
Access to the cluster is organised into partitions, some of which impose usage limits.
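A minimal batch-script sketch for targeting a partition, assuming the scheduler is Slurm (not stated on this page); the partition name "cpu" is an assumption, so check `sinfo` for the actual partition names and their limits:

```shell
#!/bin/bash
# Hedged sketch, assuming a Slurm scheduler and a hypothetical
# partition named "cpu" -- run `sinfo` to list the real partitions
# and `scontrol show partition <name>` to see their limits.
#SBATCH --partition=cpu
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

srun hostname
```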

Compute nodes

CPU

  • 48 thin nodes (2017) – dual-processor Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz servers, 128 GB of memory
    -> 28 cores and 4 GB/core
  • 24 thin nodes (2022) – dual-processor AMD EPYC 7513 @ 2.60GHz servers, 256 GB of memory
    -> 64 cores and 4000 MB/core
  • 12 big nodes (2017) – dual-processor Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz servers, 512 GB of memory
    -> 28 cores and 16 GB/core
  • 4 big nodes (2022) – dual-processor AMD EPYC 7513 @ 2.60GHz servers, 1 TB of memory
    -> 64 cores and 16000 MB/core
  • 6 nodes (2025) – dual-processor AMD EPYC 9745 @ 2.4GHz servers with 1.5 TB of memory
    -> 256 cores and 6144 MB/core
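The memory-per-core figures above follow from dividing a node's total memory by its core count; a quick shell check using the 2022 thin-node figures from the list (counting 1 GB as 1000 MB):

```shell
# Per-core memory for the 2022 thin nodes: 256 GB shared by 64 cores.
# Figures taken from the list above; 1 GB is counted as 1000 MB here.
total_mb=$((256 * 1000))
cores=64
echo $((total_mb / cores))   # prints 4000, i.e. 4000 MB/core
```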

GPU

  • 6 bi-GPU nodes (2019) – dual-processor Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz servers, 384 GB of memory and 2 Tesla V100-32G GPU cards
    -> 40 cores, 2 Tesla V100 GPUs and 9 GB/core
  • 2 quad-GPU nodes (2019) – dual-processor Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz servers, 768 GB of memory and 4 Tesla V100 SXM2-32G-NVLink GPU cards
    -> 40 cores, 4 Tesla V100 GPUs and 18 GB/core
  • 1 quad-GPU node (2022) – dual-processor AMD EPYC 7513 @ 2.60GHz server, 1 TB of memory and 4 A100-40GB GPUs
    -> 64 cores, 4 A100 GPUs and 16000 MB/core
  • 4 bi-GPU nodes (2025) – dual-processor AMD EPYC 9335 @ 3.00GHz servers with 768 GB of memory and 2 H200-141GB GPUs
    -> 128 cores, 2 H200 GPUs and 6000 MB/core
  • 1 octo-GPU node (2025) – dual-processor Intel(R) Xeon(R) Platinum 8568Y+ @ 2.30GHz server with 2 TB of memory and 8 H200-141GB GPUs
    -> 96 cores, 8 H200 GPUs and 21333 MB/core
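Requesting GPUs on such nodes typically goes through the scheduler's GPU resource option; a hedged sketch, assuming a Slurm scheduler with GRES-managed GPUs (the partition name "gpu" is an assumption — check `sinfo` for the real names):

```shell
#!/bin/bash
# Hedged sketch, assuming Slurm with GRES-managed GPUs; the partition
# name "gpu" is an assumption -- check `sinfo` and `scontrol show node`
# for the actual partition and GRES names on this cluster.
#SBATCH --partition=gpu
#SBATCH --gres=gpu:2          # request 2 of a node's GPUs (e.g. a bi-GPU node)
#SBATCH --cpus-per-task=8
#SBATCH --time=02:00:00

srun nvidia-smi               # list the GPUs allocated to the job
```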

FPGA

  • 4 tri-FPGA nodes (2022) – dual-processor AMD EPYC 7502 @ 2.50GHz servers, 1 TB of memory and 3 Xilinx U280 FPGA boards
    -> 64 cores, 3 FPGA cards and 16000 MB/core

Visualisation

  • 1 visualisation node (2019) – dual-processor Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz server, 192 GB of memory and 1 Tesla P40 GPU card
    -> 36 cores, 1 Tesla P40 GPU and 192 GB of RAM

Hosted nodes

Storage

  • /users : 512 TB : user directories and results
  • /scratch : 292 TB : fast distributed file system (BeeGFS) for running computations
  • Next Cloud Recherche : 300 TB : research data storage
  • Backup : 600 TB
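Since /scratch is the fast file system for running computations while /users is backed up, a common pattern is to stage input data to /scratch, compute there, and copy results back. A minimal sketch of that pattern (the per-user directory layout is an assumption; temp directories stand in for the real paths so the sketch runs anywhere):

```shell
# Hedged sketch of a stage-in / stage-out workflow between /users (backed
# up) and /scratch (fast, BeeGFS). Temp dirs stand in for the real paths;
# on the cluster you would use /users/$USER and /scratch/$USER instead.
users_dir=$(mktemp -d)    # stands in for /users/$USER/project
scratch_dir=$(mktemp -d)  # stands in for /scratch/$USER
mkdir -p "$users_dir/input"
echo "data" > "$users_dir/input/params.txt"

work="$scratch_dir/run_$$"                 # unique working dir on fast storage
mkdir -p "$work" "$work/results"
cp -r "$users_dir/input" "$work/"          # stage in
tr a-z A-Z < "$work/input/params.txt" > "$work/results/out.txt"  # the "computation"
cp -r "$work/results" "$users_dir/"        # stage results back to backed-up storage
rm -rf "$work"                             # free scratch space when done
```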