The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace™ and Hopper™ architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications.
CPU+GPU designed for giant-scale AI and HPC
Key Features
High-density 1U 2-node GPU system; two nodes in a 1U form factor. Each node supports the following:
Integrated NVIDIA® H100 GPU (1 per node)
NVIDIA Grace Hopper™ Superchip (Grace CPU and H100 GPU)
NVLink®-C2C (Chip-to-Chip) high-bandwidth, low-latency interconnect between CPU and GPU at 900 GB/s
Up to 576GB of coherent memory per node including 480GB LPDDR5X and 96GB of HBM3 for LLM applications
2x PCIe 5.0 x16 slots per node supporting NVIDIA BlueField®-3 or ConnectX®-7
7 Hot-Swap Heavy Duty Fans with Optimal Fan Speed Control
New 900 gigabytes per second (GB/s) coherent interface, 7X faster than PCIe Gen5
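The 7X figure can be sanity-checked with a quick calculation (a sketch; it assumes the comparison is against a single PCIe 5.0 x16 link counted bidirectionally):

```python
# Rough check of the "7X faster than PCIe Gen5" claim.
# Assumption: the baseline is one PCIe 5.0 x16 link, both directions combined.
PCIE5_GT_S = 32   # GT/s per lane for PCIe 5.0
LANES = 16

# 128b/130b encoding makes usable throughput ~= raw rate, so
# 32 GT/s * 16 lanes / 8 bits ~= 64 GB/s per direction.
per_direction_gb_s = PCIE5_GT_S * LANES / 8      # 64.0 GB/s
bidirectional_gb_s = 2 * per_direction_gb_s      # 128.0 GB/s

NVLINK_C2C_GB_S = 900                            # total NVLink-C2C bandwidth
ratio = NVLINK_C2C_GB_S / bidirectional_gb_s
print(f"NVLink-C2C vs PCIe 5.0 x16: {ratio:.1f}x")  # → 7.0x
```

900 / 128 ≈ 7.03, which matches the advertised 7X when PCIe bandwidth is counted bidirectionally.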
Supercharges accelerated computing and generative AI with HBM3 and HBM3e GPU memory
Runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, HPC SDK, and Omniverse™
The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing maximum flexibility and expansion capability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures plug-and-play compatibility.
Technical Specifications
Supermicro’s new NVIDIA MGX line of servers includes:
ARS-111GL-NHR – 1 NVIDIA GH200 Grace Hopper Superchip, Air-Cooled
ARS-111GL-NHR-LCC – 1 NVIDIA GH200 Grace Hopper Superchip, Liquid-Cooled
ARS-111GL-DHNR-LCC – 2 NVIDIA GH200 Grace Hopper Superchips, 2 Nodes, Liquid-Cooled
ARS-121L-DNR – 2 NVIDIA Grace CPU Superchips (1 per node across 2 nodes), 288 cores in total
ARS-221GL-NR – 1 NVIDIA Grace CPU Superchip in 2U
SYS-221GE-NR – Dual-socket 4th Gen Intel Xeon Scalable processors with up to 4 NVIDIA H100 Tensor Core or 4 NVIDIA PCIe GPUs
Every MGX platform can be enhanced with NVIDIA BlueField®-3 DPU and/or NVIDIA ConnectX®-7 interconnects for high-performance InfiniBand or Ethernet networking.
| Chassis size | 1U Rack |
|---|---|
| Number of nodes | 2 |
| CPU | NVIDIA GH200 Grace Hopper™ Superchip |
| CPU manufacturer | NVIDIA |
| CPU socket | System on Chip |
| Chipset | SoC |
| Number of CPUs | 2x CPU (1 per node) |
| Memory slots | n/a |
| Memory type | LPDDR5X (on-package) |
| Memory speed | 5600 MHz |
| GPU count | 2 |
| SSD/HDD interface | PCIe 5.0 |
| Drive bay form factor | E1.S |
| PSU power | 2700W |
| PSU certification | 80 PLUS Titanium |
| PSU redundancy | Yes |
| Application | GPU server, AI, AI Training, AI Inference and Machine Learning, AI / Deep Learning, HPC |
| Warranty | 3 years |
Configuration