PCIe 5.0 x16 Switch Dual-Root · 12 PCIe 5.0 x16 LP slots · AIOM/OCP 3.0 Support
Key Applications
High Performance Computing
VDI
AI/Deep Learning Training
Media/Video Streaming
Cloud Gaming
Animation and Modeling
Design & Visualization
3D Rendering
Diagnostic Imaging
Key Features
32 DIMM slots, up to 8TB (32x 256GB); Memory Type: 4800MHz ECC DDR5
13 PCIe Gen 5.0 X16 FHFL Slots
AIOM/OCP 3.0 Support
8x 2.5" Hot-swap SATA drive bays
8x 2.5" Hot-swap SATA/SAS drive bays (AOC required)
8x 2.5" Hot-swap NVMe drive bays
8 Hot-Swap Heavy Duty Fans with Optimal Fan Speed Control
4x 2700W (2+2) Redundant Power Supplies, Titanium Level
NVIDIA HGX combines NVIDIA A100 Tensor Core GPUs with high-speed interconnects to form the world’s most powerful servers. With 16 A100 GPUs, HGX has up to 1.3 terabytes (TB) of GPU memory and over 2 terabytes per second (TB/s) of memory bandwidth for unprecedented acceleration.
Compared to previous generations, HGX provides up to a 20X AI speedup out of the box with Tensor Float 32 (TF32) and a 2.5X HPC speedup with FP64. NVIDIA HGX delivers a staggering 10 petaFLOPS, forming the world’s most powerful accelerated scale-up server platform for AI and HPC.
Massive datasets, exploding model sizes, and complex simulations require multiple GPUs with extremely fast interconnections and a fully accelerated software stack. The NVIDIA HGX™ AI supercomputing platform brings together the full power of NVIDIA GPUs, NVIDIA® NVLink®, NVIDIA InfiniBand networking, and a fully optimized NVIDIA AI and HPC software stack from the NVIDIA NGC™ catalog to provide the highest application performance. With its end-to-end performance and flexibility, NVIDIA HGX enables researchers and scientists to combine simulation, data analytics, and AI to drive scientific progress.
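As a quick sanity check on the aggregate figures quoted above, the "up to 1.3 TB" number follows directly from a 16-GPU HGX configuration with 80 GB per A100. A minimal Python sketch (the function name is illustrative; 1 TB is taken as 1000 GB, as in the marketing copy):

```python
# Sanity-check the aggregate GPU memory quoted for a 16-GPU HGX A100
# configuration (80 GB per GPU). Per-GPU size and GPU count are taken
# from the text; everything else is simple arithmetic.

def aggregate_gpu_memory_tb(num_gpus: int, gb_per_gpu: int) -> float:
    """Total GPU memory in TB (using 1 TB = 1000 GB)."""
    return num_gpus * gb_per_gpu / 1000

total_tb = aggregate_gpu_memory_tb(num_gpus=16, gb_per_gpu=80)
print(f"{total_tb:.2f} TB")  # 1.28 TB, quoted above as "up to 1.3 TB"
```

The same arithmetic shows why A100 80GB doubles the platform's memory over the 40GB variant: 16 × 40 GB gives only 0.64 TB.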
Motherboard | Super X13DEG-OA |
Processor | |
CPU | Dual Socket E (LGA-4677), 4th Gen Intel® Xeon® Scalable processors. Note: supports up to 350W TDP CPUs (air-cooled or liquid-cooled) |
Cores | Up to 40C/80T; Up to 60MB Cache |
GPU | |
Supported GPU | GPU-NVH100-80, GPU-NVA100-80-NC |
CPU-GPU Interconnect | PCIe 5.0 x16 Switch Dual-Root |
GPU-GPU Interconnect | NVIDIA® NVLink™ Bridge (optional) |
System Memory | |
Memory | Capacity: 32 DIMM slots, up to 8TB (32x 256GB); Type: 4800MHz ECC DDR5 DRAM |
Memory Voltage | 1.2 V |
Error Detection | ECC |
On-Board Devices | |
Chipset | Intel® C741 |
Network Connectivity | 2x 10GbE BaseT with Intel® X710-AT2 |
IPMI | Intelligent Platform Management Interface (IPMI) 2.0 with virtual media over LAN and KVM-over-LAN support |
Input / Output | |
Video | 1 VGA port(s) |
System BIOS | |
BIOS Type | AMI 32MB SPI Flash EEPROM |
Management | |
Software | OOB Management Package (SFT-OOB-LIC), Redfish API, IPMI 2.0, SSM, Intel® Node Manager, SPM, KVM with dedicated LAN, SUM, NMI, Watch Dog, SuperDoctor® 5 |
Power Configurations | ACPI Power Management; power-on mode for AC power recovery |
PC Health Monitoring | |
CPU | 8+4-phase switching voltage regulator; monitors CPU cores, chipset voltages, and memory |
FAN | Fans with tachometer monitoring; Pulse Width Modulated (PWM) fan connectors; status monitoring for speed control |
Temperature | Monitoring for CPU and chassis environment; thermal control for fan connectors |
Chassis | |
Form Factor | 4U Rackmount |
Model | CSE-418G2TS-R5K40P |
Dimensions and Weight | |
Height | 7" (178mm) |
Width | 17.2" (437mm) |
Depth | 29" (737mm) |
Package | 26.57" (H) x 27" (W) x 41" (D) |
Weight | Net Weight: 65.5 lbs (29.7 kg) Gross Weight: 100 lbs (45.3 kg) |
Available Color | Black Front & Silver Body |
Front Panel | |
Buttons | Power On/Off button, System Reset button |
LEDs | Hard drive activity LED, Network activity LEDs, Power status LED, System Overheat & Power Fail LED |
Expansion Slots | |
PCI-Express (PCIe) | 12x PCIe 5.0 x16 LP slots; 1x PCIe 5.0 x16 slot |
Drive Bays / Storage | |
Hot-swap | 24x 2.5" hot-swap NVMe/SATA/SAS drive bays (8x 2.5" NVMe hybrid; 8x 2.5" NVMe dedicated) |
M.2 | 2 M.2 NVMe |
System Cooling | |
Fans / Liquid Cooling | 8 heavy-duty fans with optimal fan speed control; Direct-to-Chip (D2C) Cold Plate (optional) |
Power Supply | |
PSU | 2700W Redundant Power Supplies with PMBus |
AC Input | 2700W: 200-240Vac / 50-60Hz |
Dimension | 73.5 x 40 x 203 mm |
+12V | Max: 225A / Min: 0A (200Vac-240Vac) |
Certification | Titanium Level |
Operating Environment | |
Environmental Spec. | Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F); Non-operating Temperature: -40°C ~ 60°C (-40°F ~ 140°F); Operating Relative Humidity: 8% ~ 90% (non-condensing); Non-operating Relative Humidity: 5% ~ 95% (non-condensing) |
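Since the Management section above lists a Redfish API alongside IPMI 2.0, basic health and power queries against the BMC can be scripted over HTTPS. Below is a minimal sketch using only the Python standard library; the BMC hostname, auth token, and the `/redfish/v1/Systems/1` resource path are assumptions for illustration (the exact system resource ID can vary by firmware), not details taken from this datasheet:

```python
# Hedged sketch: read the chassis power state via the BMC's Redfish API.
# The BMC hostname, token, and system resource path are illustrative.
import json
import urllib.request

def fetch_power_state(bmc_host: str, token: str) -> str:
    """GET a Redfish System resource and return its PowerState field."""
    req = urllib.request.Request(
        f"https://{bmc_host}/redfish/v1/Systems/1",
        headers={"X-Auth-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_power_state(resp.read().decode())

def parse_power_state(payload: str) -> str:
    """Extract PowerState from a Redfish System resource JSON body."""
    return json.loads(payload)["PowerState"]

# Offline demonstration with an illustrative response body:
sample = '{"Id": "1", "PowerState": "On"}'
print(parse_power_state(sample))  # On
```

In practice the token would come from a Redfish session login, and production code should verify the BMC's TLS certificate rather than trusting it implicitly.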
Deep learning models are exploding in size and complexity, requiring a system with large amounts of memory, massive computing power, and fast interconnects for scalability. With NVIDIA NVSwitch™ providing high-speed, all-to-all GPU communications, HGX can handle the most advanced AI models. With A100 80GB GPUs, GPU memory is doubled, delivering up to 1.3TB of memory in a single HGX. Emerging workloads on the very largest models like deep learning recommendation models (DLRM), which have massive data tables, are accelerated up to 3X over HGX powered by A100 40GB GPUs.
Chassis Size | 4U Rack
---|---
Number of Nodes | 1
CPU | 4th Gen Intel® Xeon® Scalable processors, 5th Gen Intel® Xeon® processors
CPU Manufacturer | Intel
CPU Socket | LGA-4677 (Socket E)
Chipset | Intel C741
Number of CPUs | 2x CPU
Memory Slots | 32 DIMM slots
Memory Type | DDR5 DIMM
Memory Standard | DDR5-4800 MHz
Total PCI-E Slots | 13
GPU/HPC Accelerators | 8
SSD/HDD Interface | PCIe 4.0 x4, SATA
Drive Bay Size | 2.5" 15mm
2.5" Bays | 24
M.2 Connectors | 1
PSU Wattage | 2700W
PSU Certification | 80 PLUS Titanium
PSU Redundancy | 2+2
Application | GPU server, AI, AI Training, AI / Deep Learning, Visual Computing, HPC, Virtualization
Warranty | 3 years
Configuration