DGX H100 P4387 System 640GB Full, Std Support, 3 Years
NVIDIA DGX H100 Purchasing Information
What's Included with NVIDIA DGX H100 Purchase
NOTE:
1 - Both Commercial and EDU SKUs are available.
2 - NVIDIA Enterprise Standard Support covers onsite hardware support, remote hardware and software support, and timely software updates and upgrades.
3 - Premium Support is also available.
4 - Up to two additional years of support can be purchased upfront when ordering an NVIDIA DGX H100.
5 - Installation Service is required when ordering an NVIDIA DGX H100.
6 - Additional Media Retention (CMR or SDMR) Services may be purchased in addition to the equivalent Support Services.
For complete details, view the "DGX Systems Appliance Support Services Terms and Conditions" and the "End-User License Agreement (EULA)".
For additional services, view the "Technical Account Manager (TAM) Services for NVIDIA Products Terms and Conditions".
8x NVIDIA H100 GPUs With 640 Gigabytes of Total GPU Memory
18x NVIDIA® NVLink® connections per GPU, 900 gigabytes per second of bidirectional GPU-to-GPU bandwidth
4x NVIDIA NVSwitches™
7.2 terabytes per second of bidirectional GPU-to-GPU bandwidth, 1.5X more than previous generation
10x NVIDIA ConnectX®-7 400 Gigabits-Per-Second Network Interface
1 terabyte per second of peak bidirectional network bandwidth
Dual 56-Core 4th Gen Intel® Xeon® Scalable Processors and 2 TB System Memory
Powerful CPUs for the most intensive AI jobs
30 Terabytes NVMe SSD
High speed storage for maximum performance
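The headline bandwidth figures above are internally consistent, and the arithmetic is easy to check. The sketch below assumes the published NVLink 4 rate of 50 GB/s bidirectional per link, which is not stated in this datasheet:

```python
# Back-of-the-envelope check of the DGX H100 bandwidth figures above.
# Assumption: each NVLink 4 link carries 50 GB/s bidirectional.

NVLINK_LINKS_PER_GPU = 18
NVLINK_GBPS_PER_LINK = 50    # GB/s, bidirectional (assumed NVLink 4 rate)
GPUS = 8
NICS = 10
NIC_GBITS = 400              # Gb/s per ConnectX-7 port

per_gpu_gbps = NVLINK_LINKS_PER_GPU * NVLINK_GBPS_PER_LINK  # 900 GB/s per GPU
fabric_tbps = GPUS * per_gpu_gbps / 1000                    # 7.2 TB/s through the NVSwitch fabric
network_tbps = NICS * NIC_GBITS / 8 * 2 / 1000              # 1.0 TB/s peak bidirectional network

print(per_gpu_gbps, fabric_tbps, network_tbps)
```

The 1 TB/s network figure counts all ten 400 Gb/s ports in both directions at once (500 GB/s each way), which is how peak bidirectional bandwidth is typically quoted.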
Artificial intelligence has become the go-to approach for solving difficult business challenges. Whether improving customer service, optimizing supply chains, extracting business intelligence,
or designing cutting-edge products and services across nearly every industry, AI gives organizations the mechanism to realize innovation. And as a pioneer in AI infrastructure, NVIDIA DGX™
systems provide the most powerful and complete AI platform for bringing these essential ideas to fruition.
NVIDIA DGX H100 powers business innovation and optimization. The latest iteration of NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX H100 is an AI
powerhouse that features the groundbreaking NVIDIA H100 Tensor Core GPU. The system is designed to maximize AI throughput, providing enterprises with a highly refined, systemized, and
scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. Available on-premises and through a wide variety of access and deployment options, DGX H100 delivers the performance needed for enterprises to solve the biggest challenges with AI.
| Specifications | |
|---|---|
| GPU | 8x NVIDIA H100 Tensor Core GPUs |
| GPU memory | 640GB total |
| Performance | 32 petaFLOPS FP8 |
| NVIDIA® NVSwitch™ | 4x |
| System power usage | 11.3kW max |
| CPU | Dual 56-core 4th Gen Intel® Xeon® Scalable Processors |
| System memory | 2TB |
| Networking | 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI, 400Gb/s InfiniBand or 400Gb/s Ethernet; 2x dual-port NVIDIA ConnectX-7 VPI, 1x 400Gb/s InfiniBand, 1x 200Gb/s Ethernet |
| Management network | 10Gb/s onboard NIC with RJ45; 50Gb/s Ethernet optional NIC; host baseboard management controller (BMC) with RJ45 |
| Storage | OS: 2x 1.92TB NVMe M.2; Internal: 8x 3.84TB NVMe U.2 |
| Software | NVIDIA AI Enterprise (optimized AI software); NVIDIA Base Command (orchestration, scheduling, and cluster management); Ubuntu / Red Hat Enterprise Linux / Rocky (operating system) |
| Support | Comes with 3-year business-standard hardware and software support |
| System weight | 287.6 lb (130.45 kg) |
| System dimensions | Height: 14.0 in (356 mm); Width: 19.0 in (482.2 mm); Length: 35.3 in (897.1 mm) |
| Operating temperature range | 5–30°C (41–86°F) |
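The system-level figures in the table decompose cleanly into per-GPU numbers. The sketch below assumes NVIDIA's published H100 SXM per-GPU specs (~3,958 TFLOPS sparse FP8 and 80 GB of HBM3), which this datasheet does not itself state:

```python
# Decompose the system-level spec figures into per-GPU numbers.
# Assumptions (from the H100 SXM spec, not this datasheet):
#   ~3,958 TFLOPS sparse FP8 and 80 GB HBM3 per GPU.

GPUS = 8
FP8_TFLOPS_PER_GPU = 3958   # sparse FP8 (assumed H100 SXM figure)
HBM_GB_PER_GPU = 80         # assumed H100 SXM memory capacity

system_pflops = GPUS * FP8_TFLOPS_PER_GPU / 1000  # ~31.7, marketed as 32 petaFLOPS
system_hbm_gb = GPUS * HBM_GB_PER_GPU             # 640 GB total GPU memory

print(round(system_pflops, 1), system_hbm_gb)
```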
AI Center of Excellence
AI has bridged the gap between science and business. No longer the domain of experimentation, AI is used day in and day out by companies large and small to fuel their innovation and optimize their business. As the fourth generation of the world’s first purpose-built AI infrastructure, DGX H100 is designed to be the centerpiece of an enterprise AI center of excellence. It’s a fully optimized hardware and software platform that includes full support for the new range of NVIDIA AI software solutions, a rich ecosystem of third-party support, and access to expert advice from NVIDIA professional services. DGX H100 offers proven reliability, with DGX systems being used by thousands of customers around the world spanning nearly every industry.
As the world’s first system with the NVIDIA H100 Tensor Core GPU, NVIDIA DGX H100 breaks the limits of AI scale and performance. Compared with the previous generation, it delivers 9X more performance, 2X faster networking with NVIDIA ConnectX®-7 smart network interface cards (SmartNICs), and high-speed scalability for NVIDIA DGX SuperPOD. The next-generation architecture is supercharged for the largest,
most complex AI jobs, such as natural language processing and deep learning recommendation models.
NVIDIA Base Command powers every DGX system, enabling organizations to leverage the best of NVIDIA software innovation. Enterprises can unleash the full potential of their DGX investment with a proven platform that includes enterprise-grade orchestration and cluster management, libraries that accelerate compute, storage, and network infrastructure, and an operating
system optimized for AI workloads. Additionally, DGX systems include NVIDIA AI Enterprise, offering a suite of software optimized to streamline AI development and deployment.
Leadership-Class Infrastructure on Your Terms
AI for business is about more than performance and capabilities. It’s also about fitting neatly into an organization’s IT envelope and practices. DGX H100 can be installed on-premises for direct management, colocated in NVIDIA DGX-Ready data centers, and accessed through NVIDIA-certified managed service providers.
And with the DGX-Ready Lifecycle Management program, organizations get a predictable financial model to keep their deployment at the leading edge. This makes DGX H100 as easy to use and acquire as traditional IT infrastructure, with no additional burden on busy IT staff—which lets organizations leverage AI for their businesses today instead of waiting for tomorrow.
To learn more about NVIDIA DGX H100, visit
nvidia.com/DGX-H100
| Chassis size | 8U Rack |
|---|---|
| Number of nodes | 1 |
| CPU | 4th Gen Intel® Xeon® Scalable processors |
| CPU manufacturer | Intel |
| CPU socket | FC-LGA4189 (Socket P+) |
| Chipset | Intel C621A |
| Number of CPUs | 2x CPU |
| Memory slots | 32 DIMM slots |
| Memory type | DDR5 DIMM |
| Memory standard | DDR5-4800 MHz |
| Number of GPU/HPC accelerators | 8 |
| SSD/HDD interface | PCIe 4.0 x4, PCIe 3.0 x4/x8, SATA |
| Drive bay size | 2.5" 15mm |
| 2.5" bays | 8 |
| M.2 connectors | 2 |
| PSU power | 3000W |
| PSU certification | 80 PLUS Titanium |
| PSU redundancy | Yes |
| Application | GPU server, AI, AI Training, AI Inference and Machine Learning, AI / Deep Learning, HPC, Big Data |
| Warranty | 3 years |
Configuration