PNY NVIDIA A100 TCSA100M-PB 40GB HBM2 PCIe4.0 250W
Unprecedented Acceleration for the World's Highest-Performing Elastic Data Centers
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world's highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20x higher performance than the prior NVIDIA Volta generation. A100 can efficiently scale up or be partitioned into seven isolated GPU instances, with Multi-Instance GPU (MIG) providing a unified platform that enables elastic data centers to dynamically adjust to shifting workload demands.
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale, while allowing IT to optimize the utilization of every available A100 GPU.
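To make the MIG capability described above concrete, here is a minimal sketch in Python. It assumes the NVIDIA driver and the `nvidia-ml-py` package (which provides the `pynvml` module) are installed; the device index 0 is a placeholder for whichever GPU you want to inspect.

```python
# Minimal sketch: check whether MIG mode is enabled on the first GPU via NVML.
# Assumes the NVIDIA driver and the nvidia-ml-py package are installed;
# enabling MIG itself is typically done with `nvidia-smi -i 0 -mig 1`.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    enabled = current == pynvml.NVML_DEVICE_MIG_ENABLE
    print(f"MIG enabled: {enabled} (pending mode: {pending})")
finally:
    pynvml.nvmlShutdown()
```

Creating the GPU instances themselves (the up-to-seven-way partitioning mentioned above) is then handled through `nvidia-smi mig` or the corresponding NVML MIG APIs.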
| Precision | Dense | With Sparsity |
|---|---|---|
| FP64 | 9.7 TFLOPS | N/A |
| FP64 Tensor Core | 19.5 TFLOPS | N/A |
| FP32 | 19.5 TFLOPS | N/A |
| TF32 Tensor Core | 156 TFLOPS | 312 TFLOPS |
| BFLOAT16 Tensor Core | 312 TFLOPS | 624 TFLOPS |
| FP16 Tensor Core | 312 TFLOPS | 624 TFLOPS |
| INT8 Tensor Core | 624 TOPS | 1248 TOPS |
| INT4 Tensor Core | 1248 TOPS | 2496 TOPS |
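As a hedged illustration of how software typically reaches the Tensor Core datatypes in the table above (PyTorch is our assumption here, not something this datasheet mandates): TF32 accelerates ordinary FP32 matmuls once the corresponding flag is set, while FP16/BF16 are usually reached through autocast.

```python
# Sketch only (assumes PyTorch with a CUDA build and an Ampere-class GPU).
import torch

# TF32: FP32 matmuls are routed through TF32 Tensor Cores when enabled.
torch.backends.cuda.matmul.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c_tf32 = a @ b  # executes on TF32 Tensor Cores on A100

# FP16/BF16: reached via automatic mixed precision; inputs are cast
# down while accumulation remains in FP32.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    c_bf16 = a @ b
```

The doubled figures in the "With Sparsity" column additionally require weights pruned to the 2:4 structured pattern, described under the Tensor Core feature below.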
| Specification | Value |
|---|---|
| Thermal Solution | Passive |
| vGPU Support | NVIDIA Virtual Compute Server (vCS) |
| System Interface | PCIe 4.0 x16 |
| Maximum Power Consumption | 250 W |
| CUDA Cores | 6912 |
| Streaming Multiprocessors | 108 |
| Tensor Cores (Gen 3) | 432 |
| GPU Memory | 40GB HBM2, ECC on by default |
| Memory Interface | 5120-bit |
| Memory Bandwidth | 1555 GB/s |
| NVLink | 2-way, 2-slot bridge, 600 GB/s bidirectional |
| MIG (Multi-Instance GPU) Support | Yes, up to 7 GPU instances |
Data Center Class Reliability: Designed for 24x7 data center operations and driven by power-efficient hardware and components selected for optimum performance, durability, and longevity. Every NVIDIA A100 board is designed, built, and tested by NVIDIA to the most rigorous quality and performance standards, ensuring that leading OEMs and system integrators can meet or exceed the most demanding real-world conditions.
NVIDIA Ampere Architecture: NVIDIA A100 is the world's most powerful data center GPU for AI, data analytics, and high-performance computing (HPC) applications. Building upon the major SM enhancements from the Turing GPU, the NVIDIA Ampere architecture enhances tensor matrix operations and the concurrent execution of FP32 and INT32 operations.
Third-Generation Tensor Cores: Purpose-built for the deep learning matrix arithmetic at the heart of neural network training and inferencing, the NVIDIA A100 includes enhanced Tensor Cores that accelerate more datatypes (TF32 and BF16) and introduce a new Fine-Grained Structured Sparsity feature that delivers up to 2x throughput for tensor matrix operations compared to the previous generation.
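The 2:4 pattern behind Fine-Grained Structured Sparsity is simple to state: in every contiguous group of four weights, two are kept and two are zeroed. The sketch below (plain PyTorch, illustration only; actually dispatching to the sparse Tensor Core path requires tooling such as NVIDIA's cuSPARSELt library or the APEX sparsity utilities, which are not shown) prunes a weight tensor to that pattern.

```python
# Illustrative sketch of 2:4 structured sparsity: per group of four
# consecutive weights, keep the two largest-magnitude entries.
import torch

def prune_2_of_4(w: torch.Tensor) -> torch.Tensor:
    """Zero the two smallest-magnitude entries in each group of four."""
    groups = w.reshape(-1, 4)                     # requires numel % 4 == 0
    keep = groups.abs().topk(2, dim=1).indices    # 2 survivors per group
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(1, keep, True)
    return (groups * mask).reshape(w.shape)

w = torch.randn(8, 8)
print(prune_2_of_4(w))  # exactly 50% of entries are now zero
```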
| Specification | Value |
|---|---|
| CUDA® Cores | 6912 |
| Architecture | NVIDIA Ampere |
| GPU Solutions and Use Cases | NVIDIA Virtual Compute Server (vCS), NVIDIA NVLink, Visualization and AI, NVIDIA AI Enterprise for VMware |
| VR Support | Yes |
| Cooling | Passive (server) |
| Single-Precision Performance | Up to 19.5 TFLOPS |
| Memory Type | HBM2 |
| Memory Capacity | 40GB |
| Mounting and Card Profile | PCIe FHHL / HHHL |
| Warranty | 3 years |