NVIDIA ConnectX-7 adapter card, 200Gb/s NDR200 IB, Single-port OSFP, PCIe 5.0 x16 Extension option (Socket Direct ready), Secure boot, No Crypto, MCX75510AAS-HEAT

Product available to order.
SKU: MCX75510AAS-HEAT
Form Factor: PCIe Half Height, Half Length; 2.71 in. x 6.6 in. (68.90 mm x 167.65 mm)
Data Transmission Rate: InfiniBand NDR200 200Gb/s (default speed); Ethernet 200GbE
No. of Ports: Single-port OSFP
PCIe: x16 Gen 4.0/5.0 @ SERDES 16GT/s/32GT/s; optional x16 Gen 4.0 @ SERDES 16GT/s
Socket Direct: Ready (extension option)
Secure Boot: Yes
Crypto: No
Bracket: Tall Bracket
Lifecycle: Mass Production

Protocol Support

InfiniBand: IBTA v1.5a

Auto-Negotiation: NDR200 (2 lanes x 100Gb/s per lane), HDR (50Gb/s per lane), HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane), FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane).

Ethernet: 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI
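The auto-negotiated InfiniBand speeds above follow a simple rule: aggregate port speed equals the number of lanes times the per-lane signaling rate. A minimal sketch of that arithmetic (the helper function and lookup table are illustrative, not part of any NVIDIA API):

```python
# Per-lane InfiniBand signaling rates (Gb/s) for each generation, as listed above.
PER_LANE_GBPS = {
    "NDR": 100.0,     # NDR200 = 2 lanes x 100Gb/s
    "HDR": 50.0,      # HDR100 = 2 lanes x 50Gb/s
    "EDR": 25.0,
    "FDR": 14.0625,
    "SDR": 2.5,
}

def port_speed_gbps(generation: str, lanes: int) -> float:
    """Aggregate port speed in Gb/s: lanes x per-lane rate."""
    return PER_LANE_GBPS[generation] * lanes

print(port_speed_gbps("NDR", 2))   # NDR200 (this card's default)
print(port_speed_gbps("HDR", 2))   # HDR100
```

For example, the NDR200 port on this card is a 2-lane link at 100Gb/s per lane, giving 200Gb/s.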
Electrical and Thermal Specifications
Voltage: 12V, 3.3V AUX
Maximum current: 100mA
Maximum power available through OSFP port: 17W (not thermally supported)
Full electrical and thermal specifications are provided in the NVIDIA ConnectX-7 Electrical and Thermal Specifications document, available on NVOnline after login.

ConnectX-7 Adapters Smart, Accelerated Networking for Modern Data Center Infrastructures

The NVIDIA® ConnectX®-7 family of Remote Direct Memory Access (RDMA) network
adapters supports InfiniBand and Ethernet protocols at speeds up to 400Gb/s.
It enables a wide range of smart, scalable, and feature-rich networking
solutions that address everything from traditional enterprise needs to the
world's most demanding AI, scientific computing, and hyperscale cloud data
center workloads.


Accelerated Networking and Security
ConnectX-7 provides a broad set of software-defined, hardware-accelerated
networking, storage, and security capabilities which enable organizations to
modernize and secure their IT infrastructures. Moreover, ConnectX-7 empowers
agile and high-performance solutions from edge to core data centers to clouds,
all while enhancing network security and reducing the total cost of ownership.


Accelerate Data-Driven Scientific Computing
ConnectX-7 provides ultra-low latency, extreme throughput, and innovative NVIDIA
In-Network Computing engines to deliver the acceleration, scalability, and feature-
rich technology needed for today’s modern scientific computing workloads.


InfiniBand Interface
> InfiniBand Trade Association Spec 1.5
compliant
> RDMA, send/receive semantics
> 16 million input/output (IO) channels
> 256 to 4Kbyte maximum transmission
unit (MTU), 2Gbyte messages
Enhanced Ethernet Networking
> Zero-Touch RoCE
> ASAP² Accelerated Switch and
Packet Processing™ for SDN and VNF
acceleration
> Single Root I/O Virtualization (SR-IOV)
> VirtIO acceleration
> Overlay network acceleration: VXLAN,
GENEVE, NVGRE
> Programmable flexible parser
> Connection tracking (L4 firewall)
> Flow mirroring, sampling and statistics
> Header rewrite
> Hierarchical QoS
> Stateless TCP offloads
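Overlay acceleration matters because encapsulations such as VXLAN add fixed outer-header overhead that the NIC parses and offloads in hardware. A rough sketch of that arithmetic, using the standard VXLAN-over-IPv4 header sizes (the helper name is illustrative):

```python
# Standard VXLAN-over-IPv4 encapsulation overhead, in bytes.
OUTER_ETH = 14   # outer Ethernet header
OUTER_IP4 = 20   # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header
OVERHEAD = OUTER_ETH + OUTER_IP4 + OUTER_UDP + VXLAN_HDR  # 50 bytes total

def inner_mtu(physical_mtu: int) -> int:
    """MTU left for the encapsulated (inner) frame after VXLAN overhead."""
    return physical_mtu - OVERHEAD

print(inner_mtu(1500))   # 1450
print(inner_mtu(9000))   # 8950
```

This is why overlay networks on a 1500-byte underlay typically either lower the inner MTU to 1450 or raise the underlay MTU.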
Cybersecurity
> Inline hardware IPsec encryption and
decryption: AES-GCM 128/256-bit key,
IPsec over RoCE
> Inline hardware TLS encryption and
decryption: AES-GCM 128/256-bit key
> Inline hardware MACsec encryption and
decryption: AES-GCM 128/256-bit key
> Platform security: secure boot with
hardware root-of-trust, secure
firmware update, flash encryption,
and device attestation
Ethernet Interface
> Up to 4 network ports supporting
NRZ, PAM4 (50G and 100G), in various
configurations
> Up to 400Gb/s total bandwidth
> RDMA over Converged Ethernet (RoCE)
Storage Accelerations
> Block-level encryption: XTS-AES
256/512-bit key
> NVMe over Fabrics (NVMe-oF)
> NVMe over TCP (NVMe/TCP)
> T10 Data Integrity Field (T10-DIF)
signature handover
> SRP, iSER, NFS over RDMA, SMB Direct
Advanced Timing and Synchronization
> Advanced PTP: IEEE 1588v2 (any
profile), G.8273.2 Class C, 12
nanosecond accuracy, line-rate
hardware timestamp (UTC format)
> SyncE: Meets G.8262.1 (eEEC)
> Configurable PPS In and Out
> Time-triggered scheduling
> PTP-based packet pacing
> Time-Sensitive Networking (TSN)
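For context on the PTP feature set above: the classic IEEE 1588 delay request-response exchange derives clock offset and mean path delay from four timestamps, which is what the hardware timestamping accelerates. A minimal sketch (function name illustrative; assumes a symmetric path):

```python
# IEEE 1588 delay request-response: t1 = master TX, t2 = slave RX,
# t3 = slave TX, t4 = master RX (all in the same time unit).
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # mean one-way path delay
    return offset, delay

# Example: slave clock runs 100ns ahead over a symmetric 500ns path.
off, d = ptp_offset_and_delay(0, 600, 1000, 1400)
print(off, d)   # 100.0 500.0
```

Hardware timestamping removes software scheduling jitter from t1..t4, which is what makes the nanosecond-class accuracy quoted above achievable.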

Enhanced InfiniBand Networking
> Hardware-based reliable transport
> Extended Reliable Connected (XRC)
> Dynamically Connected Transport
(DCT)
> GPUDirect® RDMA
> GPUDirect Storage

Management and Control
> NC-SI, MCTP over SMBus, and MCTP
over PCIe
> PLDM for Monitor and Control
DSP0248
> PLDM for Firmware Update DSP0267
> PLDM for Redfish Device Enablement
DSP0218
> PLDM for FRU DSP0257

> Security Protocol and Data Model (SPDM) DSP0274
> Serial Peripheral Interface (SPI) to flash
> JTAG IEEE 1149.1 and IEEE 1149.6

PCI Express Interface
> PCIe Gen 5.0 compatible, 32 lanes
> Support for PCIe bifurcation
> NVIDIA Multi-Host™ supports
connection of up to 4x hosts
> Transaction layer packet (TLP)
processing hints (TPH)
> PCIe switch Downstream Port
Containment (DPC)
> Support for MSI/MSI-X mechanisms
> Advanced error reporting (AER)
> Access Control Service (ACS) for
peer-to-peer secure communication
> Process Address Space ID (PASID)
> Address translation services (ATS)
> Support for SR-IOV
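The raw bandwidth of the PCIe link above follows directly from the lane rate, the lane count, and the 128b/130b line encoding used by Gen 4.0 and Gen 5.0. A rough sketch (helper name illustrative; protocol overheads beyond line encoding are ignored):

```python
# PCIe raw link bandwidth: per-lane rate (GT/s) x lanes x encoding efficiency.
# Gen 4.0 runs at 16 GT/s and Gen 5.0 at 32 GT/s, both with 128b/130b encoding.
def pcie_gbytes_per_s(gt_per_s: float, lanes: int) -> float:
    usable_bits = gt_per_s * 1e9 * lanes * (128 / 130)
    return usable_bits / 8 / 1e9   # bits -> GB/s

print(round(pcie_gbytes_per_s(32, 16), 1))  # Gen 5.0 x16: ~63.0 GB/s
print(round(pcie_gbytes_per_s(16, 16), 1))  # Gen 4.0 x16: ~31.5 GB/s
```

So the Gen 5.0 x16 interface comfortably carries the card's 200Gb/s (25 GB/s) line rate, with headroom for the optional Socket Direct split.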
Optimized Capabilities
> Adaptive routing support
> Enhanced atomic operations
> Advanced memory mapping, allowing
user mode registration (UMR)
> On-demand paging (ODP), including
registration-free RDMA memory access
> Enhanced congestion control
> Burst buffer offload
> Single root IO virtualization (SR-IOV)
> Optimized for HPC software libraries
including:
> NVIDIA HPC-X®, UCX®, UCC, NCCL,
OpenMPI, MVAPICH, MPICH,
OpenSHMEM, PGAS
> Collective operations offloads
> Support for NVIDIA Scalable
Hierarchical Aggregation and Reduction
Protocol (SHARP)™
> Rendezvous protocol offload
> In-network on-board memory
Remote Boot
> Remote boot over InfiniBand
> Remote boot over Internet Small
Computer Systems Interface (iSCSI)
> Unified Extensible Firmware Interface
(UEFI)
> Preboot Execution Environment (PXE)
Operating Systems/Distributions
> In-box drivers for major operating
systems:
> Linux: RHEL, Ubuntu
> Windows
> Virtualization and containers
> VMware ESXi (SR-IOV)
> Kubernetes
More Information
Port interface: QSFP112
PCIe card bracket and profile: FHHL/HHHL
Warranty: 3 years