PCI Express (PCIe) | Uses 2x PCIe x16 interfaces.
Up to 200 Gigabit Ethernet | Mellanox adapters comply with the following IEEE 802.3 standards: 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE – IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
Memory |
Overlay Networks | In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocol.
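As an illustration of what these engines offload (not adapter code), below is a minimal Python sketch of VXLAN encapsulation per RFC 7348; the inner frame, VNI, and remote VTEP address are hypothetical placeholders:

```python
import socket
import struct

VXLAN_PORT = 4789  # IANA-assigned VXLAN UDP port (RFC 7348)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Layout: flags byte (0x08 = valid-VNI bit), 3 reserved bytes,
    24-bit VNI, 1 reserved byte.
    """
    header = struct.pack("!B3xI", 0x08, vni << 8)  # VNI sits in the upper 24 bits
    return header + inner_frame

# Hypothetical usage: software encapsulation of a dummy 64-byte frame into
# VNI 5000; the adapter's offload engines perform the equivalent operation
# in hardware, sparing the host CPU.
if __name__ == "__main__":
    packet = vxlan_encapsulate(b"\x00" * 64, vni=5000)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("192.0.2.1", VXLAN_PORT))  # remote VTEP (placeholder)
```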
RDMA and RDMA over Converged Ethernet (RoCE) | ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
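For context, a minimal Python sketch (assuming a Linux host with the RDMA stack loaded, e.g. the mlx5 driver) that enumerates RDMA-capable devices via the standard sysfs path; it is illustrative, not a required configuration step:

```python
import os

# RDMA devices, including RoCE ports exposed by the mlx5 driver,
# appear under this standard Linux sysfs directory.
IB_SYSFS = "/sys/class/infiniband"

def list_rdma_devices() -> list[str]:
    """Return the RDMA device names visible to the host, if any."""
    if not os.path.isdir(IB_SYSFS):
        return []  # RDMA stack not loaded or no capable hardware
    return sorted(os.listdir(IB_SYSFS))

if __name__ == "__main__":
    devices = list_rdma_devices()
    print("RDMA devices:", ", ".join(devices) if devices else "none found")
```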
Mellanox PeerDirect™ | PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes. |
CPU Offload | Adapter functionality enabling reduced CPU overhead, leaving more CPU available for computation tasks: • Open vSwitch (OVS) offload using ASAP2™ • Flexible match-action flow tables • Tunneling encapsulation/decapsulation
Quality of Service (QoS) | Support for port-based Quality of Service, enabling differentiated handling of application requirements for latency and SLA.
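As one hedged example of how an application can mark traffic for such port-based QoS handling on Linux, the sketch below sets the socket priority with SO_PRIORITY; the destination address and priority value are placeholders, and mapping priorities to adapter traffic classes depends on the host's egress configuration:

```python
import socket

# Tag a socket's traffic with a Linux priority (0-6 without CAP_NET_ADMIN);
# with an appropriate egress configuration this maps to a traffic class
# that the adapter's QoS engine can schedule.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 5)
sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 9000))  # placeholder peer
```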
Hardware-based I/O Virtualization | ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. |
Storage Acceleration | A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
SR-IOV | ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
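For illustration only, a minimal Python sketch (assuming a Linux host with root privileges) that creates virtual functions through the standard sysfs interface; the interface name ens1f0 is a hypothetical placeholder:

```python
import pathlib

IFACE = "ens1f0"  # hypothetical ConnectX-6 netdev name; substitute your own
NUMVFS = pathlib.Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
TOTALVFS = pathlib.Path(f"/sys/class/net/{IFACE}/device/sriov_totalvfs")

def enable_vfs(count: int) -> None:
    """Allocate `count` virtual functions via the standard Linux sysfs knob."""
    if NUMVFS.read_text().strip() != "0":
        NUMVFS.write_text("0")  # kernel requires resetting to 0 before resizing
    NUMVFS.write_text(str(count))

if __name__ == "__main__":
    enable_vfs(4)  # expose four VFs for assignment to virtual machines
    print(f"VFs enabled: {NUMVFS.read_text().strip()} of {TOTALVFS.read_text().strip()}")
```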
High-Performance Accelerations | • Tag Matching and Rendezvous Offloads • Adaptive Routing on Reliable Transport • Burst Buffer Offloads for Background Checkpointing |
Operating Systems/Distributions
- RHEL/CentOS
- Windows
- FreeBSD
- VMware
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF-2)
Connectivity
- Interoperable with 1/10/25/40/50/100/200 Gb/s Ethernet switches
- Passive copper cable with ESD protection
- Powered connectors for optical and active cable support
Manageability
ConnectX-6 technology maintains support for manageability through a BMC. The ConnectX-6 PCIe stand-up adapter can be connected to a BMC using the MCTP over SMBus or MCTP over PCIe protocols, like any standard Mellanox PCIe stand-up adapter. To configure the adapter for the specific manageability solution in use by the server, please contact Mellanox Support.