Image may not exactly match the product.

Mellanox ConnectX-5 VPI Adapter Card - Network Adapter - PCIe 3.0 X16 - 100 Gigabit QSFP28 X 1 - For P/N: UCSC-C125-U, UCSC-C4200-SFF, UCSC-C4200-SFF=, UCSC-C4200-SFF-RF - UCSC-O-M5S100GF

SKU:
UCSC-O-M5S100GF
Shipping:
Calculated at Checkout
$1,381.63 (list price $3,252.93 - you save $1,871.30)

Special order only - this item is currently available for purchase, though the ETA depends on Cisco's availability through distribution channels. Feel free to contact us via phone, chat, or select "Request a Quote" to inquire about availability.

Currently available for Pre-Order only

Check details for availability or request a quote

Availability:
This product is currently only available for pre-order

Condition:
New
Network Protocols:
Gigabit Ethernet, 10 Gigabit Ethernet, 25 Gigabit Ethernet, 40 Gigabit Ethernet, 50 Gigabit Ethernet, 100 Gigabit Ethernet
Device Type:
Network adapter
Product Line:
Mellanox ConnectX-5
Connectivity Type:
Wired
Model:
VPI Adapter Card
Data Transfer Rate:
100 Gbps
Host Interface:
PCI Express 3.0 x16

  • HPC environments
    ConnectX-5 VPI for Open Compute Project (OCP) NIC utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies (a minimal RDMA verbs sketch follows this list), delivering high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive, and scalable compute and storage platforms. ConnectX-5 offers significant enhancements to HPC infrastructures by providing MPI, SHMEM/PGAS, and Rendezvous Tag Matching offloads, hardware support for out-of-order RDMA Write and Read operations, as well as additional Network Atomic and PCIe Atomic operations support. ConnectX-5 complements switch adaptive-routing capabilities and supports out-of-order data delivery while maintaining in-order completion semantics. Additionally, the ConnectX-5 NIC provides multipath reliability and efficient support for many network topologies, such as DragonFly+. ConnectX-5 also supports GPUDirect for enhanced Machine Learning applications, Burst Buffer offload for background checkpointing without interfering with the main CPU operations, and the innovative Dynamic Connected Transport (DCT) service that ensures extreme scalability for compute and storage systems.
  • Storage environments
    NVMe storage devices are gaining popularity, offering very fast storage access. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, which improves performance and lowers latency. As with earlier generations of ConnectX adapters, standard block and file access protocols can leverage RoCE for high-performance storage access; the RDMA Write sketch after this list shows the kind of one-sided operation these protocols build on. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Moreover, the intelligent ConnectX-5 flexible pipeline capabilities, including a flexible parser and flexible match-action tables, can be programmed to enable hardware offloads for future protocols. ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. In addition, with ConnectX-5 Network Function Virtualization (NFV), a VM can be used as a virtual appliance. With full data-path operation offloads, as well as hairpin hardware capability and service chaining, data can be handled by the virtual appliance with minimal CPU utilization. With these capabilities, data center administrators benefit from better server utilization while reducing cost, power, and cable complexity, allowing for more virtual appliances, virtual machines, and tenants on the same hardware.
  • Cloud and Web2.0 environments
    Cloud and Web2.0 customers developing their platforms in Software Defined Networking (SDN) environments leverage their servers' operating-system virtual-switching capabilities to enable maximum flexibility. Open vSwitch (OVS) is an example of a virtual switch that allows virtual machines to communicate with each other and with the outside world. The virtual switch traditionally resides in the hypervisor, and switching is based on twelve-tuple matching of flows. A software-based virtual switch or virtual router is CPU-intensive, hurting system performance and preventing full utilization of the available bandwidth. ASAP2 (Mellanox Accelerated Switch and Packet Processing) technology offloads the vSwitch/vRouter by handling the data plane in the NIC hardware without modifying the control plane. This results in significantly higher vSwitch/vRouter performance without the associated CPU load. The vSwitch/vRouter offload functions supported by ConnectX-5 include encapsulation and de-capsulation of overlay-network headers (for example, VXLAN, NVGRE, MPLS, GENEVE, and NSH), stateless offloads of inner packets, packet header re-writes enabling NAT functionality, and more.
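
To make the RDMA concepts in the HPC item above concrete, here is a minimal sketch using the standard Linux libibverbs API (not Cisco or NVIDIA/Mellanox sample code; the device index, 4 KiB buffer size, and access flags are illustrative assumptions). It opens the first RDMA-capable device, such as a ConnectX-5 port, and registers a buffer so the adapter can serve remote reads and writes to it without involving the host CPU. Build with something like "gcc rdma_reg.c -o rdma_reg -libverbs".

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "No RDMA-capable devices found\n");
            return 1;
        }

        /* Open the first device and allocate a protection domain. */
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a 4 KiB buffer so the NIC can access it directly (zero-copy). */
        size_t length = 4096;
        void *buffer = malloc(length);
        struct ibv_mr *mr = ibv_reg_mr(pd, buffer, length,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            perror("ibv_reg_mr");
            return 1;
        }

        /* The rkey and buffer address are exchanged with a peer out of band;
           the peer can then issue RDMA Write/Read operations against this
           memory without involving this host's CPU. */
        printf("Registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               length, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buffer);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }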

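The RoCE-based storage protocols mentioned in the Storage item move data with one-sided RDMA operations. The fragment below is a hedged sketch of posting a single RDMA Write with libibverbs; the function name post_rdma_write is hypothetical, and it assumes the caller supplies an already-connected queue pair (qp), a memory region (mr) registered as in the previous sketch, and the peer's buffer address and rkey obtained out of band.

    #include <stdint.h>
    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* Push the contents of a local registered buffer straight into the peer's
       memory. The remote CPU is not involved; the adapters on both sides
       handle the transfer. */
    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        uint64_t remote_addr, uint32_t remote_rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)mr->addr,
            .length = (uint32_t)mr->length,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,
            .send_flags = IBV_SEND_SIGNALED,  /* request a completion entry */
        };
        wr.wr.rdma.remote_addr = remote_addr;  /* peer buffer address */
        wr.wr.rdma.rkey        = remote_rkey;  /* peer memory key */

        struct ibv_send_wr *bad_wr = NULL;
        int rc = ibv_post_send(qp, &wr, &bad_wr);
        if (rc)
            fprintf(stderr, "ibv_post_send failed: %d\n", rc);
        return rc;
    }

The completion for the signaled write would later be reaped from the send completion queue with ibv_poll_cq().
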
ConnectX-5 with Virtual Protocol Interconnect (VPI) supports 100Gb/s InfiniBand and Ethernet connectivity, very low latency, and a very high message rate, plus NVMe over Fabrics (NVMe-oF) offloads, providing the highest-performance and most flexible solution for Open Compute Project servers and storage appliances, while supporting the most demanding applications and markets: Machine Learning, Data Analytics, and more.
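
As an illustration of what VPI means in practice, the sketch below (again a minimal libibverbs example, with port number 1 assumed) lists the local RDMA devices and reports whether each port is currently running as InfiniBand or as Ethernet (RoCE); the same verbs code path serves either personality of the adapter.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices)
            return 1;

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            struct ibv_port_attr port;

            /* Query port 1; with VPI the same application code is used whether
               the port is configured for InfiniBand or for Ethernet (RoCE). */
            if (ctx && ibv_query_port(ctx, 1, &port) == 0) {
                const char *layer =
                    port.link_layer == IBV_LINK_LAYER_ETHERNET   ? "Ethernet (RoCE)" :
                    port.link_layer == IBV_LINK_LAYER_INFINIBAND ? "InfiniBand" :
                                                                   "unspecified";
                printf("%s: port 1 link layer = %s\n",
                       ibv_get_device_name(devices[i]), layer);
            }
            if (ctx)
                ibv_close_device(ctx);
        }
        ibv_free_device_list(devices);
        return 0;
    }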
