Intel Ethernet Network Adapter E810-CQDA2 Internal Fiber 100000 Mbit/s

SKU: E810CQDA2
Availability: In stock
Price: £626.50 (inc. VAT) / £522.08 (ex. VAT)
Overview
Intel Ethernet Network Adapter E810-CQDA2
More Information
Connectivity technology: Wired
Host interface: PCI Express
Interface: Fiber
Maximum data transfer rate: 100000 Mbit/s
SKU: E810CQDA2
EAN: 5032037172141
Manufacturer: Intel
Availability: In Stock

Compare Products

Intel Ethernet Network Adapter E810-CQDA2 Internal Fiber 100000 Mbit/s
Price: £626.50 (inc. VAT) / £522.08 (ex. VAT)
SKU: E810CQDA2

Lenovo 01CV830 network card Internal Fiber 16000 Mbit/s
Price: £989.71 (inc. VAT) / £824.76 (ex. VAT)
SKU: 01CV830

Mellanox Technologies MCX512A-ACAT network card Internal Fiber 25000 Mbit/s
Price: £745.06 (inc. VAT) / £620.88 (ex. VAT)
SKU: MCX512A-ACAT

QNAP QXG-100G2SF-E810 network card Internal Fiber 100000 Mbit/s
Price: £911.04 (inc. VAT) / £759.20 (ex. VAT)
SKU: QXG-100G2SF-E810

Broadcom BCM957508-N2100G network card Internal Fiber 100000 Mbit/s
Price: £928.73 (inc. VAT) / £773.94 (ex. VAT)
SKU: BCM957508-N2100G

Nvidia ConnectX-6 Dx Internal Fiber 100000 Mbit/s
Price: £1,215.71 (inc. VAT) / £1,013.09 (ex. VAT)
SKU: 900-9X658-0056-SB1

Description

Intel Ethernet Network Adapter E810-CQDA2
iWARP/RDMA
iWARP delivers converged, low-latency fabric services to data centers through Remote Direct Memory Access (RDMA) over Ethernet. The key iWARP components that deliver low latency are Kernel Bypass, Direct Data Placement, and Transport Acceleration.
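
A minimal sketch of the RDMA resource setup that kernel-bypass stacks such as iWARP build on, using the standard libibverbs API (link with -libverbs). The choice of the first device, the buffer size, and the access flags are illustrative assumptions, not E810-specific values.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);   /* enumerate RDMA-capable devices */
        if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);     /* open the first device (assumption) */
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;     /* protection domain */
        if (!pd) { fprintf(stderr, "device open or PD allocation failed\n"); return 1; }

        size_t len = 4096;                                      /* example buffer size */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,            /* pin and register memory so the NIC  */
                                       IBV_ACCESS_LOCAL_WRITE | /* can place data into it directly,    */
                                       IBV_ACCESS_REMOTE_WRITE);/* bypassing the kernel on the fast path */
        if (!mr) { fprintf(stderr, "ibv_reg_mr failed\n"); return 1; }

        printf("registered MR: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }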

Intel® Data Direct I/O Technology
Intel® Data Direct I/O Technology is a platform technology that improves I/O data processing efficiency for data delivery and data consumption from I/O devices. With Intel DDIO, Intel® Server Adapters and controllers talk directly to the processor cache without a detour via system memory, reducing latency, increasing system I/O bandwidth, and reducing power consumption.

PCI-SIG* SR-IOV Capable
Single-Root I/O Virtualization (SR-IOV) involves natively (directly) sharing a single I/O resource between multiple virtual machines. SR-IOV provides a mechanism by which a Single Root Function (for example a single Ethernet Port) can appear to be multiple separate physical devices.
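
A minimal sketch of how a Linux host typically instantiates SR-IOV Virtual Functions by writing to the device's sriov_numvfs sysfs attribute. The PCI address 0000:3b:00.0 and the VF count of 4 are placeholder assumptions; the program must run as root on a host whose adapter, firmware, and BIOS have SR-IOV enabled.

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical PCI address of the adapter's Physical Function */
        const char *path = "/sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs";

        FILE *f = fopen(path, "w");
        if (!f) { perror("fopen"); return 1; }

        /* ask the driver to create 4 Virtual Functions; each VF appears as its
           own PCI device that can be assigned to a virtual machine.
           (If VFs already exist, the kernel typically requires writing 0 first
           before a different non-zero count is accepted.) */
        if (fprintf(f, "4\n") < 0) { perror("fprintf"); fclose(f); return 1; }

        fclose(f);
        return 0;
    }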

Flexible Port Partitioning
Flexible Port Partitioning (FPP) technology uses the industry-standard PCI-SIG SR-IOV specification to efficiently divide a physical Ethernet device into multiple virtual devices, providing Quality of Service by ensuring each process is assigned to a Virtual Function and receives a fair share of the bandwidth.

Virtual Machine Device Queues (VMDq)
Virtual Machine Device Queues (VMDq) is a technology designed to offload some of the switching done in the VMM (Virtual Machine Monitor) to networking hardware specifically designed for this function. VMDq drastically reduces the overhead associated with I/O switching in the VMM, which greatly improves throughput and overall system performance.

Lenovo 01CV830
The Emulex 16 Gb (Generation 6) Fibre Channel (FC) host bus adapters (HBAs) are an ideal solution for all Lenovo System x servers requiring high-speed data transfer in storage connectivity for virtualized environments, data backup, and mission-critical applications. They are designed to meet the needs of modern networked storage systems that use high-performance, low-latency solid-state drives for caching and persistent storage as well as hard disk drive arrays.

The Emulex 16 Gb Gen 6 FC HBAs feature ExpressLane™, which prioritizes mission-critical traffic in congested networks, ensuring maximum application performance on flash storage arrays. They also seamlessly support Brocade ClearLink™ diagnostics through Emulex OneCommand® Manager, ensuring the reliability and manageability of the storage network when connected to Brocade Gen 5 FC SAN fabrics.

Mellanox Technologies MCX512A-ACAT
Intelligent RDMA-enabled network adapter card with advanced application offload capabilities for High-Performance Computing, Web 2.0, Cloud, and Storage platforms.

ConnectX-5 EN supports two ports of 100Gb Ethernet connectivity while delivering low, sub-600 ns latency, extremely high message rates, and PCIe switch and NVMe over Fabrics offloads. ConnectX-5 provides the highest-performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.

QNAP QXG-100G2SF-E810
The dual-port QXG-100G2SF-E810 100 GbE network expansion card with the Intel® Ethernet Controller E810 supports PCIe 4.0 and provides up to 100Gbps of bandwidth to overcome performance bottlenecks. With iWARP/RDMA and SR-IOV support (available soon), the QXG-100G2SF-E810 greatly boosts network efficiency and is ideal for I/O-intensive and latency-sensitive virtualization and data centers. The QXG-100G2SF-E810 is a perfect match for all-flash storage, realizing the highest performance potential for ever-demanding IT challenges.

Ensure reliable data transfer with Forward Error Correction (FEC)
The QXG-100G2SF-E810 supports FEC to overcome packet loss (a potentially common occurrence in fast, long-distance networks). FEC allows the receiving side to detect and correct bit errors, ensuring reliable data transmission over "noisy" communication channels. With three FEC modes, you can use a suitable cable and switch to build an optimal network environment.
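
A minimal sketch of selecting an FEC mode from a Linux host through the ethtool ioctl interface, roughly what `ethtool --set-fec <iface> encoding rs` does. The interface name eth0 and the choice of RS-FEC are assumptions for illustration; which mode actually applies depends on the driver, link speed, cable, and link partner.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        const char *ifname = "eth0";                 /* hypothetical name of the 100GbE port */

        struct ethtool_fecparam fec = {
            .cmd = ETHTOOL_SFECPARAM,                /* "set FEC parameters" request */
            .fec = ETHTOOL_FEC_RS                    /* Reed-Solomon (RS) FEC mode   */
        };

        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&fec;

        int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works as an ioctl handle */
        if (sock < 0) { perror("socket"); return 1; }

        if (ioctl(sock, SIOCETHTOOL, &ifr) < 0) {    /* hand the request to the NIC driver */
            perror("SIOCETHTOOL");
            close(sock);
            return 1;
        }

        close(sock);
        return 0;
    }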

Pair with high-speed switches for high-performance, low-latency data centers
The QXG-100G2SF-E810 network expansion card can be connected to a switch either with a QSFP28 cable or with a QSFP28-to-four-SFP28 breakout cable. You can also configure network redundancy via the switch to achieve network failover for continuous service and high availability.

Unleash the full potential of the NVMe all-flash TS-h2490FU
Supporting twenty-four (24) U.2 NVMe Gen 3 x4 SSDs, QNAP’s flagship TS-h2490FU NVMe all-flash storage features a PCIe Gen 4 x16 slot that enables the QXG-100G2SF-E810 to fully realize 100Gbps performance and eliminate bottlenecks in modern data centers, virtualization, cloud applications, and mission-critical backup/restore tasks.

Supports Windows and Linux servers/workstations
Besides QNAP devices, the QXG-100G2SF-E810 supports many platforms (including Windows 10, Windows Server, and Linux/Ubuntu) allowing you to attain optimal business performance for a wider range of system applications and services. The higher bandwidth density with reduction in links helps reduce cabling footprint and operational costs.

Nvidia ConnectX-6 Dx
Dual-Port 100GbE/Single-Port 200GbE SmartNIC
ConnectX-6 Dx SmartNIC is the industry’s most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC provides up to two ports of 100Gb/s or a single-port of 200Gb/s Ethernet connectivity and delivers the highest return on investment (ROI) of any smart network interface card. ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50Gb/s (PAM4) and 25/10Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.

Short Description
Intel E810-CQDA2: Intel Ethernet Network Adapter E810-CQDA2
Lenovo 01CV830: Emulex 16Gbps Gen 6 FC Single-Port HBA Adapter for Lenovo System x Servers
Mellanox MCX512A-ACAT: ConnectX-5 EN Network Interface Card, 10/25GbE Dual-Port SFP28, PCIe 3.0 x8
QNAP QXG-100G2SF-E810: 2x 100G QSFP28, PCIe 4.0 x16
Broadcom BCM957508-N2100G: Dual-Port 100 Gb/s Ethernet, PCI Express 4.0 x16, OCP 3.0 SFF
Nvidia ConnectX-6 Dx: ConnectX-6 Dx EN adapter card, 100GbE, OCP 3.0, with Host Management, Dual-Port QSFP56, PCIe 4.0 x16, No Crypto, Thumbscrew (Pull Tab) Bracket

Manufacturer
Intel, Lenovo, Mellanox Technologies, QNAP, Broadcom, Nvidia