The Intel X540-T1 Single Port Ethernet Converged Network Adapter is used by many OEMs (Original Equipment Manufacturers) as a single-chip LAN on Motherboard (LOM) solution for delivering 10 Gigabit Ethernet (10 GbE) on server platforms. It works with legacy Gigabit Ethernet (GbE) switches and low-cost Cat 6 and Cat 6A cabling, and auto-negotiation between 1 GbE and 10 GbE provides the backwards compatibility needed for a smooth transition and easy migration to 10 GbE.

The MAC and PHY are integrated into a single chip, which reduces cost, eliminates the need for an active heat sink and lowers per-port power consumption. Flexible reach from 1 meter to 100 meters supports the latest network architectures, including Top of Rack (ToR), Middle of Row (MoR) and End of Row (EoR). Alongside a 10x increase in performance, the adapter features Unified Networking (iSCSI, FCoE and LAN), virtualization support (VMDq and SR-IOV) and Flexible Port Partitioning (FPP). It provides bandwidth-intensive applications with affordable 10 GbE network performance and cost-effective RJ-45 connectivity for distances up to 100 meters.

The X540-T1 is a single-port 10GBASE-T adapter with a low-profile design, enabling higher bandwidth and throughput from standard and low-profile PCIe slots and servers. Its ability to efficiently balance network loads across multiple CPU cores increases performance on multi-processor systems when used with Receive-Side Scaling from Microsoft or Scalable I/O on Linux. The adapter also supports remote booting to an iSCSI or FCoE drive.

iSCSI simplifies SAN connectivity

iSCSI uses Ethernet to carry storage traffic, extending the familiarity and simplicity of Ethernet to storage networking without the need for SAN-specific adapters or switches. Intel's Open FCoE solution enables the adapter to support Fibre Channel payloads encapsulated in Ethernet frames.
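As a sketch of how iSCSI SAN connectivity looks from a Linux host, the standard open-iscsi tools discover and attach a remote volume over the adapter's ordinary Ethernet link; the portal address and target IQN below are placeholders, not values from this document:

```shell
# Discover iSCSI targets advertised by a storage array (placeholder portal address)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to a discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2003-01.example.storage:array1 -p 192.0.2.10:3260 --login

# The remote LUN now appears as an ordinary block device (e.g. /dev/sdb)
lsblk
```

No SAN-specific HBA is involved: the storage traffic rides the same 10 GbE port as LAN traffic.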
Open FCoE is supported on 10GBASE-T adapters, as are 10GBASE-T switches. The Open FCoE architecture uses FCoE initiators in the Microsoft Windows and Linux operating systems and the VMware ESXi hypervisor to deliver high-performance FCoE solutions using standard 10 GbE Ethernet adapters. This approach lets organizations simplify the data center and standardize on a single adapter for LAN and SAN connectivity. The adapter is designed to fully offload the FCoE data path, delivering full converged network adapter (CNA) functionality without compromising power efficiency or interoperability.

Flexible Port Partitioning (FPP)

The adapter enables Flexible Port Partitioning (FPP), through which virtual controllers can be used directly by the Linux host and/or assigned to virtual machines. FPP uses the SR-IOV functionality to assign up to 63 processes per port to virtual functions in Linux. It lets administrators partition their 10 GbE bandwidth across multiple processes, ensuring quality of service (QoS) by giving each assigned process equal bandwidth. Network administrators may also rate-limit each of these services to control how much of the 10 GbE pipe is available to each process.

Data Center Bridging (DCB) Delivers Lossless Ethernet

Data Center Bridging (DCB) provides lossless delivery, congestion notification, priority-based flow control and priority groups. The combination of 10 GbE and unified networking helps organizations overcome connectivity challenges and simplify the data center infrastructure.

Unified Networking

Unified Networking solutions allow you to combine the traffic of multiple data center networks, such as LAN and SAN, onto a single efficient network fabric. NFS, iSCSI or Fibre Channel over Ethernet (FCoE) can carry both network and storage traffic at speeds of up to 10 Gbps. Unified Networking is available on every server, either through a LAN-on-Motherboard (LOM) implementation or via an add-in Network Interface Card (NIC).
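A minimal sketch of partitioning and rate-limiting a port on Linux, assuming an SR-IOV-capable driver such as ixgbe; the interface name, VF count and rate value are placeholders, and older iproute2 releases spell the rate option `rate` rather than `max_tx_rate`:

```shell
# Enable 4 virtual functions on the port (sysfs path varies by system; eth0 is a placeholder)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Cap VF 0 at 1000 Mbps so one assigned process cannot monopolize the 10 GbE pipe
ip link set dev eth0 vf 0 max_tx_rate 1000

# Verify the per-VF configuration
ip link show dev eth0
```

Each virtual function can then be used by the host directly or passed through to a virtual machine.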
Open Architecture

Open Architecture integrates networking with the server, reducing complexity and overhead while enabling a flexible and scalable data center network.

Proven Ethernet

Unified Networking is built on trusted Intel Ethernet technology, enabling customers to deploy FCoE or iSCSI while maintaining the quality of their traditional Ethernet networks. The adapter is compliant with the European Union RoHS directive restricting the use of hazardous materials and uses lead-free technology.

I/O Features

The adapter supports MSI-X to minimize interrupt overhead. The low latency interrupts feature can bypass the automatic moderation of time intervals between interrupts, based on the sensitivity of the incoming data. Header splitting and replication on receive help the driver focus on the relevant part of the packet without having to parse it. Multiple queues for packet handling eliminate waiting and buffer overflow, and provide efficient packet prioritization.

Tx/Rx IP, SCTP, TCP and UDP checksum offloading (IPv4, IPv6) lowers processor usage, and checksum and segmentation capabilities are extended to new standard packet types. Tx TCP segmentation offload (IPv4, IPv6) also increases throughput and is compatible with the large-send offload feature in Microsoft Windows Server operating systems.

Compatibility with x8 and x16 standard and low-profile PCI Express slots enables each port to operate without interfering or competing with other ports. Receive/transmit-side scaling for Windows and scalable I/O for Linux (IPv4, IPv6, TCP/UDP) direct interrupts to specific processor cores to improve CPU usage. RJ-45 connections over Cat 6A cabling ensure compatibility with cable lengths up to 100 meters.

Virtualization Features

Virtual Machine Device Queues (VMDq) offload data sorting from the hypervisor to silicon, improving data throughput and CPU usage.
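On Linux, the offload and scaling features above can be inspected and toggled with ethtool; the interface name and IRQ number below are placeholders for illustration:

```shell
# List which offloads are currently enabled on the adapter (eth0 is a placeholder)
ethtool -k eth0

# Enable Rx/Tx checksum offload and TCP segmentation offload
ethtool -K eth0 rx on tx on tso on

# Show how receive-side scaling spreads flows across the hardware queues
ethtool -x eth0

# Steer one queue's interrupt to a specific core via its IRQ affinity mask
# (the IRQ number 45 is a placeholder; see /proc/interrupts for real values)
echo 2 > /proc/irq/45/smp_affinity
```

Distributing queue interrupts across cores is what lets multi-processor systems keep up with a saturated 10 GbE link.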
An enhanced Quality of Service (QoS) feature provides weighted round-robin servicing of Tx data and prevents head-of-line blocking. Sorting is based on MAC addresses and VLAN tags. 64 queues per port provide loopback functionality: data transferred between virtual machines within the same physical server does not have to go out to the wire and back, improving throughput and CPU usage. Replication of multicast and broadcast data is also supported.

The PCI-SIG SR-IOV implementation offers 64 virtual functions per port. The physical configuration of each port is divided into multiple virtual ports, and each virtual port is assigned directly to an individual virtual machine, bypassing the virtual switch in the hypervisor for near-native performance. Integration with Intel Virtualization Technology for Directed I/O (Intel VT-d) provides data protection between virtual machines by assigning separate physical memory addresses to each virtual machine.

Advanced packet filtering includes:
- 24 exact-matched addresses (unicast or multicast)
- A 4096-bit hash filter for unicast and multicast frames
- Promiscuous (unicast and multicast) transfer mode support
- Optional filtering of invalid frames

VLAN support with VLAN tag insertion, stripping and packet filtering for up to 4096 VLAN tags enables the creation of multiple VLAN segments. The adapter supports most network operating systems, enabling widespread deployment.
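As a sketch of the VLAN capabilities in use on Linux, a tagged sub-interface can be created with iproute2 while the adapter inserts and strips the 802.1Q tags in hardware; the interface name and VLAN ID are placeholders:

```shell
# Create a VLAN sub-interface with tag 100 on the adapter (eth0 is a placeholder)
ip link add link eth0 name eth0.100 type vlan id 100
ip link set dev eth0.100 up

# Let the NIC insert/strip the 802.1Q tag in hardware rather than in software
ethtool -K eth0 rxvlan on txvlan on
```

Traffic sent through eth0.100 then carries VLAN tag 100, keeping that segment separate from untagged traffic on the same physical port.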