ETH640i Mezz Network Adapter User Guide-6W100


Safety information

To avoid bodily injury or device damage, read the following information carefully before you operate the network adapter.

General operating safety

To avoid bodily injury or damage to the device, follow these guidelines when you operate the network adapter:

·     Only H3C authorized or professional engineers are allowed to install or replace the network adapter.

·     Before installing or replacing the network adapter, stop all services, power off the blade server, and then remove the blade server from the chassis.

·     When disassembling, transporting, or placing the blade server, do not use excessive force. Make sure you use even force and move the device slowly.

·     Place the blade server on a clean, stable workbench or floor for servicing.

·     To avoid being burnt, allow the blade server and its internal modules to cool before touching them.

Electrical safety

Clear the work area of possible electricity hazards, such as ungrounded chassis, missing safety grounds, and wet work areas.

ESD prevention

Electrostatic charges that build up on people and other conductors might damage or shorten the lifespan of the network adapter.

Preventing electrostatic discharge

To prevent electrostatic damage, follow these guidelines:

·     Transport or store the network adapter in an antistatic bag.

·     Keep the network adapters in antistatic bags until they arrive at an ESD-protected area.

·     Place the network adapter on an antistatic workbench before removing it from its antistatic bag.

·     Install the network adapter immediately after you remove it from its antistatic bag.

·     Avoid touching pins, leads, or circuitry.

·     Put away the removed network adapter in an antistatic bag immediately and keep it secure for future use.

Grounding methods to prevent electrostatic discharge

The following are grounding methods that you can use to prevent electrostatic discharge:

·     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

·     Take adequate personal grounding measures, including wearing antistatic clothing and static dissipative shoes.

·     Use conductive field service tools.

·     Use a portable field service kit with a folding static-dissipating work mat.


Configuring the network adapter

The figures in this section are for illustration only.

Viewing mapping relations between network adapter ports and ICM ports

For detailed information about ICM and network adapter connections, contact Technical Support.

To view the actual mapping relations between the network adapter ports and ICM ports, log in to OM and access the Compute Nodes > Port Mapping page.

Verifying the identification status of the network adapter in the operating system

This section describes how to verify whether the network adapter ports have been identified in the operating system. It uses CentOS 7.5 and Windows Server 2016 as examples.

Linux operating systems

1.     Execute the lspci | grep MT27710 command to view PCI device information for the ETH640i network adapter.

As shown in Figure 1, two PCIe devices represent the two ports on the network adapter.

Figure 1 Viewing PCI device information

 

2.     Execute the ifconfig -a command to verify that the two network adapter ports have been identified. The port names are determined by the operating system naming rule. If no ports are identified, install the most recent driver and try again. For more information, see "Installing and removing a network adapter driver in the operating system."

Figure 2 Viewing information about network adapter ports

 
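The following is a minimal command sketch of steps 1 and 2. The port names reported by ifconfig depend on the operating system naming rule, so any names that appear are examples only. The ip link show command is an alternative check not mentioned above, for systems without the ifconfig tool.

lspci | grep MT27710      # the two PCIe functions of the adapter must be listed
ifconfig -a               # list all interfaces, including ports without an IP address
ip link show              # alternative interface listing if ifconfig is unavailable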

Windows operating systems

1.     Open Network Connections and verify that the Mellanox network adapters can be displayed correctly.

Figure 3 Viewing network adapters

 

2.     If the network adapter is not displayed, open Device Manager, and examine if a PCI device exists in the Network adapters > Other devices window.

¡     If a PCI device exists, an error has occurred on the driver. Install the most recent driver and try again. For more information, see "Installing and removing a network adapter driver in the operating system."

¡     If no PCI devices exist, verify that the network adapter is installed securely.

Figure 4 Viewing network adapters

 

Installing and removing a network adapter driver in the operating system

The driver used by the network adapter and the installation method for the driver vary by operating system. This section uses CentOS 7.5 and Windows Server 2016 as examples.

Linux operating systems

1.     Execute the modinfo mlx5_core command to view the current driver version.

Figure 5 Viewing the driver version

 

2.     If the driver is provided as a .tgz package, install it as follows:

a.     Copy the .tgz driver file (for example, MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.5-x86_64.tgz) to the operating system.

b.     Execute the tar -xvf file_name.tgz command to decompress the driver.

c.     Execute the cd MLNX_OFED_LINUX-<ver> command to access the directory of the source code package and install the driver.

Figure 6 Installing the driver

 

d.     After the installation finishes, restart the operating system or execute the rmmod mlx5_core and modprobe mlx5_core commands to reload the driver for the driver to take effect.

e.     Execute the modinfo mlx5_core or ethtool -i ethX command to verify that the driver version is correct.

The ethX argument represents the port on the network adapter.

Figure 7 Verifying the driver version

 

3.     To uninstall the driver, run the ./uninstall.sh script in the driver installation directory. Restart the operating system or execute the rmmod mlx5_core and modprobe mlx5_core commands to reload the previous driver.
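The following is a minimal sketch of the installation sequence described above, using the example package name from step 2a. The installer script name (mlnxofedinstall) is an assumption based on typical Mellanox OFED packages; confirm the actual script name against the contents of the extracted directory.

tar -xvf MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.5-x86_64.tgz
cd MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.5-x86_64
./mlnxofedinstall                        # assumed installer script name inside the package
rmmod mlx5_core && modprobe mlx5_core    # reload the driver without rebooting
modinfo mlx5_core | grep ^version        # verify the installed driver version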

Windows operating systems

1.     Verify the current driver for the network adapter.

a.     Click the Start icon to enter the menu.

b.     Select Control Panel > Hardware > Device Manager.

Figure 8 Opening Device Manager

 

c.     Right-click the port on the network adapter, and then select Properties > Driver.

Figure 9 Device Manager

 

2.     Install the driver.

a.     Obtain the driver from the H3C official website.

b.     Double-click the driver and then click Next >.

Figure 10 Installing the driver

 

c.     After the installation finishes, restart the operating system to have the driver take effect.

d.     Verify that the driver version has been updated.

Figure 11 Verifying the driver version

 

3.     Remove the driver.

a.     Click the Start icon to enter the menu page.

b.     Select Control Panel > Hardware > Device Manager.

c.     Right-click the network adapter whose driver is to be removed, select Properties > Driver, and then click Uninstall.

Figure 12 Removing a driver

 

Configuring PXE

This section describes how to enable PXE on a network adapter in the BIOS. To use the PXE feature, you must set up a PXE server. You can obtain the setup method for a PXE server from the Internet.

To configure PXE:

1.     During startup of the server, press Delete or ESC as prompted to enter the BIOS Setup utility.

2.     Enable PXE.

In UEFI boot mode:

a.     Click the Advanced tab, select Network Stack Configuration, and then press Enter.

Figure 13 Advanced page

 

b.     Set Ipv4 PXE Support and Ipv6 PXE Support to Enabled.

Figure 14 Enabling PXE in UEFI mode

 

In Legacy boot mode:

a.     Click the Advanced tab, select Network_Adapter > NIC Configuration, and then press Enter.

b.     Set Legacy Boot Protocol to PXE.

Figure 15 Advanced page

 

3.     Press F4 to save the configuration.

The server restarts automatically. During startup, press F12 at the POST phase to boot the server from PXE.

Configuring iSCSI SAN

The iSCSI feature works together with a remote network storage device. The configuration methods for network storage devices vary by device. For more information, see the related documents for the storage device. This document describes only the configuration on the local server.

This document uses Windows and RHEL 7.5 as examples to describe how to configure iSCSI SAN for the network adapter.

Configuring iSCSI SAN in a Windows operating system

1.     Assign an IP address to the network interface on the network adapter that connects to the iSCSI network storage device. Make sure the host and iSCSI storage device can reach each other.

Figure 16 Configuring the local IP address

 

2.     Enable and configure iSCSI.

a.     Open Control Panel, and then click iSCSI Initiator. Click OK on the dialog box that opens.

Figure 17 Clicking iSCSI Initiator

 

b.     Click the Configuration tab, click Change, and then configure the name of the local iSCSI initiator.

Figure 18 Configuring the name of the iSCSI initiator

 

c.     Click the Targets tab, specify the peer storage IP address, and click Quick Connect to scan address information about the peer device (network storage device).

Figure 19 Adding the address information about the peer device

 

d.     Click the Targets tab and view the status of the discovered target. If the status is inactive, click Connect to change the target status to Connected. Then, close the dialog box.

Figure 20 Connecting the target

 

3.     Add the network disk.

Before adding the network disk, make sure the related configuration has been completed on the network storage device.

a.     Open Control Panel, and then select Hardware > Device Manager > Network adapters. Right-click the network adapter port, and then select Scan for hardware changes.

Figure 21 Scanning iSCSI network storage device

 

b.     Click the Start icon in the bottom-left corner of the screen, and then open Disk Management to verify that a disk in Unknown state is displayed.

Figure 22 Disk Management

 

c.     Right-click the disk name, and then select Online.

Figure 23 Bringing the disk online

 

d.     Right-click the disk name, and then select Initialize Disk.

Figure 24 Initializing the disk

 

e.     Right-click the unallocated area and assign a volume to the disk as prompted.

Figure 25 Assigning a volume to the disk

 

Figure 26 Volume assignment completed

 

4.     Verify that the new volume has been added.

Figure 27 Verifying the new volume

 

Configuring iSCSI SAN in a Red Hat system

Before configuring iSCSI SAN, make sure the iSCSI client software package has been installed on the server.

To configure iSCSI SAN in RHEL 7.5:

1.     Assign an IP address to the network interface which connects to the iSCSI network storage device. Make sure the server and the iSCSI storage device can reach each other.

Figure 28 Configuring the local IP address

 

2.     Execute the cat initiatorname.iscsi command in the /etc/iscsi directory to view the IQN of the local iSCSI initiator. If no IQN is specified, use the vi command to specify one manually.

Figure 29 Configuring the name of the local iSCSI initiator

 

3.     Execute the iscsiadm -m discovery -t st -p target-ip command to probe the IQN of the iSCSI target (peer iSCSI storage device). The target-ip argument represents the IP address of the peer iSCSI storage device.

Figure 30 Probing the IQN of the iSCSI target

 

4.     Execute the iscsiadm -m node -T iqn-name -p target-ip -l command to connect the iSCSI target. The iqn-name argument represents the IQN of the iSCSI target. The target-ip argument represents the IP address of the iSCSI target.

Figure 31 Connecting the iSCSI target

 

 

NOTE:

·     To disconnect the iSCSI target, execute the iscsiadm -m node -T iqn-name -p target-ip -u command.

·     To delete the iSCSI target node information, execute the iscsiadm -m node -o delete -T iqn-name -p target-ip command.

 

5.     Execute the lsblk command to view the newly-added network disks.

Before viewing the newly-added network disks, make sure related configuration has been finished on the network storage device.

Figure 32 Viewing the newly-added network disks

 

 

NOTE:

In this example, two volumes have been created on the storage server so that two network disks are added.

 

6.     Execute the mkfs command to format the newly-added disks.

Figure 33 Formatting a newly-added disk

 

7.     Execute the mount command to mount the disk.

Figure 34 Mounting the disk

 
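The following consolidates steps 2 through 7 into one command sequence for reference. The IP address, IQN, mount point, and device name are hypothetical placeholders; the file system type (ext4) is an example choice for the mkfs command.

cat /etc/iscsi/initiatorname.iscsi                                       # view or set the local initiator IQN
iscsiadm -m discovery -t st -p 192.168.50.100                            # probe targets on the storage device
iscsiadm -m node -T iqn.example:storage.target01 -p 192.168.50.100 -l    # log in to the discovered target
lsblk                                                                    # the new network disk appears, for example /dev/sdb
mkfs.ext4 /dev/sdb                                                       # format the new disk
mount /dev/sdb /mnt                                                      # mount the disk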

Configuring SR-IOV

1.     Enter the BIOS Setup utility.

2.     Select Advanced > PCI Subsystem Settings, and then press Enter.

Figure 35 Advanced screen

 

3.     Select SR-IOV Support and set it to Enabled. Press ESC until you return to the BIOS Setup main screen.

Figure 36 Setting SR-IOV Support to Enabled

 

4.     Select Socket Configuration > IIO Configuration > Intel® VT for Directed I/O (VT-d), and then press Enter.

Figure 37 Socket Configuration screen

 

5.     Select Intel® VT for Directed I/O (VT-d) and set it to Enable. Press ESC until you return to the BIOS Setup main screen.

Figure 38 Intel® VT for Directed I/O (VT-d) screen

 

6.     Click the Advanced tab, select a port of the network adapter, and press Enter. Select Device Level Configuration and set Virtualization Mode to SR-IOV. Perform the task for both ports of the network adapter. Save the configuration and restart the server.

Figure 39 Enabling Virtualization Mode

 

7.     During startup, press E. Press the arrow keys to turn pages. Add intel_iommu=on to the specified position to enable IOMMU. Press Ctrl-x to continue to start the server.

Figure 40 Enabling IOMMU

 

8.     After you enter the system, execute the dmesg | grep IOMMU command to verify that IOMMU is enabled.

Figure 41 Verifying that IOMMU is enabled

 

9.     Execute the echo NUM > /sys/class/infiniband/mlx5_X/device/sriov_numvfs command to assign a specified number of VFs to a PF port.

The NUM argument represents the number of VFs to be assigned. The mlx5_X argument represents the PF port name.

Figure 42 Assigning VFs to a PF port

 

10.     Execute the lspci | grep MT27710 command to verify that VFs have been assigned successfully.

Figure 43 Verifying VF assignment

 

11.     Execute the virt-manager command to run the VM manager. Select File > New Virtual Machine to create a VM.

Figure 44 Creating a VM

 

12.     On the New Virtual Machine page, add a virtual NIC as instructed by the callouts in Figure 45.

Figure 45 Adding a virtual NIC

 

13.     Install the vNIC driver and execute the ifconfig ethVF hw ether xx:xx:xx:xx:xx:xx command to configure a MAC address for the vNIC. The ethVF argument represents the virtual NIC name. The xx:xx:xx:xx:xx:xx argument represents the MAC address.
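As a reference for steps 9 through 13, the following is a hedged sketch of the host-side commands. The PF name mlx5_0, the interface name ens1f0, the VF count, and the MAC address are hypothetical examples; setting the VF MAC with ip link from the host is an alternative to the in-VM ifconfig method in step 13.

echo 4 > /sys/class/infiniband/mlx5_0/device/sriov_numvfs   # assign 4 VFs to the PF
lspci | grep MT27710                                        # the new Virtual Function devices must be listed
ip link set dev ens1f0 vf 0 mac 02:11:22:33:44:55           # optional: set the MAC of VF 0 from the host
ifconfig ethVF hw ether 02:11:22:33:44:55                   # or set the MAC inside the VM, as in step 13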

Configuring advanced features

Configuring VLAN (802.1Q VLAN)

This section uses RHEL 7.5 as an example.

To configure 802.1Q VLAN in the operating system:

1.     Execute the modprobe 8021q command to load the 802.1Q module.

2.     Execute the ip link add link ethX name ethX.id type vlan id id command to create a VLAN interface on a physical port. The ethX argument represents the physical port name. The id argument represents the VLAN ID.

3.     Execute the ip -d link show ethX.id command to verify that the VLAN interface has been created successfully.

Figure 46 Creating a VLAN interface

 

4.     Execute the ip addr add ipaddr/mask brd brdaddr dev ethX.id and ip link set dev ethX.id up commands to assign an IP address to the VLAN interface and set the VLAN interface state to UP, respectively. The ipaddr/mask argument represents the IP address and mask of the VLAN interface. The brdaddr argument represents the broadcast address. The ethX.id argument represents the VLAN interface name.

To delete a VLAN interface, execute the ip link set dev ethX.id down and ip link delete ethX.id commands.

Figure 47 Assigning an IP address to the VLAN interface and setting the VLAN interface state to UP

 
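A worked example of the procedure above, assuming a hypothetical physical port ens1f0, VLAN ID 100, and IP address 192.168.100.10/24:

modprobe 8021q                                                    # load the 802.1Q module
ip link add link ens1f0 name ens1f0.100 type vlan id 100          # create the VLAN interface
ip -d link show ens1f0.100                                        # verify the VLAN interface
ip addr add 192.168.100.10/24 brd 192.168.100.255 dev ens1f0.100  # assign an IP address
ip link set dev ens1f0.100 up                                     # bring the VLAN interface up
# To delete the VLAN interface:
ip link set dev ens1f0.100 down
ip link delete ens1f0.100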

Configuring 802.1Q double-tagging

1.     Use either of the following methods to add a VLAN tag to a VF on a network adapter port.

¡     Use the sysfs command:

echo '100:0:802.1q' > /sys/class/net/ens1f0/device/sriov/0/vlan

¡     Use the ip link command:

ip link set dev ens1f0 vf 0 vlan 100

This method requires the most recent kernel version.

2.     Execute the ip link show command to verify the configuration result.

Figure 48 Verifying the configuration result

 

3.     Assign the VF configured with VLAN to a VM, and then create another VLAN for the interface of the VM.

Figure 49 Creating another VLAN

 

4.     Configure another VM in the same way. The two VMs can communicate with each other in double-tagging format.
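As an illustration of steps 1 through 3, the following sketch adds outer VLAN 100 to VF 0 on the host and creates inner VLAN 200 inside the VM. The interface names ens1f0 and eth0 and the VLAN IDs are hypothetical examples.

# On the host: add outer VLAN 100 to VF 0 of port ens1f0
ip link set dev ens1f0 vf 0 vlan 100
ip link show ens1f0                                   # verify that vf 0 shows vlan 100
# Inside the VM: create inner VLAN 200 on the VF interface (eth0 in this example)
ip link add link eth0 name eth0.200 type vlan id 200
ip link set dev eth0.200 up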

Configuring bonding (Linux)

This section uses RHEL 7.5 as an example to describe how to configure bonding in mode 6.

To configure bonding in mode 6:

1.     Execute the vi ifcfg-bond0 command in the /etc/sysconfig/network-scripts/ directory to create a configuration file for bond0 and add the following information:

BOOTPROTO=static

DEVICE=bond0

NAME=bond0

TYPE=Bond

BONDING_MASTER=yes

ONBOOT=yes

IPADDR=192.168.50.88  //Configure the interface IP address for bond0

PREFIX=24  //Configure the subnet mask

GATEWAY=

DNS=

BONDING_OPTS="miimon=100 mode=6"  //Set the detection interval to 100 ms and the bonding mode to 6

Figure 50 Configuring bond0

 

2.     Edit the configuration file for a slave interface. Execute the vi ifcfg-ethX command and add the following information to the configuration file (a complete example file is sketched after this procedure):

ONBOOT=yes

MASTER=bond0

SLAVE=yes

For other slave interfaces to be added to bond0, repeat this step.

Figure 51 Editing the configuration file for a slave interface

 

3.     Execute the service network restart command to restart the network service and have bond0 take effect.

Figure 52 Restarting the network service

 

4.     Execute the cat /proc/net/bonding/bond0 command to view information about bond0 and the network adapter. In this example, bond0 and the two slave interfaces are all in up state.

Figure 53 Viewing information about bond0

 

Figure 54 Viewing information about the network adapter (1)

 

Figure 55 Viewing information about the network adapter (2)

 
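For reference, a complete slave interface configuration file for step 2 might look as follows. The slave port name ens1f0 is a hypothetical example, and the file would reside in /etc/sysconfig/network-scripts/ifcfg-ens1f0.

DEVICE=ens1f0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes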

Configuring teaming (Windows)

1.     Open Server Manager, and then select Local Server > NIC Teaming > Disabled to enter the NIC Teaming page.

Figure 56 Entering the NIC Teaming page

 

2.     Select TASKS > New Team to create a team.

Figure 57 Creating a team

 

3.     Configure the team name and select the network adapters to be added to the team. Select Additional properties, configure the properties, and then click OK.

Team creation in Switch Independent mode takes a long time.

Figure 58 Configuring a new team

 

4.     After team creation finishes, you can view the new team adapter (named 111 in this example) on the Network Connections page.

Figure 59 Viewing the new network adapter

 

Configuring TCP offloading

1.     Execute the ethtool -k ethX command to view the support and enabling state of the offload features. The ethX argument represents the port name of the network adapter.

Figure 60 Viewing the support and enabling state for the offload features

 

2.     Execute the ethtool -K ethX feature on/off command to enable (on) or disable (off) an offload feature. The ethX argument represents the port name of the network adapter. The feature argument represents the offload feature name, such as tso, lso, lro, gso, or gro.

Figure 61 Disabling offload features

 
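A brief example of the two steps above, assuming a hypothetical port name ens1f0 and using the tso and gro features:

ethtool -k ens1f0 | grep -E 'segmentation|receive-offload'   # view the offload feature states
ethtool -K ens1f0 tso off                                    # disable TCP segmentation offload
ethtool -K ens1f0 gro on                                     # enable generic receive offload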


Appendix A  Specifications and features

The ETH640i network adapter (product model: NIC-ETH640i-Mb-2*25G) is an Ethernet adapter that provides two 25-GE ports. It can be applied to the B16000 blade server chassis to provide network interfaces connecting blade servers to ICMs. The network adapter exchanges data with blade servers by using PCIe 3.0 x8 channels and uses the two 25-GE ports to connect to the ICMs through the mid plane. It supports applications such as NIC and iSCSI.

Figures in this section are for illustration only.

Network adapter view

The ETH640i network adapter can be applied to 2-processor half-width, 2-processor full-width, and 4-processor full-width blade servers. For the installation positions of the network adapter, see "Compatible blade servers."

Figure 62 ETH640i network adapter

 

Specifications

Product specifications

Table 1 ETH640i network adapter product specifications

Basic properties:

·     Network adapter type: Ethernet adapter

·     Chip model: Mellanox MT27712A0-FDCF-AE

·     Max power consumption: 10 W

·     Input voltage: 12 VDC

·     Bus type: PCIe 3.0 x8

Network properties:

·     Connectors: 2 × 25G KR

·     Data rate: 25 Gbps

·     Duplex mode: Full duplex

·     Standards: 802.3ba, 802.3ae, 802.1Qaz, 802.1Qap, 802.1Qad, 802.1Q, 802.1p, 802.1Qau, 802.1Qbb, 802.1Qbg

 

Technical specifications

Table 2 ETH640i network adapter technical specifications

Physical parameters:

·     Dimensions (H × W × D): 25.05 × 61.60 × 95.00 mm (0.99 × 2.43 × 3.74 in)

·     Weight: 100 g (3.53 oz)

Environment parameters:

·     Temperature:

¡     Operating: 5°C to 45°C (41°F to 113°F)

¡     Storage: –40°C to +70°C (–40°F to 158°F)

·     Humidity:

¡     Operating: 8% RH to 90% RH, noncondensing

¡     Storage: 5% RH to 95% RH, noncondensing

·     Altitude:

¡     Operating: –60 to +5000 m (–196.85 to +16404.20 ft). The maximum acceptable temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude from 900 m (2952.76 ft).

¡     Storage: –60 to +5000 m (–196.85 to +16404.20 ft)

 

Features

Feature compatibility

Table 3 Features supported by the network adapter

·     Jumbo frames: √

·     Load balancing: √

·     802.1Q VLANs: √

·     QinQ: √

·     Auto negotiation: √

·     PXE Boot: √

·     FCoE: ×

·     FCoE Boot: ×

·     iSCSI: √*

·     iSCSI Boot: ×

·     SR-IOV: √

·     VMDq: √*

·     Multiple Rx Queues (RSS): √*

·     TCP/IP Stateless Offloading: √*

·     TCP/IP Offload Engine (TOE): √*

·     Wake-on-LAN: ×

·     RDMA: √ (RoCE only)

·     NPAR: ×

·     NCSI: ×

·     NIC bonding: √

 

 

NOTE:

An asterisk (*) indicates that the feature is not available for VMware ESXi.

 

Feature description

PXE

The network adapter supports PXE boot. During booting, the network adapter, which acts as the PXE client, obtains an IP address from the PXE server and uses TFTP to download and run the PXE boot file.

iSCSI

Both ports on the network adapter support iSCSI SAN, but do not support iSCSI remote boot.

iSCSI is a storage technology that integrates the SCSI protocol with Ethernet. With iSCSI, devices can transmit SCSI commands and data over the network, so that storage resources can be shared among geographically dispersed equipment rooms. iSCSI supports storage capacity expansion without service interruption and provides storage resources for multiple servers.

SR-IOV

Both ports on the network adapter support SR-IOV.

SR-IOV allows multiple VMs to share consolidated network hardware resources. The provided virtualization functions include I/O sharing, consolidation, isolation, migration, and simplified management. Virtualization might reduce system performance because of hypervisor overhead. To resolve this performance issue, PCI-SIG introduced the SR-IOV standard to create Virtual Functions (VFs). A VF is a lightweight PCIe function assigned directly to a VM, which bypasses the hypervisor layer for the main data path.

A PF is a complete PCIe function, and a VF is a lightweight PCIe function separated from a PF. You can assign a VF to an application. A VF shares the physical device resources and performs I/O without consuming CPU cycles or hypervisor resources.

The network adapter supports a maximum of 127 VFs on each port. You can change the maximum number of VFs from the BIOS.
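To check how many VFs a port currently exposes and how many are assigned, the standard sysfs attributes can be queried from the operating system. The port name mlx5_0 is a hypothetical example.

cat /sys/class/infiniband/mlx5_0/device/sriov_totalvfs   # maximum number of VFs the device currently exposes
cat /sys/class/infiniband/mlx5_0/device/sriov_numvfs     # number of VFs currently assigned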

VLAN

·     802.1Q VLAN

Each port on the network adapter supports a maximum of 4094 VLANs.

A network adapter only transmits packets, and does not tag or untag packets. The VLAN ID is in the range of 1 to 4094 and is assigned by the operating system.

A VLAN is a group of logical devices and users that work at Layer 2 and Layer 3. A VLAN is a broadcast domain, and communication between VLANs requires Layer 3 routers. Compared with a LAN, a VLAN requires less overhead for adds and changes, and it confines broadcasts to enhance network security and provide flexibility.

·     IEEE 802.1ad Provider Bridges (QinQ)

QinQ is the implementation of the IEEE 802.1ad Provider Bridges standard and an extension of the IEEE 802.1Q VLAN. By adding an extra 802.1Q tag (VLAN ID field) to Ethernet frames, QinQ creates VLANs within a VLAN to further isolate traffic.

Bonding (Linux)

Bonding has the following modes:

·     mode=0, round-robin policy (balance-rr)—Transmits packets sequentially across the member devices. This mode is used commonly.

·     mode=1, active-backup policy (active-backup)—Only the master device is in active state. A backup device takes over the services when the master device fails. This mode is used commonly.

·     mode=2, XOR policy (balance-xor)—Transmits packets based on a specified transmission hash policy.

·     mode=3, broadcast policy—Transmits each packet out of every member device. This mode is fault tolerant.

·     mode=4, IEEE 802.3ad dynamic link aggregation (802.3ad)—Creates an aggregation group whose members share the same rated speed and full duplex mode settings. The member device used for outgoing traffic is selected based on the transmission hash policy. In this mode, the switch must support IEEE 802.3ad and requires specific configuration.

·     mode=5, adaptive transmit load balancing (balance-tlb)—Does not require specific switch support. This mode distributes outgoing traffic to member devices according to the device loads. If a member device that is receiving traffic fails, another member device takes over the MAC address of the faulty device.

·     mode=6, adaptive load balancing (balance-alb)—Does not require switch support. This mode combines balance-tlb with receive load balancing for IPv4 traffic, which is realized by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local device and changes the source MAC address into the unique MAC address of a member device in the bond, so that different peers communicate with different MAC addresses. This mode is used commonly.
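In the BONDING_OPTS setting shown earlier, the bonding driver accepts the mode either by number or by name, so the following two lines are equivalent (mode 6). This is a reference sketch; the miimon value is the example used earlier in this document.

BONDING_OPTS="miimon=100 mode=6"
BONDING_OPTS="miimon=100 mode=balance-alb"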

Teaming (Windows)

This section uses the Windows Server 2012 R2 operating system as an example.

NIC teaming has the following modes:

·     Static teaming—A switch-dependent mode in which member NICs must connect to the same physical switch. This mode requires the support of the switch.

·     Switch independent—Member NICs can be connected to different switches in active/standby mode. Load balancing aggregation can be realized only when the member NICs connect to the same switch.

·     LACP—You must enable LACP on the switch first. This mode integrates multiple NICs into one logical link. Data is transmitted at the fastest speed in LACP mode.

After teaming finishes, you must configure the load balancing mode. Load balancing has the following modes:

·     Address hash mode—When a packet arrives at the team, the device uses a hash algorithm to select the physical NIC that sends the packet, based on the destination address information (MAC address, IP address, and port number). This mode cannot control the traffic direction. If a large amount of traffic goes to the same destination address, the traffic is sent by the same physical NIC.

·     Hyper-V port mode—Used for Hyper-V. Compared with the address hash mode, the Hyper-V port mode distributes traffic more efficiently. In this mode, data is transmitted by the physical NICs bound to the vNICs, and the binding is based on vNICs instead of VMs. As a best practice, enable this mode when you use a Hyper-V external virtual switch.

·     Dynamic mode—Introduced in Windows Server 2012 R2 and later. In this mode, data is evenly distributed to all NICs to make full use of bandwidth resources. This mode is the optimal load balancing mode.

TCP offloading

TCP offloading is a TCP acceleration technology. It offloads TCP/IP processing to the network interface controller so that the workload is handled by hardware. On a high-speed Ethernet, for example 10-GE Ethernet, processing TCP/IP packet headers consumes significant CPU resources. Using the NIC hardware to process the headers eases the CPU burden.

Offloading moves some data processing work (for example, fragmentation and reassembly) that would otherwise be done by the operating system to the NIC hardware, which reduces CPU resource consumption and enhances processing performance.

Features related to TCP are as follows:

·     TCP segmentation offload (TSO)—Segments TCP packets.

·     Large segment offload (LSO)/large receive offload (LRO)—When the data to be sent exceeds the specified MTU, the operating system submits a transmission request to the NIC only once. The NIC then automatically segments, encapsulates, and sends the packets. If a large number of fragments are received, LRO assembles multiple fragments into a larger one and submits the larger fragment to the operating system.

·     Generic segmentation offload (GSO) and generic receive offload (GRO)—Automatically detect the features supported by the NIC. If the NIC supports fragmentation, the system sends TCP fragments to the NIC directly. If the network adapter does not support fragmentation, the system fragments the packets first, and then sends the fragments to the NIC.

RDMA

RDMA is a remote direct memory access technology that addresses the data processing delay on servers during network transmission. RDMA transfers data over the network directly into the memory of a remote system, moving data to the remote storage media rapidly without involving the operating system. It reduces copying and context-switching overhead, and frees memory bandwidth and CPU cycles, optimizing application system performance.


Appendix B  Hardware and software compatibility

Compatible operating systems

For operating systems compatible with the network adapter, contact Technical Support.

Compatible blade servers

Table 4 Compatible blade servers

·     H3C UniServer B5700 G3: 2-processor half-width, 3 network adapter slots (Mezz1, Mezz2, Mezz3). For installation positions, see Figure 63.

·     H3C UniServer B5800 G3: 2-processor full-width, 3 network adapter slots (Mezz1, Mezz2, Mezz3). For installation positions, see Figure 64.

·     H3C UniServer B7800 G3: 4-processor full-width, 6 network adapter slots (Mezz1, Mezz2, Mezz3, Mezz4, Mezz5, Mezz6). For installation positions, see Figure 65.

·     H3C UniServer B5700 G5: 2-processor half-width, 3 network adapter slots (Mezz1, Mezz2, Mezz3). For installation positions, see Figure 63.

 

Figure 63 Network adapter installation positions on a 2-processor half-width blade server

 

Figure 64 Network adapter installation positions on a 2-processor full-width blade server

 

Figure 65 Network adapter installation positions on a 4-processor full-width blade server

 

Compatible ICMs

Network adapters and ICM compatibility

The network adapter supports the following ICMs:

·     H3C UniServer BX1010E

·     H3C UniServer BT616E

·     H3C UniServer BT1004E

Network adapter and ICM interconnection

For details about ICM and mezzanine network adapter connections, contact Technical Support.

Mapping relations between network adapter slot and ICM slot

Network adapters connect to ICMs through the mid plane. The mapping relations between a network adapter and ICMs depend on the blade server on which the network adapter resides. For installation locations of ICMs, see Figure 68.

For network adapters installed in a 2-processor half-width or full-width blade server, their mapping relations with ICMs are as shown in Figure 66.

·     Network adapter in Mezz1 is connected to ICMs in slots 1 and 4.

·     Network adapter in Mezz2 is connected to ICMs in slots 2 and 5.

·     Network adapter in Mezz3 is connected to ICMs in slots 3 and 6.

Figure 66 Network adapter and ICM mapping relations (2-processor half-width or full-width blade server)

 

For network adapters installed in a 4-processor full-width blade server, their mapping relations with ICMs are as shown in Figure 67.

·     Network adapters in Mezz1 and Mezz4 are connected to ICMs in slots 1 and 4.

·     Network adapters in Mezz2 and Mezz5 are connected to ICMs in slots 2 and 5.

·     Network adapters in Mezz3 and Mezz6 are connected to ICMs in slots 3 and 6.

Figure 67 Network adapter and ICM mapping relations (4-processor full-width blade server)

 

Figure 68 ICM slots

 

Mapping relations between network adapter port and ICM internal port

For mapping relations between ICM port and mezzanine network adapter port, contact Technical Support.

Networking applications

As shown in Figure 69, the network adapters are connected to the ICMs. Each internal port of the ICMs supports 25-GE service applications, and the external ports are connected to the Internet to provide Internet access for the blade server on which the network adapter resides.

Figure 69 Mezzanine network and ICM interconnection

 


Appendix C  Acronyms

FC: Fibre Channel

FCoE: Fibre Channel over Ethernet

iSCSI: Internet Small Computer System Interface

NCSI: Network Controller Sideband Interface

NPAR: NIC Partitioning

PCIe: Peripheral Component Interconnect Express

PF: Physical Function

PXE: Preboot Execution Environment

RDMA: Remote Direct Memory Access

RoCE: RDMA over Converged Ethernet

SAN: Storage Area Network

SR-IOV: Single Root I/O Virtualization

TCP: Transmission Control Protocol

VF: Virtual Function

VMDq: Virtual Machine Device Queues

 
