ETH681i Mezz Network Adapter User Guide-6W100



Safety information

To avoid bodily injury or device damage, read the following information carefully before you operate the network adapter.

General operating safety

To avoid bodily injury or damage to the device, follow these guidelines when you operate the network adapter:

·     Only H3C authorized or professional engineers are allowed to install or replace the network adapter.

·     Before installing or replacing the network adapter, stop all services, power off the blade server, and then remove the blade server from the chassis.

·     When disassembling, transporting, or placing the blade server, do not use excessive force. Make sure you use even force and move the device slowly.

·     Place the blade server on a clean, stable workbench or floor for servicing.

·     To avoid being burnt, allow the blade server and its internal modules to cool before touching them.

Electrical safety

Clear the work area of possible electricity hazards, such as ungrounded chassis, missing safety grounds, and wet work areas.

ESD prevention

Electrostatic charges that build up on people and other conductors might damage or shorten the lifespan of the network adapter.

Preventing electrostatic discharge

To prevent electrostatic damage, follow these guidelines:

·     Transport or store the network adapter in an antistatic bag.

·     Keep the network adapters in antistatic bags until they arrive at an ESD-protected area.

·     Place the network adapter on an antistatic workbench before removing it from its antistatic bag.

·     Install the network adapter immediately after you remove it from its antistatic bag.

·     Avoid touching pins, leads, or circuitry.

·     Put away the removed network adapter in an antistatic bag immediately and keep it secure for future use.

Grounding methods to prevent electrostatic discharge

The following are grounding methods that you can use to prevent electrostatic discharge:

·     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

·     Take adequate personal grounding measures, including wearing antistatic clothing and static dissipative shoes.

·     Use conductive field service tools.

·     Use a portable field service kit with a folding static-dissipating work mat.


Configuring the network adapter

The figures in this section are for illustration only.

Viewing mapping relations between network adapter ports and ICMs

To view the mapping relations between the network adapter ports and ICM internal ports, log in to OM and access the Blade Servers > Port Mapping page.

Viewing the identification status of network adapter ports in the operating system

This section describes how to verify whether a network adapter port has been identified by the operating system. It uses CentOS 7.5 and Windows Server 2016 as examples.

Linux operating systems

1.     Execute the lspci | grep QL41000 command to view PCI device information for the ETH681i network adapter.

The system identifies a minimum of two PCI devices for each network adapter, which correspond to the two ports on the network adapter.

Figure 1 Viewing PCI device information

 

2.     Execute the ifconfig -a command to verify that the two network adapter ports are identified. The port names are determined by the operating system naming rule. If no ports are identified, install the most recent driver and try again. For more information, see "Installing and uninstalling a network adapter driver in the operating system."

Figure 2 Viewing information about network adapter ports

 

If NPAR is enabled for the network adapter, you can view 16 PCI devices after executing the lspci | grep QL41000 command.

Figure 3 Information about a network adapter enabled with NPAR
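For quick verification, the checks above can be combined into a short command sequence. The following is a minimal sketch: the grep pattern follows the examples in this section, and the port name ens1f0 is an assumed example (use the names your operating system assigns).

# List the PCI functions presented by the ETH681i network adapter
# (two per adapter in default mode, or 16 when NPAR is enabled)
lspci | grep QL41000

# List all network interfaces and confirm that the adapter ports appear
ifconfig -a

# Confirm that an identified port (example name ens1f0) is bound to the qede driver
ethtool -i ens1f0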

 

Windows operating systems

1.     Open Network Connections and verify that the Qlogic FastLinQ QL41202H network adapters are displayed correctly. If they are displayed, the ETH681i network adapter has been identified.

Figure 4 Viewing network adapters

 

2.     If the network adapter is not displayed, open Device Manager, and verify whether an Ethernet controller exists in the Network adapters or Other devices section.

-     If an Ethernet controller exists, an error has occurred on the driver. Install the most recent driver and try again. For more information, see "Installing and uninstalling a network adapter driver in the operating system."

-     If no Ethernet controllers exist, verify that the network adapter is installed securely.

Figure 5 Viewing network adapters

 

Installing and uninstalling a network adapter driver in the operating system

The driver used by the network adapter and the installation method for the driver vary by operating system. This section uses CentOS 7.5 and Windows Server 2016 as examples.

Linux operating systems

Viewing the current driver version

Execute the modinfo qede command to view the current driver version.

Figure 6 Viewing the driver version

 

Installing the driver

1.     If the driver is an .rpm file, you can install it directly:

a.     Copy the RPM driver file (for example, kmod-qlgc-fastlinq-8.38.2.0-1.rhel7u5.x86_64.rpm) to the operating system.

b.     Execute the rpm -ivh file_name.rpm command to install the driver.

Figure 7 Installing the driver

 

c.     After the installation finishes, restart the operating system to have the driver take effect.

d.     Execute the modinfo qede or ethtool -i ethX command to verify that the driver version is correct.

The ethX argument represents the port on the network adapter.

Figure 8 Verifying the driver version

 

2.     If the driver is a .tar.gz compressed file, you must compile it first.

a.     Execute the tar -zxvf fastlinq-<ver>.tar.gz command to decompress the file.

b.     Execute the cd fastlinq-<ver> command to enter the directory of the source file.

c.     Execute the make install command to compile the file and install the driver.

Figure 9 Compiling the file and installing the driver

 

d.     After the installation finishes, restart the operating system or execute the rmmod qede and modprobe qede commands to have the driver take effect.
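The installation procedures above can be summarized in the following command sketch. The package name and the <ver> string are placeholders taken from the examples in this section; substitute the files you actually downloaded.

# Install the RPM driver package, then reboot so the new driver takes effect
rpm -ivh kmod-qlgc-fastlinq-8.38.2.0-1.rhel7u5.x86_64.rpm
reboot

# Or build and install from the source package
tar -zxvf fastlinq-<ver>.tar.gz
cd fastlinq-<ver>
make install

# Reload the qede module instead of rebooting, then check the driver version
rmmod qede
modprobe qede
modinfo qede | grep ^version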

Uninstalling the driver

To uninstall the driver installed from the .rpm file, execute the rpm -e kmod-qlgc-fastlinq command. Then restart the operating system or execute the rmmod qede and modprobe qede commands to reload the original driver.
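A minimal sketch of this uninstall flow is shown below (the package name follows the example above):

# Remove the driver package
rpm -e kmod-qlgc-fastlinq

# Reload the qede module so the driver that remains on the system takes effect
rmmod qede
modprobe qede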

Windows operating systems

Viewing the current driver version

1.     Click the Start icon to enter the menu.

2.     Select Control Panel > Hardware > Device Manager.

Figure 10 Opening Device Manager

 

3.     Right click the port on the network adapter, and then select Properties > Driver.

Figure 11 Device Manager

 

Installing the driver

1.     Obtain the driver from the H3C official website.

2.     Double click the driver and then click Next >.

Figure 12 Installing the driver

 

3.     After the installation finishes, restart the operating system to have the driver take effect.

4.     Verify that the driver version has been updated.

Figure 13 Verifying the driver version

 

Uninstalling the driver

1.     Click the Start icon to enter the menu page.

2.     Select Control Panel > Hardware > Device Manager.

3.     Right click the network adapter whose driver is to be removed, select Properties > Driver, and then click Uninstall.

Figure 14 Removing a driver

 

Configuring PXE

This section describes how to enable PXE on a network adapter in the BIOS. To use the PXE feature, you must set up a PXE server. You can obtain the setup method for a PXE server from the Internet.

To configure PXE:

1.     During startup of the server, press Delete or ESC as prompted to enter the BIOS Setup utility.

2.     To enable PXE in UEFI mode:

a.     Click the Advanced tab, select Network Stack Configuration, and then press Enter.

Figure 15 The Advanced page

 

b.     Set Ipv4 PXE Support and Ipv6 PXE Support to Enabled.

Figure 16 Enabling PXE in UEFI mode

 

3.     To configure PXE for the network adapter:

a.     Click the Advanced tab, select Network Adapter > Port Level Configuration, and then press Enter.

b.     Set Boot Mode to PXE.

Figure 17 Setting Boot Mode to PXE

 

4.     Press F4 to save the configuration.

The server restarts automatically. During startup, press F12 at the POST phase to boot the server from PXE.

Configuring iSCSI

The iSCSI feature must cooperate with a remote network storage device. The configuration methods for network storage devices vary by device. For more information, see the related document for the storage device. This document describes only configuration on the local server.

Configuring iSCSI boot

iSCSI boot is supported only in UEFI boot mode.

To configure iSCSI boot:

1.     To configure iSCSI boot in the BIOS, click the Advanced tab and select iSCSI Configuration.

Figure 18 Selecting iSCSI Configuration

 

2.     Configure the IQN, select Add an Attempt, and select a network port based on the MAC address.

 

 

NOTE:

Select the correct mezzanine network adapter slot and port number. For more information, see "Feature compatibility."

 

Figure 19 Mezzanine network adapter configuration

 

3.     Set iSCSI Mode to Enabled. Configure the iSCSI parameters, and then select Save.

For more information about iSCSI parameters, see the OM online help.

Figure 20 Configuring iSCSI

 

4.     On the Save&Exit screen, select Save Changes and Reset.

Figure 21 Saving the configuration and restarting the server

 

5.     Install the operating system (for example, RHEL 7.5). Specify the system disk as the network disk.

a.     Press e to edit the setup parameters.

Figure 22 Pressing e to edit the setup parameters

 

b.     Enter the ip=ibft string after quiet, and then press Ctrl-x.

Figure 23 Adding the ip=ibft string

 

c.     Click INSTALLATION DESTINATION.

Figure 24 Clicking INSTALLATION DESTINATION

 

d.     On the page that opens, click Add a disk to add a network disk.

Figure 25 Adding a network disk

 

e.     Select the target network disk, and click Done at the upper left corner.

The network disk is now specified as the system disk.

Figure 26 Selecting the target network disk

 

You can continue to install the operating system.

Configuring iSCSI SAN

This document uses Windows and RHEL 7.5 as examples to describe how to configure iSCSI SAN for the network adapter.

Windows operating systems

1.     Assign an IP address to the network interface on the network adapter that connects to the iSCSI network storage device. Make sure the host and iSCSI storage device can reach each other.

Figure 27 Configuring the local IP address

 

2.     Enable and configure iSCSI.

a.     Open Control Panel, and then click iSCSI Initiator. Click OK on the dialog box that opens.

Figure 28 Clicking iSCSI Initiator

 

b.     Click the Configuration tab, click Change, and then configure the name of the local iSCSI initiator.

Figure 29 Configuring the name of the iSCSI initiator

 

c.     Click the Discovery tab and click Discover Portals to add the address information about the peer device (network storage device).

Figure 30 Adding the address information about the peer device

 

d.     Click the Targets tab. Click Connect to change the target status to Connected. Then, close the dialog box.

Figure 31 Connecting the target

 

3.     Add the network disk.

Before adding the network disk, make sure the related configuration has been completed on the network storage device.

a.     Open Control Panel, and then select Hardware > Device Manager > Storage controllers. Right click the iSCSI adapter, and then select Scan for hardware changes.

Figure 32 Scanning iSCSI network storage device

 

b.     Click the Start icon and open Disk Management to verify that a disk which is in Unknown state is displayed.

Figure 33 Disk Management

 

c.     Right click the disk name, and then select Online.

Figure 34 Bringing the disk online

 

d.     Right click the disk name, and then select Initialize Disk.

Figure 35 Initializing the disk

 

e.     Right click the Unallocated area to assign a volume to the disk as prompted.

Figure 36 Assigning a volume to the disk

 

Figure 37 Volume assignment completed

 

4.     Open This PC, and verify that the new volume has been added.

Figure 38 Verifying the new volume

 

Red Hat systems

Before configuring iSCSI SAN, make sure the iSCSI client software package has been installed on the server.

To configure iSCSI SAN in RHEL 7.5:

1.     Assign an IP address to the network interface which connects to the iSCSI network storage device. Make sure the server and the iSCSI storage device can reach each other.

Figure 39 Configuring the local IP address

 

2.     Execute the cat initiatorname.iscsi command in the /etc/iscsi directory to view the IQN of the local iSCSI initiator. If no IQN is specified, use the vi command to specify one manually.

Figure 40 Configuring the name of the local iSCSI initiator

 

3.     Execute the iscsiadm -m discovery -t st -p target-ip command to probe the IQN of the iSCSI target (the peer iSCSI storage device). The target-ip argument represents the IP address of the peer iSCSI storage device.

Figure 41 Probing the IQN of the iSCSI target

 

4.     Execute the iscsiadm -m node -T iqn-name -p target-ip -l command to connect the iSCSI target. The iqn-name argument represents the IQN of the iSCSI target. The target-ip argument represents the IP address of the iSCSI target.

Figure 42 Connecting the iSCSI target

 

 

NOTE:

·     To disconnect the iSCSI target, execute the iscsiadm -m node -T iqn-name -p target-ip -u command.

·     To delete the iSCSI target node information, execute the iscsiadm -m node -o delete -T iqn-name -p target-ip command.

 

5.     Execute the lsblk command to view the newly-added network disks.

Before viewing the newly-added network disks, make sure related configuration has been finished on the network storage device.

Figure 43 Viewing the newly-added network disks

 

 

NOTE:

In this example, two volumes have been created on the storage server so that two network disks are added.

 

6.     Execute the mkfs command to format the newly-added disks.

Figure 44 Formatting a newly-added disk

 

7.     Execute the mount command to mount the disk.

Figure 45 Mounting the disk
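The whole procedure can be strung together as the following sketch. The storage address 192.168.50.10, the IQN iqn.2000-01.com.example:storage.lun1, and the device name /dev/sdb are assumed example values; replace them with the values reported by your storage device and by lsblk.

# View (or edit) the local initiator name
cat /etc/iscsi/initiatorname.iscsi

# Discover the targets exposed by the storage device
iscsiadm -m discovery -t st -p 192.168.50.10

# Log in to a discovered target
iscsiadm -m node -T iqn.2000-01.com.example:storage.lun1 -p 192.168.50.10 -l

# Confirm the new block device, then format and mount it
lsblk
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt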

 

Configuring NPAR

1.     Enter the BIOS, click the Advanced tab, and select NIC-ETH681i-Mb-2x25G.

2.     Return to the previous screen, select Partitioning Mode, and change the mode from Default to NPAR.

Figure 46 NPAR configuration screen

 

3.     Access Partitions Configuration.

Figure 47 Mezzanine network adapter configuration screen

 

4.     Configure PF parameters.

Figure 48 Configuring PF parameters

 

5.     Save the configuration and restart the server.

Configuring SR-IOV

1.     Enter the BIOS Setup utility.

2.     Select Advanced > PCI Subsystem Settings, and then press Enter.

Figure 49 Advanced screen

 

3.     Select SR-IOV Support and set it to Enabled. Press ESC until you return to the BIOS Setup main screen.

Figure 50 Setting SR-IOV Support to Enabled

 

4.     Select Socket Configuration > IIO Configuration > Intel® VT for Directed I/O (VT-d), and then press Enter.

Figure 51 Socket Configuration screen

 

5.     Select Intel® VT for Directed I/O (VT-d) and set it to Enable. Press ESC until you return to the BIOS Setup main screen.

Figure 52 Intel® VT for Directed I/O (VT-d) screen

 

6.     Click the Advanced tab and select the first port of the network adapter. Select Device Level Configuration and set SR-IOV to Enabled. Save the configuration and restart the system. Configuration on the first port applies to all ports of all the network adapters.

Figure 53 Enabling SR-IOV

 

7.     During startup, press E. Press the arrow keys to turn pages. Add intel_iommu=on to the specified position to enable IOMMU. Press Ctrl-x to continue to start the server.

Figure 54 Enabling IOMMU

 

8.     After you enter the operating system, execute the dmesg | grep IOMMU command to verify that IOMMU is enabled.

Figure 55 Verifying that IOMMU is enabled

 

9.     Execute the echo NUM > /sys/class/net/ethX/device/sriov_numvfs command to assign a specified number of VFs to a PF port.

The NUM argument represents the number of VFs to be assigned. The ethX argument represents the PF port name. Execute the lspci | grep QL41000 command to verify that VFs have been assigned to the PF port successfully.

Figure 56 Assigning VFs to a PF port
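The following sketch shows the VF assignment and verification commands together. The PF port name ens1f0 and the VF count of 4 are assumed example values.

# Create four VFs on PF port ens1f0
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs

# Verify that the VF PCI functions have been created
lspci | grep QL41000

# View the VFs attached to the PF port
ip link show ens1f0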

 

10.     Execute the virt-manager command to run the VM manager. Select File > New Virtual Machine to create a VM.

Figure 57 Creating a VM

 

11.     On the New Virtual Machine page, add a virtual NIC as instructed by the callouts in Figure 58.

Figure 58 Adding a virtual NIC

 

12.     Install the vNIC driver and execute the ifconfig ethVF hw ether xx:xx:xx:xx:xx:xx command to configure a MAC address for the vNIC. The ethVF argument represents the virtual NIC name. The xx:xx:xx:xx:xx:xx argument represents the MAC address.
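If the ifconfig utility is unavailable, the MAC address can also be set with the ip tool, as in this sketch. The interface names and the MAC address 02:11:22:33:44:55 are assumed example values.

# Inside the VM: assign a MAC address to the vNIC
ip link set dev ethVF address 02:11:22:33:44:55

# Or, on the host: pin the MAC address of VF 0 on PF port ens1f0
ip link set ens1f0 vf 0 mac 02:11:22:33:44:55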

Configuring advanced features

Configuring VLAN (802.1Q VLAN)

This section uses RHEL 7.5 as an example.

To configure 802.1Q VLAN in the operating system:

1.     Execute the modprobe 8021q command to load the 802.1Q module.

2.     Execute the ip link add link ethX name ethX.id type vlan id id command to create a VLAN interface on a physical port. The ethX argument represents the physical port name. The id argument represents the VLAN ID.

3.     Execute the ip -d link show ethX.id command to verify that the VLAN interface has been created successfully.

Figure 59 Creating a VLAN interface

 

4.     Execute the ip addr add ipaddr/mask brd brdaddr dev ethX.id and ip link set dev ethX.id up commands to assign an IP address to the VLAN interface and set the VLAN interface state to UP, respectively. The ipaddr/mask argument represents the IP address and mask of the VLAN interface. The brdaddr argument represents the broadcast address. The ethX.id argument represents the VLAN interface created in the previous step.

To delete a VLAN interface, execute the ip link set dev ethX.id down and ip link delete ethX.id commands.

Figure 60 Assigning an IP address to the VLAN interface and setting the VLAN interface state to UP
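The following sketch walks through the VLAN commands above end to end. The physical port name eth0, the VLAN ID 100, and the IP address 192.168.100.1/24 are assumed example values.

# Load the 802.1Q module
modprobe 8021q

# Create VLAN interface eth0.100 for VLAN 100 on physical port eth0 and verify it
ip link add link eth0 name eth0.100 type vlan id 100
ip -d link show eth0.100

# Assign an IP address and bring the VLAN interface up
ip addr add 192.168.100.1/24 brd 192.168.100.255 dev eth0.100
ip link set dev eth0.100 up

# Delete the VLAN interface when it is no longer needed
ip link set dev eth0.100 down
ip link delete eth0.100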

 

Configuring bonding (Linux)

This section uses RHEL 7.5 as an example to describe how to configure bonding in mode 6.

To configure bonding in mode 6:

1.     Execute the vi ifcfg-bond0 command in the /etc/sysconfig/network-scripts/ directory to create a configuration file for bond0 and add the following information:

BOOTPROTO=static

DEVICE=bond0

NAME=bond0

TYPE=Bond

BONDING_MASTER=yes

ONBOOT=yes

IPADDR=192.168.50.88  //Configure the interface IP address for bond0

PREFIX=24  //Configure the subnet mask

GATEWAY=

DNS=

BONDING_OPTS="miimon=100 mode=6"  //Set the detection interval to 100 ms and the bonding mode to 6

Figure 61 Configuring bond0

 

2.     Edit the configuration file for a slave interface. Execute the vi ifcfg-ethX command and add the following information to the configuration file:

ONBOOT=yes

MASTER=bond0

SLAVE=yes

For other slave interfaces to be added to bond0, repeat this step.

Figure 62 Editing the configuration file for a slave interface
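A minimal example of a slave interface configuration file is shown below. The interface name ens1f0 is an assumption (use the names reported by your system), and the TYPE and BOOTPROTO lines reflect common RHEL 7 settings rather than values given in this guide.

# /etc/sysconfig/network-scripts/ifcfg-ens1f0
DEVICE=ens1f0
NAME=ens1f0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes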

 

3.     Execute the service network restart command to restart the network service and have bond0 take effect.

Figure 63 Restarting the network service

 

4.     Execute the cat /proc/net/bonding/bond0 command to view information about bond0 and the slave network adapter ports. In this example, bond0 and the two slave interfaces are all in up state.

Figure 64 Viewing information about bond0

 

Figure 65 Viewing information about the network adapter (1)

 

Figure 66 Viewing information about the network adapter (2)

 

Configuring teaming (Windows)

1.     Open Server Manager, and then select Local Server > NIC Teaming > Disabled to enter the NIC Teaming page.

Figure 67 Entering the NIC Teaming page

 

2.     Select TASKS > New Team to create a team.

Figure 68 Creating a team

 

3.     Configure the team name and select the network adapters to be added to the team. Select Additional properties, configure the properties, and then click OK.

Team creation in Switch Independent mode takes a long time.

Figure 69 Configuring a new team

 

4.     After team creation finishes, you can view the new team adapter (named 111 in this example) on the Network Connections page.

Figure 70 Viewing the new network adapter

 

Configuring TCP offloading

TCP offloading is a TCP acceleration technology. On a high-speed Ethernet network, for example, a 10-GE network, processing TCP/IP packet headers consumes considerable CPU resources. Using the NIC hardware to process the headers eases the CPU burden.

Offloading moves some data processing work (for example, fragmentation and reassembly) that would otherwise be done by the operating system to the NIC hardware, which reduces CPU resource consumption and enhances processing performance.

Features related to TCP are as follows:

·     TCP segmentation offload (TSO)—Segments TCP packets.

·     Large segment offload (LSO)/large receive offload (LRO)—When the data to be sent exceeds the specified MTU, the operating system submits a transmission request to the NIC only once. The NIC then automatically segments, encapsulates, and sends the data packets. When a large number of fragments are received, LRO assembles multiple fragments into a larger one and submits it to the operating system.

·     Generic segmentation offload (GSO) and generic receive offload (GRO)—Detect the offload features supported by the NIC automatically. If the NIC supports fragmentation, the system sends TCP fragments to the NIC directly. If the NIC does not support fragmentation, the system fragments the packets first, and then sends the fragments to the NIC.

To configure TCP offloading:

1.     Execute the ethtool -k ethX command to view the support and enabling state of the offload features. The ethX argument represents the port name of the network adapter.

Figure 71 Viewing the support and enabling state for the offload features

 

2.     Execute the ethtool -K ethX feature on/off command to enable or disable an offload feature. The ethX argument represents the port name of the network adapter. The feature argument represents the offload feature name, for example, tso, lso, lro, gso, or gro.

Figure 72 Disabling offload features
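As an example, the following commands view and toggle two of the offload features on a port named ens1f0 (an assumed example name):

# Show the offload features supported by the port and their current state
ethtool -k ens1f0

# Disable TSO and GRO, then re-enable them
ethtool -K ens1f0 tso off gro off
ethtool -K ens1f0 tso on gro on

# Confirm the current state of the two features
ethtool -k ens1f0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'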

 


Appendix A  Specifications and features

The ETH681i mezzanine network adapter (product model: NIC-ETH681i-Mb-2*25G) is an Ethernet adapter that provides two 25-GE ports. It can be installed in blade servers in the B16000 chassis to provide the network interfaces that connect the blade servers to ICMs. The network adapter exchanges data with the blade server through a PCIe 3.0 x8 interface and connects to the ICMs through the mid plane over its two 25-GE ports. It supports NIC and iSCSI applications.

Figures in this section are for illustration only.

Network adapter view

The ETH681i mezzanine network adapter can be applied to 2-processor half-width, 2-processor full-width, and 4-processor full-width B16000 blade servers. For the installation positions of the network adapter, see "Compatible blade servers."

Figure 73 ETH681i mezzanine network adapter

 

Specifications

Product specifications

Table 1 ETH681i mezzanine network adapter product specifications

Basic properties:

·     Network adapter type: Ethernet adapter

·     Chip model: Cavium QL41202A-A2G

·     Max power consumption: 13 W

·     Input voltage: 12 VDC

·     Bus type: PCIe 3.0 x8

Network properties:

·     Connectors: 2 × 25G KR

·     Data rate: 25 Gbps

·     Duplex mode: Full duplex

·     Standards: 802.1Qbb, 802.1Qaz, 802.1Qbg, 802.1Qbh, 802.3ad, 802.1Qau, 802.1BR, 802.3by, 802.1AS, 802.1p, 802.1Q

 

Technical specifications

Table 2 ETH681i Mezz network adapter technical specifications

Physical parameters:

·     Dimensions (H × W × D): 25.05 × 61.60 × 95.00 mm (0.99 × 2.43 × 3.74 in)

·     Weight: 100 g (3.53 oz)

Environment parameters:

·     Operating temperature: 5°C to 45°C (41°F to 113°F)

·     Storage temperature: –40°C to +70°C (–40°F to 158°F)

·     Operating humidity: 8% RH to 90% RH, noncondensing

·     Storage humidity: 5% RH to 95% RH, noncondensing

·     Operating altitude: –60 to +5000 m (–196.85 to +16404.20 ft). The maximum acceptable temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude above 900 m (2952.76 ft).

·     Storage altitude: –60 to +5000 m (–196.85 to +16404.20 ft)

 

Features

Feature compatibility

Table 3 Features supported by the network adapter

Feature                              Supported
Jumbo frames                         √
Load balancing                       √
802.1Q VLANs                         √
QinQ                                 ×
Auto negotiation                     √
PXE Boot                             √
FCoE                                 ×
FCoE Boot                            ×
iSCSI                                √
iSCSI Boot                           √* (UEFI only)
SR-IOV                               √
VMDq                                 √*
Multiple Rx Queues (RSS)             √*
TCP/IP Stateless Offloading          √*
TCP/IP Offload Engine (TOE)          √*
Wake-on-LAN                          ×
RDMA                                 √
NPAR                                 √
NCSI                                 ×
NIC bonding                          √

 

 

NOTE:

√* indicates that the feature is not available for VMware ESXi.

 

Feature description

PXE boot

The network adapter supports PXE boot. During booting, the blade server, which acts as the PXE client, obtains an IP address from the PXE server and uses TFTP to download and run the PXE boot file.

iSCSI

Both ports on the network adapter support iSCSI SAN and iSCSI remote boot.

iSCSI is a storage technology that integrates the SCSI protocol with Ethernet. With iSCSI, a device can transmit SCSI commands and data over the network, so that storage resources can be shared among equipment rooms across cities and provinces. iSCSI supports dynamic configuration and storage capacity expansion without service interruption, and provides storage resources for multiple servers.

NPAR

NPAR divides network adapter ports into multiple partitions based on the number of PFs. Each port on the ETH681i network adapter can be divided into eight partitions and a network adapter can be divided into 16 partitions.

SR-IOV

Both ports on the network adapter support SR-IOV.

SR-IOV allows users to consolidate network hardware resources and run multiple VMs on the consolidated hardware. This virtualization technology provides features such as I/O sharing, consolidation, isolation, migration, and simplified management.

Virtualization might degrade performance because of hypervisor overhead. To resolve this performance issue, PCI-SIG introduced SR-IOV to create VFs. SR-IOV assigns lightweight PCIe functions directly to VMs so that the main data path bypasses the hypervisor layer.

A PF provides full PCIe functions. A VF provides lightweight PCIe functions separated from a PF. You can assign a VF to an application. The VFs share physical device resources and perform I/O without consuming CPU cycles or hypervisor resources.

The network adapter supports 0 to 96 VFs for each PF.
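To check how many VFs a PF exposes and how many are currently configured, you can read the sysfs attributes, as in this sketch (the port name ens1f0 is an assumed example):

# Maximum number of VFs supported by this PF
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Number of VFs currently configured
cat /sys/class/net/ens1f0/device/sriov_numvfs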

VLAN (802.1Q VLAN)

Each port on the network adapter supports a maximum of 4094 VLANs.

A network adapter only transmits packets, and does not tag or untag packets. The VLAN ID is in the range of 1 to 4094 and is assigned by the operating system.

A VLAN is a group of logical devices and users that work at Layer 2 and Layer 3 and form a single broadcast domain. Communication between VLANs is implemented by Layer 3 routers. Compared with a LAN, a VLAN involves less overhead when devices are added or changed, and it can confine broadcasts to enhance network security and provide flexibility.

Bonding (Linux)

Bonding has the following modes:

·     mode=0, round-robin policy (balance-rr)—Transmits data packets through the slave devices in sequence. This mode is commonly used.

·     mode=1, active-backup policy (active-backup)—Only the master device is active. A backup device takes over services when the master device fails. This mode is commonly used.

·     mode=2, XOR policy (balance-xor)—Transmits data packets based on a specified transmission hash policy.

·     mode=3, broadcast policy—Transmits data packets out of each slave interface. This mode is fault tolerant.

·     mode=4, IEEE 802.3ad dynamic link aggregation (802.3ad)—Creates an aggregation group whose members share the same rated speed and duplex settings. Slave device selection for outgoing traffic is based on the transmission hash policy. In this mode, the switch must support IEEE 802.3ad and be configured accordingly.

·     mode=5, adaptive transmit load balancing (balance-tlb)—Does not require specific switch support. This mode distributes outgoing traffic to slave devices according to the device loads. If a slave device that is receiving traffic fails, another slave device takes over the MAC address of the faulty device.

·     mode=6, adaptive load balancing (balance-alb)—Does not require switch support. This mode combines balance-tlb with load balancing of received IPv4 traffic, implemented through ARP negotiation. The bonding driver intercepts ARP replies sent by the local device and rewrites the source MAC address with the unique MAC address of a slave device in the bond, so that different peers communicate with different MAC addresses. This mode is commonly used.

Teaming (Windows)

This section uses the Windows Server 2012 R2 operating system as an example.

Typically, NIC Teaming has the following modes:

·     Static Teaming—A switch-dependent mode in which member NICs must connect to the same physical switch. This mode requires the support of switches.

·     Switch independent—Member NICs can be connected to different switches in active/standby mode. Load balancing aggregation can be realized only when the member NICs connect to the same switch.

·     LACP—You must enable LACP on the switch first. This mode integrates multiple NICs into one logical link. Data is transmitted at the fastest speed in LACP mode.

Besides configuring the Teaming mode, you must also configure the load balancing mode. Load balancing has the following modes:

·     Address Hash mode—When a packet arrives at the team, the device uses a hash algorithm on the destination address information (MAC address, IP address, and port number) to select the physical NIC that sends the packet. This mode cannot control the traffic direction. If a large amount of traffic goes to the same destination address, all of that traffic is sent by the same physical NIC.

·     Hyper-V port mode—Transmits data out of the physical NICs bound to virtual NICs on a per-virtual-NIC basis instead of a per-VM basis. This mode is more efficient than the Address Hash mode. As a best practice, enable this mode when you use a Hyper-V external virtual switch.

·     Dynamic mode—First introduced in Windows Server 2016. In this mode, data is evenly distributed across all NICs to make full use of bandwidth resources. This mode is the optimal load balancing mode.

TCP offloading

TCP offloading is a TCP acceleration technology. It offloads TCP/IP stack workloads to the network interface controller and uses hardware to process them. On a high-speed Ethernet network, for example, a 10-GE network, processing TCP/IP packet headers consumes considerable CPU resources. Using the NIC hardware to process the headers eases the CPU burden.

RDMA

RDMA is a remote direct memory access technology that addresses data processing delays on the server during network transmission. It moves data directly from the memory of one system to the memory of a remote system over the network, without involving the operating system on either side. This eliminates copy and context-switching overhead, frees memory bandwidth and CPU cycles, and improves application performance.


Appendix B  Hardware and software compatibility

Compatible blade servers

Table 4 Compatible blade servers

·     H3C UniServer B5700 G3: 2-processor half-width, 3 network adapter slots (Mezz1, Mezz2, Mezz3). For installation positions, see Figure 74.

·     H3C UniServer B5800 G3: 2-processor full-width, 3 network adapter slots (Mezz1, Mezz2, Mezz3). For installation positions, see Figure 75.

·     H3C UniServer B7800 G3: 4-processor full-width, 6 network adapter slots (Mezz1, Mezz2, Mezz3, Mezz4, Mezz5, Mezz6). For installation positions, see Figure 76.

·     H3C UniServer B5700 G5: 2-processor half-width, 3 network adapter slots (Mezz1, Mezz2, Mezz3). For installation positions, see Figure 74.

 

Figure 74 Network adapter installation positions on a 2-processor half-width blade server

 

Figure 75 Network adapter installation positions on a 2-processor full-width blade server

 

Figure 76 Network adapter installation positions on a 4-processor full-width blade server

 

Compatible ICMs

Network adapters and ICM compatibility

The network adapter supports the following ICMs:

·     H3C UniServer BX1010E

·     H3C UniServer BT616E

·     H3C UniServer BT1004E

Network adapter and ICM interconnection

Network adapters connect to ICMs through the mid plane. The mapping relations between a network adapter and ICMs depend on the blade server on which the network adapter resides. For installation locations of ICMs, see Figure 79.

For network adapters installed in a 2-processor half-width or full-width blade server, their mapping relations with ICMs are as shown in Figure 77.

·     Network adapter in Mezz1 is connected to ICMs in slots 1 and 4.

·     Network adapter in Mezz2 is connected to ICMs in slots 2 and 5.

·     Network adapter in Mezz3 is connected to ICMs in slots 3 and 6.

Figure 77 Network adapter and ICM mapping relations (2-processor half-width or full-width blade server)

 

For network adapters installed in a 4-processor full-width blade server, their mapping relations with ICMs are as shown in Figure 78.

·     Network adapters in Mezz1 and Mezz4 are connected to ICMs in slots 1 and 4.

·     Network adapters in Mezz2 and Mezz5 are connected to ICMs in slots 2 and 5.

·     Network adapters in Mezz3 and Mezz6 are connected to ICMs in slots 3 and 6.

Figure 78 Network adapter and ICM mapping relations (4-processor full-width blade server)

 

Figure 79 ICM slots

 

Networking applications

As shown in Figure 80, the network adapter connects to the ICMs. Each internal port of an ICM supports 25-GE service applications, and the external ports connect to the Internet to provide Internet access for the blade server on which the network adapter resides.

Figure 80 Mezzanine network and ICM interconnection

 


Appendix C  Acronyms

Acronym        Full name
FC             Fibre Channel
FCoE           Fibre Channel over Ethernet
iSCSI          Internet Small Computer System Interface
NCSI           Network Controller Sideband Interface
NPAR           NIC Partitioning
PCIe           Peripheral Component Interconnect Express
PF             Physical Function
PXE            Preboot Execution Environment
RDMA           Remote Direct Memory Access
RoCE           RDMA over Converged Ethernet
SAN            Storage Area Network
SR-IOV         Single Root I/O Virtualization
TCP            Transmission Control Protocol
VF             Virtual Function
VMDq           Virtual Machine Device Queues

 
