Contents
Configuring the network adapter
Viewing mapping relations between network adapter ports and ICM ports
Verifying the identification status of network adapter ports in the operating system
Installing and removing a network adapter driver in the operating system
iSCSI boot cannot be used to install H3C CAS
Port goes down after the network adapter speed is set to 1 Gbps
Virtual port created through NPAR configuration cannot be switched to FCoE mode
Appendix A Specifications and features
Appendix B Hardware and software compatibility
Network adapters and ICM compatibility
Network adapter and ICM interconnection
Safety information
To avoid bodily injury or device damage, read the following information carefully before you operate the network adapter.
General operating safety
To avoid bodily injury or damage to the device, follow these guidelines when you operate the network adapter:
· Only H3C authorized or professional engineers are allowed to install or replace the network adapter.
· Before installing or replacing the network adapter, stop all services, power off the blade server, and then remove the blade server.
· When disassembling, transporting, or placing the blade server, do not use excessive force. Make sure you use even force and move the device slowly.
· Place the blade server on a clean, stable workbench or floor for servicing.
· To avoid being burnt, allow the blade server and its internal modules to cool before touching them.
Electrical safety
Clear the work area of possible electricity hazards, such as ungrounded chassis, missing safety grounds, and wet work area.
ESD prevention
Preventing electrostatic discharge
To prevent electrostatic damage, follow these guidelines:
· Transport or store the network adapter in an antistatic bag.
· Keep the network adapters in antistatic bags until they arrive at an ESD-protected area.
· Place the network adapter on an antistatic workbench before removing it from its antistatic bag.
· Install the network adapter immediately after you remove it from its antistatic bag.
· Avoid touching pins, leads, or circuitry.
· Put away the removed network adapter in an antistatic bag immediately and keep it secure for future use.
Grounding methods to prevent electrostatic discharge
The following are grounding methods that you can use to prevent electrostatic discharge:
· Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.
· Take adequate personal grounding measures, including wearing antistatic clothing and static dissipative shoes.
· Use conductive field service tools.
· Use a portable field service kit with a folding static-dissipating work mat.
Configuring the network adapter
The figures in this section are for illustration only.
Viewing mapping relations between network adapter ports and ICM ports
To view the mapping relations between the network adapter ports and ICM ports, log in to OM and access the Compute Nodes > Port Mapping page.
Verifying the identification status of network adapter ports in the operating system
This section describes how to verify if the network adapter ports have been identified by the operating system. It uses CentOS 7.4 and Windows Server 2012 R2 as examples.
Linux operating systems
1. Execute the lspci | grep BCM57840 command to view PCI device information for the ETH521i network adapter. The four PCI devices represent the four ports of the network adapter.
Figure 1 Viewing PCI device information
2. Execute the ifconfig -a command to verify that the four network adapter ports are recognized. The port names are determined by the operating system naming rule. If no ports are recognized, install the most recent driver and try again. For more information, see "Installing and removing a network adapter driver in the operating system."
Figure 2 Viewing network adapter port information
If NPAR is enabled on the network adapter, the output from the lspci | grep BCM57840 and ifconfig -a commands displays eight PCI devices and eight NIC ports, respectively.
Figure 3 Command output for a network adapter enabled with NPAR
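To recheck identification quickly, you can run both commands back to back. This is a minimal sketch; the expected counts are those described above.
# List the PCI functions of the ETH521i network adapter (four entries, or eight with NPAR enabled)
lspci | grep BCM57840
# List the NIC ports recognized by the operating system (four ports, or eight with NPAR enabled)
ifconfig -a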
Windows operating systems
1. Open Network Connections and verify that the QLogic 57840 adapters are displayed correctly, which indicates that the ETH521i network adapter is installed correctly.
Figure 4 Viewing network adapters
2. If the network adapter is not displayed, open Device Manager, and examine if an Ethernet controller exists in the Network adapters > Other devices window.
¡ If an Ethernet controller exists, an error has occurred on the driver. Install the most recent driver and try again. For more information, see "Installing and removing a network adapter driver in the operating system."
¡ If no Ethernet controllers exist, verify that the network adapter is installed securely.
Figure 5 Viewing network adapters
Installing and removing a network adapter driver in the operating system
The driver used by the network adapter and its installation method vary by operating system. This section uses CentOS 7.4 and Windows Server 2012 R2 as examples.
Linux operating systems
1. Execute the modinfo bnx2x command to view the current driver version.
Figure 6 Viewing the driver version
2. If the driver is an .rpm package, install it directly as follows:
a. Copy the RPM driver file (for example, kmod-netxtreme2-7.14.46-1.rhel7u4.x86_64.rpm) to the operating system.
b. Execute the rpm -ivh file_name.rpm command to install the driver.
Figure 7 Installing the driver
c. After the installation finishes, restart the operating system or execute the rmmod bnx2x and modprobe bnx2x commands to reload the driver so that the new driver takes effect.
d. Execute the modinfo bnx2x or ethtool -i ethX command to verify that the driver version is correct.
The ethX argument represents the port on the network adapter.
Figure 8 Verifying the driver version
3. If the driver is a .tar.gz compressed file, you must compile it first.
a. Execute the tar -zxvf netxtreme2-<ver>.tar.gz command to decompress the file.
b. Execute the cd netxtreme2-<ver> command to enter the directory of the source file.
c. Execute the make install command to compile the file and install the driver.
Figure 9 Compiling the file and installing the driver
d. After the installation finishes, restart the operating system or execute the rmmod bnx2x and modprobe bnx2x commands to reload the driver so that the new driver takes effect.
4. To uninstall the driver, execute the rpm -e kmod-netxtreme2 command. Restart the operating system or execute the rmmod bnx2x and modprobe bnx2x commands to load the old driver. A consolidated sketch of the install, reload, and uninstall commands follows.
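The sketch uses the example package file name from this section; eth0 is a placeholder port name.
# View the currently loaded bnx2x driver version
modinfo bnx2x
# Install the RPM driver package
rpm -ivh kmod-netxtreme2-7.14.46-1.rhel7u4.x86_64.rpm
# Reload the driver so that the new version takes effect (or restart the operating system)
rmmod bnx2x
modprobe bnx2x
# Verify the driver version on a port
ethtool -i eth0
# To uninstall the driver later
rpm -e kmod-netxtreme2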
Windows operating systems
1. Verify the current driver for the network adapter.
a. Click the Start icon to enter the menu.
b. Select Control Panel > Hardware > Device Manager.
Figure 10 Opening Device Manager
c. Right click the port on the network adapter, and then select Properties > Driver.
Figure 11 Device Manager
2. Install the driver.
a. Obtain the driver from the H3C official website.
b. Double click the driver and then click Next >.
Figure 12 Installing the driver
c. After the installation finishes, restart the operating system to have the driver take effect.
d. Verify that the driver version has been updated.
Figure 13 Verifying the driver version
3. Remove the driver.
a. Click the Start icon to enter the menu page.
b. Select Control Panel > Hardware > Device Manager.
c. Right click the network adapter whose driver is to be removed, select Properties > Driver, and then click Uninstall.
Figure 14 Removing a driver
Configuring PXE
This section describes how to enable PXE on a network adapter in the BIOS. To use the PXE feature, you must set up a PXE server. You can obtain the setup method for a PXE server from the Internet.
To configure PXE:
1. During startup of the server, press Delete or ESC as prompted to enter the BIOS Setup utility.
2. Enable PXE.
In UEFI boot mode:
a. Click the Advanced tab, select Network Stack Configuration, and then press Enter.
Figure 15 The Advanced page
b. Set Ipv4 PXE Support and Ipv6 PXE Support to Enabled.
Figure 16 Enabling PXE in UEFI mode
In legacy mode:
a. Click the Advanced tab, select Network_Adapter > MBA Configuration Menu, and then press Enter.
b. Set Legacy Boot Protocol to PXE.
Figure 17 Enabling PXE in legacy mode
3. Press F4 to save the configuration.
The server restarts automatically. During startup, press F12 at the POST phase to boot the server from PXE.
Configuring iSCSI
The iSCSI feature must cooperate with a remote network storage device. The configuration methods for network storage devices vary by device. For more information, see the related document for the storage device. This document describes only configuration on the local server.
Configuring iSCSI boot
iSCSI boot is supported only in UEFI boot mode.
To configure iSCSI boot:
1. Log in to OM.
2. Click Policy Management. In the Compute Node Network Policy area, click Create.
3. On the page that opens, specify the slot number and model for the Mezz network adapter and then click the plus sign for the port to be configured.
Select the correct slot number and port number for the Mezz network adapter according to mapping relations described in "Network adapter and ICM interconnection."
4. In the expanded area, select iSCSI for PF Type, configure iSCSI parameters, and click Save. For more information about iSCSI parameters, see the OM online help.
5. Click Policy Management > Policy Application. In the policy application list, select a compute node slot, specify a network policy, and click Apply. In the dialog box that opens, click OK.
6. Install the operating system (for example, RHEL 7.5). Specify the network disk as the system disk.
a. Press e to edit the setup parameters.
Figure 18 Pressing e to edit the setup parameters
b. Enter the ip=ibft string after quiet, and then press Ctrl-x.
Figure 19 Adding the ip=ibft string
c. Click INSTALLATION DESTINATION.
Figure 20 Clicking INSTALLATION DESTINATION
d. On the page that opens, click Add a disk… to add a network disk.
Figure 21 Adding a network disk
e. Select the target network disk, and click Done at the upper left corner.
Figure 22 Selecting the target network disk
f. Continue to install the operating system.
Configuring iSCSI SAN
This document uses Windows and RHEL 7.5 as examples to describe how to configure iSCSI SAN for the network adapter.
Windows operating systems
1. Assign an IP address to the network interface on the network adapter that connects to the iSCSI network storage device. Make sure the host and iSCSI storage device can reach each other.
Figure 23 Configuring the local IP address
2. Enable and configure iSCSI.
a. Open Control Panel, and then click iSCSI Initiator. Click OK on the dialog box that opens.
Figure 24 Clicking iSCSI Initiator
b. Click the Configuration tab, click Change…, and then configure the name of the local iSCSI initiator.
Figure 25 Configuring the name of the iSCSI initiator
c. Click the Discovery tab and click Discover Portals to add the address information about the peer device (network storage device).
Figure 26 Adding the address information about the peer device
d. Click the Targets tab, and view the status of the discovered target. If the status is inactive, click Connect to change the target status to Connected. Then, close the dialog box.
Figure 27 Connecting the target
3. Add the network disk.
Before adding the network disk, make sure the related configuration has been completed on the network storage device.
a. Open Control Panel, and then select Hardware > Device Manager > Storage controllers. Right click the iSCSI adapter, and then select Scan for hardware changes.
Figure 28 Scanning iSCSI network storage device
b. Click the Start icon to enter the menu and open Disk Management to verify that a disk which is in Unknown state is displayed.
Figure 29 Disk Management
c. Right click the disk name, and then select Online.
Figure 30 Bringing the disk online
d. Right click the disk name, and then select Initialize Disk.
Figure 31 Initializing the disk
e. Right click the Unallocated area to assign a volume to the disk as prompted.
Figure 32 Assigning a volume to the disk
Figure 33 Volume assignment completed
4. Access This PC and verify that the new volume has been added.
Figure 34 Verifying the new volume
Red Hat systems
Before configuring iSCSI SAN, make sure the iSCSI client software package has been installed on the server.
To configure iSCSI SAN in RHEL 7.5:
1. Assign an IP address to the network interface which connects to the iSCSI network storage device. Make sure the server and the iSCSI storage device can reach each other.
Figure 35 Configuring the local IP address
2. Execute the cat initiatorname.iscsi command in the /etc/iscsi directory to view the IQN of the local iSCSI initiator. If no IQN is specified, use the vi command to specify one manually.
Figure 36 Configuring the name of the local iSCSI initiator
3. Execute the iscsiadm -m discovery -t st -p target-ip command to discover the IQN of the iSCSI target (the peer iSCSI storage device). The target-ip argument represents the IP address of the peer iSCSI storage device.
Figure 37 Probing the IQN of the iSCSI target
4. Execute the iscsiadm -m node -T iqn-name -p target-ip -l command to connect to the iSCSI target. The iqn-name argument represents the IQN of the iSCSI target. The target-ip argument represents the IP address of the iSCSI target.
Figure 38 Connecting the iSCSI target
NOTE:
· To disconnect from the iSCSI target, execute the iscsiadm -m node -T iqn-name -p target-ip -u command.
· To delete the iSCSI target node information, execute the iscsiadm -m node -o delete -T iqn-name -p target-ip command.
5. Execute the lsblk command to view the newly-added network disks.
Before viewing the newly-added network disks, make sure related configuration has been finished on the network storage device.
Figure 39 Viewing the newly-added network disks
NOTE: In this example, two volumes have been created on the storage server, so two network disks are added.
6. Execute the mkfs command to format the newly-added disks.
Figure 40 Formatting a newly-added disk
7. Execute the mount command to mount the disk.
Figure 41 Mounting the disk
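For reference, the commands in steps 2 through 7 can be chained as follows. The storage IP address, target IQN, disk device, file system, and mount point are example values; substitute the ones for your environment.
# View or set the local initiator IQN
cat /etc/iscsi/initiatorname.iscsi
# Discover targets on the storage device
iscsiadm -m discovery -t st -p 192.168.10.20
# Log in to the discovered target
iscsiadm -m node -T iqn.2018-01.com.example:storage.lun1 -p 192.168.10.20 -l
# Confirm that the new network disk appears, then format and mount it
lsblk
mkfs.xfs /dev/sdb
mkdir -p /mnt/iscsi
mount /dev/sdb /mnt/iscsi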
Configuring FCoE
The FCoE feature must cooperate with the remote network storage device. The configuration method for a network storage device varies by device. This document describes only configuration on the local server.
Configuring FCoE boot
FCoE boot is supported only in UEFI boot mode.
To configure FCoE boot:
1. Log in to OM.
2. Click Policy Management. In the Compute Node Network Policy area, click Create.
3. On the page that opens, specify the slot number and model for the Mezz network adapter and click the plus sign for the port to be configured.
Select the correct slot number and port number for the Mezz network adapter according to the mapping relations described in "Network adapter and ICM interconnection."
4. In the expanded area, select FCoE for PF Type, configure FCoE parameters, and click Save. For more information about FCoE parameters, see the OM online help.
5. Click Policy Management > Policy Application. In Policy Application List, select a compute node slot, specify a network policy, and click Apply. In the dialog box that opens, click OK.
6. Install the operating system (for example, RHEL 7.5) and specify the network disk as the system disk.
a. Press e to edit the setup parameters.
Figure 42 Pressing e to edit the setup parameters
b. Enter the ip=ibft string after quiet, and then press Ctrl-x.
Figure 43 Adding the ip=ibft string
c. Click INSTALLATION DESTINATION.
Figure 44 Clicking INSTALLATION DESTINATION
d. Click Add a disk… to add a network disk.
Figure 45 Adding a network disk
e. On the page that opens, select the target network disk, and then click Done in the upper left corner.
Figure 46 Selecting the target network disk
f. Continue to install the operating system.
Configuring FCoE SAN
This document uses Windows, RHEL 7.5, and CAS E0706 as examples to describe how to configure FCoE SAN for the network adapter.
Windows operating systems
1. Configure FCoE on the FCoE storage device and switching fabric modules and make sure the FCoE link is unblocked. For how to configure FCoE on a switching fabric module, see the related command reference and configuration guide.
2. Open Control Panel, select Hardware > Device Manager > Storage controllers, right click the FCoE adapter, and then select Scan for hardware changes.
Figure 47 Scanning for FCoE network storage device
3. Click the Start icon to enter the menu and open Disk Management. Verify that a disk is in Unknown state.
Figure 48 Disk Management
4. Right click the disk name and select Online.
Figure 49 Bringing the disk online
5. Right click the disk name and select Initialize Disk.
Figure 50 Initializing the disk
6. Right click the Unallocated area and assign a volume to the disk as prompted.
Figure 51 Assigning a volume to the disk
Volume assignment has finished.
Figure 52 Volume assignment completed
7. Access This PC and verify that the new volume has been added.
Figure 53 Verifying the new volume
Red Hat systems
1. Configure FCoE on the FCoE storage device and switching fabric modules and make sure the FCoE link is unblocked. For how to configure FCoE on a switching fabric module, see the related command reference and configuration guide.
2. Execute the service fcoe start and service lldpad start commands to enable the FCoE and LLDP services, respectively.
Figure 54 Enabling the FCoE and LLDP services
3. Execute the service fcoe status and service lldpad status commands to verify that the FCoE and LLDP services are enabled.
Figure 55 Verifying the state of the FCoE and LLDP services
4. Execute the cp cfg-ethX cfg-ethM command in the /etc/fcoe directory to create and copy a configuration file for the FCoE port. The cfg-ethM argument represents the port used for FCoE connection.
Figure 56 Creating and copying a configuration file for the FCoE port
5. Execute the vi cfg-ethM command to edit and save the configuration file. Make sure the value of the FCOE_ENABLE field is yes and the value of the DCB_REQUIRED field is no.
Figure 57 Editing the configuration file
6. Execute the lldptool set-lldp -i ethM adminStatus=disabled command to set the LLDP management state to disabled. Verify that the value of the adminStatus field of ethM in the /var/lib/lldpad/lldpad.conf configuration file is 0.
If the command execution fails, add adminStatus = 0 to ethM for lldp in the configuration file manually.
Figure 58 Disabling LLDP management
7. Execute the service fcoe restart and service lldpad restart commands to restart the FCoE and LLDP services, respectively.
Figure 59 Restarting the FCoE and LLDP services
8. Execute the ifconfig command to verify that a subinterface for ethM has been created. The subinterface number is the VSAN number configured on the switching fabric module.
Figure 60 Verifying that a subinterface for ethM has been created
9. Execute the lsblk command to view the newly-added network disk.
Before viewing the newly-added network disk, make sure the related configuration has been finished on the network storage device.
Figure 61 Viewing the newly-added network disk
10. Format and mount the network disk. A consolidated sketch of the commands in steps 2 through 9 follows.
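In this sketch, ethM is a placeholder for the FCoE-facing port name, and the commands are the ones used in steps 2 through 9.
# Start the FCoE and LLDP services and check their status
service fcoe start
service lldpad start
service fcoe status
service lldpad status
# Create the port configuration file from the template
cd /etc/fcoe
cp cfg-ethX cfg-ethM
# Edit cfg-ethM so that FCOE_ENABLE is yes and DCB_REQUIRED is no
# Disable LLDP management on the port, then restart both services
lldptool set-lldp -i ethM adminStatus=disabled
service fcoe restart
service lldpad restart
# Verify the FCoE VLAN subinterface and the new network disk
ifconfig
lsblk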
CAS systems
1. Configure FCoE on the FCoE storage device and ICMs and make sure the FCoE link is unblocked. For how to configure FCoE on an ICM, see the related command reference and configuration guide and H3C UniServer B16000 Configuration Examples.
2. Access the operating system through KVM or remote login.
¡ If you access the operating system through KVM, select Local Command Shell to enter the CLI.
Figure 62 Selecting Local Command Shell
¡ If you access the operating system through remote login (for example, SSH), connect to the CLI of the operating system.
3. Execute the service fcoe start and service lldpad start commands to enable the FCoE and LLDP services, respectively.
Figure 63 Enabling the FCoE and LLDP services
4. Execute the service fcoe status and service lldpad status commands to verify that the FCoE and LLDP services are enabled, respectively.
Figure 64 Verifying the state of the FCoE and LLDP services
5. Execute the cp cfg-ethX cfg-ethM command in the /etc/fcoe directory to create and copy a configuration file for the FCoE port. The cfg-ethM argument represents the port used for FCoE connection.
Figure 65 Creating and copying a configuration file for the FCoE port
6. Execute the vi cfg-ethM command to edit and save the configuration file. Make sure the value of the FCOE_ENABLE field is yes and the value of the DCB_REQUIRED field is no.
Figure 66 Editing the configuration file
7. Execute the lldptool set-lldp -i ethM adminStatus=disabled command to set the LLDP management state to disabled. Verify that the value for the adminStatus field of ethM in the /var/lib/lldpad/lldpad.conf configuration file is 0.
If the command execution fails, add adminStatus = 0 to ethM for lldp in the configuration file manually.
Figure 67 Disabling LLDP management
8. Execute the service fcoe restart and service lldpad restart commands to restart the FCoE and LLDP services, respectively.
Figure 68 Restarting the FCoE and LLDP services
9. Execute the ifconfig command to verify that a subinterface for ethM has been created. The subinterface number is the VSAN number configured on the ICM.
Figure 69 Verifying that a subinterface for ethM has been created
10. Execute the lsblk command to view the newly-added network disk.
Before viewing the newly-added network disk, make sure the related configuration has been finished on the network storage device.
Figure 70 Viewing the newly-added network disk
11. Format and mount the network disk.
Configuring NPAR
1. Log in to OM.
2. Click Policy Management. In the Compute Node Network Policy area, click Create.
3. On the page that opens, specify the slot number and model for the Mezz network adapter and click the plus sign for the port to be configured.
4. In the expanded area, select NPAR for Multichannel Mode, configure PF parameters, and click Save. For more information about PF parameters, see the OM online help.
5. Click Policy Management > Policy Application. In Policy Application List, select a compute node slot, specify a network policy, and click Apply. In the dialog box that opens, click OK.
Configuring SR-IOV
1. Enter the BIOS Setup utility.
2. Select Advanced > PCI Subsystem Settings, and then press Enter.
Figure 71 Advanced screen
3. Select SR-IOV Support and set it to Enabled. Press ESC until you return to the BIOS Setup main screen.
Figure 72 Setting SR-IOV Support to Enabled
4. Select Socket Configuration > IIO Configuration > Intel® VT for Directed I/O (VT-d), and then press Enter.
Figure 73 Socket Configuration screen
5. Select Intel® VT for Directed I/O (VT-d) and set it to Enable. Press ESC until you return to the BIOS Setup main screen.
Figure 74 Intel® VT for Directed I/O (VT-d) screen
6. Click the Advanced tab, select the first port of the network adapter, and press Enter. Set Multi-Function Mode to SR-IOV. Save the configuration and restart the server. Configuration on the first port applies to all ports of all the network adapters.
Figure 75 Configuring Multi-Function Mode
7. During startup, press e to edit the boot parameters. Use the arrow keys to scroll. Add intel_iommu=on to the kernel boot command line (the position shown in Figure 76) to enable IOMMU. Press Ctrl-x to continue starting the server.
Figure 76 Enabling IOMMU
8. After you enter the operating system, execute the dmesg | grep IOMMU command to verify that IOMMU is enabled.
Figure 77 Verifying that IOMMU is enabled
9. Execute the echo NUM > /sys/class/net/ethX/device/sriov_numvfs command to assign a specified number of VFs to a PF port.
The NUM argument represents the number of VFs to be assigned. The ethX argument represents the PF port name. Execute the lspci | grep BCM57840 command to verify that VFs have been assigned to the PF port successfully.
Figure 78 Assigning VFs to a PF port
10. Execute the virt-manager command to run the VM manager. Select File > New Virtual Machine to create a VM.
Figure 79 Creating a VM
11. On the New Virtual Machine page, add a virtual NIC as instructed by the callouts in Figure 80.
Figure 80 Adding a virtual NIC
12. Install the vNIC driver and execute the ifconfig ethVF hw ether xx:xx:xx:xx:xx:xx command to configure a MAC address for the vNIC. The ethVF argument represents the virtual NIC name. The xx:xx:xx:xx:xx:xx argument represents the MAC address. A consolidated sketch of the VF assignment and MAC configuration commands follows.
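In this sketch, the VF count, the PF port name eth0, the VF interface name ethVF, and the MAC address are example values.
# Confirm that IOMMU is enabled
dmesg | grep IOMMU
# Assign eight VFs to PF port eth0
echo 8 > /sys/class/net/eth0/device/sriov_numvfs
# Verify that the VFs appear as PCI functions
lspci | grep BCM57840
# Inside the VM, configure a MAC address for the VF interface
ifconfig ethVF hw ether 02:00:00:00:00:01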
Configuring advanced features
Configuring VLAN
Configuring 802.1Q VLAN
This section uses RHEL 7.5 as an example.
To configure 802.1Q VLAN in the operating system:
1. Execute the modprobe 8021q command to load the 802.1Q module.
2. Execute the ip link add link ethX name ethX.id type vlan id id command to create a VLAN interface on a physical port. The ethX argument represents the physical port name. The id argument represents the VLAN ID.
3. Execute the ip -d link show ethX.id command to verify that the VLAN interface has been created successfully.
Figure 81 Creating a VLAN interface
4. Execute the ip addr add ipaddr/mask brd brdaddr dev ethX.id and ip link set dev ethX.id up commands to assign an IP address to the VLAN interface and set the VLAN interface state to UP, respectively. The ipaddr/mask argument represents the IP address and mask of the VLAN interface. The brdaddr argument represents the broadcast address. The ethX.id argument represents the VLAN interface name.
To delete a VLAN interface, execute the ip link set dev ethX.id down and ip link delete ethX.id commands.
Figure 82 Assigning an IP address to the VLAN interface and setting the VLAN interface state to UP
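Filled in with example values (physical port eth0, VLAN 100, and IP address 192.168.100.10/24), the procedure above looks like this:
# Load the 802.1Q module and create VLAN interface eth0.100
modprobe 8021q
ip link add link eth0 name eth0.100 type vlan id 100
ip -d link show eth0.100
# Assign an IP address and bring the VLAN interface up
ip addr add 192.168.100.10/24 brd 192.168.100.255 dev eth0.100
ip link set dev eth0.100 up
# To delete the VLAN interface
ip link set dev eth0.100 down
ip link delete eth0.100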
Configuring IEEE 802.1ad Provider Bridges (QinQ)
1. Install the official lediag tool.
a. Copy the lediag tool package into the operating system.
b. Execute the tar -zxvf file_name.tar.gz command to decompress the package.
c. Execute the cd <lediag_directory> command to navigate to the directory where the decompressed package resides and then use the make command to compile the file.
2. Start the tool and switch to the corresponding network adapter.
a. After compiling, execute the ./load.sh -b10eng command in the lediag folder to enter engineering mode.
b. If the server has multiple network adapters installed, use the dev <num> command to switch to the corresponding network adapter. The <num> argument represents the number of a port on the network adapter. For example, if a network adapter has four ports, you can switch to the network adapter by specifying any of the four port numbers.
3. Enable QinQ.
a. Execute the rmmod bnx2x command to uninstall the driver.
b. Execute the nvm vlant command to enable QinQ.
c. Execute the exit command to exit engineering mode. Restart the server and enter the BIOS Setup utility to continue configuration.
Figure 83 Enabling QinQ
4. To configure QinQ in SF mode:
a. Click the Advanced tab, select network_adapter_port > Device Hardware Configuration Menu > QINQ Configuration, and then press Enter.
b. Set QINQ VLAN mode to QINQ and configure the port VLAN ID and VLAN priority.
c. After setting all the ports, press F4 to save the configuration.
Figure 84 Configuring QinQ in SF mode
5. To configure QinQ in NPAR mode:
a. Click the Advanced tab, select network_adapter_port > NIC Partitioning Configuration Menu > QINQ Configuration, and then press Enter.
b. Set QINQ VLAN mode to QINQ and configure the port VLAN ID and VLAN priority for the virtual NIC.
Figure 85 Configuring QinQ in NPAR mode
Configuring bonding (Linux)
This section uses RHEL 7.5 as an example to describe how to configure bonding in mode 6.
To configure bonding in mode 6:
1. Execute the vi ifcfg-bond0 command in the /etc/sysconfig/network-scripts/ directory to create a configuration file for bond0 and add the following information:
BOOTPROTO=static
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
IPADDR=192.168.50.88 //Configure the interface IP address for bond0
PREFIX=24 //Configure the subnet mask
GATEWAY=
DNS=
BONDING_OPTS="miimon=100 mode=6" //Set the detection interval to 100 ms and the bonding mode to 6
Figure 86 Configuring bond0
2. Edit the configuration file for a slave interface. Execute the vi ifcfg-ethX command and add the following information to the configuration file:
ONBOOT=yes
MASTER=bond0
SLAVE=yes
For other slave interfaces to be added to bond0, repeat this step. A complete example slave interface file is provided after this procedure.
Figure 87 Editing the configuration file for a slave interface
3. Execute the service network restart command to restart the network service and have bond0 take effect.
Figure 88 Restarting the network service
4. Execute the cat /proc/net/bonding/bond0 command to view information about bond0 and the network adapter. In this example, bond0 and the two slave interfaces are all in up state.
Figure 89 Viewing information about bond0
Figure 90 Viewing information about the network adapter (1)
Figure 91 Viewing information about the network adapter (2)
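The complete slave interface file referenced in step 2 might look like the following. The interface name eth0 is an example, and the DEVICE, TYPE, and BOOTPROTO fields are typical entries that may already exist in the file on your system.
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example slave interface file)
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes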
Configuring teaming (Windows)
1. Open Server Manager, and then select Local Server > NIC Teaming > Disabled to enter the NIC Teaming page.
Figure 92 Entering the NIC Teaming page
2. Select TASKS > New Team to create a team.
Figure 93 Creating a team
3. Configure the team name and select the network adapters to be added to the team. Select Additional properties, configure the properties, and then click OK.
Team creation in Switch Independent mode takes a long time.
Figure 94 Configuring a new team
4. After team creation finishes, you can view network adapter OneTeam on the Network Connections page.
Figure 95 Viewing the new network adapter
Configuring TCP offloading
1. Execute the ethtool -k ethX command to view the support and enabling state for the offload features. The ethX argument represents the port name of the network adapter.
Figure 96 Viewing the support and enabling state for the offload features
2. Execute the ethtool -K ethX feature on/off command to enable or disable an offload feature. The ethX argument represents the port name of the network adapter. The feature argument represents the offload feature name. Supported values include tso, lso, lro, gso, and gro.
Figure 97 Disabling offload features
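For example, on port eth0 (a placeholder name), you can view the offload states and then toggle individual features. The tso and lro features shown here are examples and must be supported by the driver.
# View the support and enabling state of the offload features
ethtool -k eth0
# Enable TCP segmentation offload and disable large receive offload
ethtool -K eth0 tso on
ethtool -K eth0 lro off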
FAQs
iSCSI boot cannot be used to install H3C CAS
Symptom
When iSCSI boot is used to install H3C CAS on a mirrored storage volume for the blade server, the system can identify the storage IQN but cannot access the storage during disk selection.
Solution
This issue is related to system compatibility. The ETH521i network adapter does not support using iSCSI boot to install H3C CAS, Ubuntu 12.04, or Ubuntu 14.04.
Port goes down after the network adapter speed is set to 1 Gbps
Symptom
With the Windows operating system installed on the blade server, the port goes down after the port link speed is set to 1 Gbps, but the port operates correctly if the speed is set to 10 Gbps.
Solution
The ETH521i network adapter does not support operating at 1 Gbps or in auto-negotiation mode. Use the network adapter at 10 Gbps.
Virtual port created through NPAR configuration cannot be switched to FCoE mode
Symptom
With NPAR configured, after you set the virtual port to FCoE mode in the BIOS, the OM Web interface indicates that the port still operates in NIC mode.
Solution
With NPAR configured for the network adapter, only the first virtual port of a physical port supports FCoE. For other virtual ports, FCoE configuration in the BIOS does not take effect. To create an FCoE link, use the first virtual port of a physical port.
Appendix A Specifications and features
The ETH521i network adapter (product model: NIC-ETH521i-Mb-4*10G) is a CNA module that provides four 10-GE ports. All four ports support FCoE, and the first two ports support FCoE boot. The network adapter is used in the B16000 blade server chassis to provide network interfaces that connect blade servers to ICM slots. It exchanges data with the blade server over PCIe x8 lanes and uses the four 10G-KR ports to connect to the ICMs through the midplane. It supports applications such as NIC, iSCSI, and FCoE to enable network convergence.
Figures in this section are for illustration only.
Network adapter view
The ETH521i network adapter can be applied to 2-processor half-width, 2-processor full-width, and 4-processor full-width blade servers. For the installation positions of the network adapter, see "Compatible blade servers."
Figure 98 ETH521i network adapter
Specifications
Product specifications
Table 1 ETH521i Mezz network adapter product specifications
Item | Specifications
---|---
Basic properties |
Network adapter type | CNA
Chip model | Cavium BCM57840S
Max power consumption | 12 W
Input voltage | 12 VDC
Bus type | PCIe 3.0 x8
Network properties |
Connectors | 4 × 10G-KR
Data rate | 10 Gbps
Duplex mode | Full duplex
Standards | 802.1p, 802.1q, 802.3ad, 802.3ae, 802.3x, 802.1Qbb, 802.1Qaz, 802.1Qau
Technical specifications
Table 2 ETH521i Mezz network adapter technical specifications
Category | Item | Specifications
---|---|---
Physical parameters | Dimensions (H × W × D) | 25.05 × 61.60 × 95.00 mm (0.99 × 2.43 × 3.74 in)
Physical parameters | Weight | 100 g (3.53 oz)
Environment parameters | Temperature | Operating: 5°C to 45°C (41°F to 113°F). Storage: –40°C to +70°C (–40°F to 158°F)
Environment parameters | Humidity | Operating: 8% RH to 90% RH, noncondensing. Storage: 5% RH to 95% RH, noncondensing
Environment parameters | Altitude | Operating: –60 to +5000 m (–196.85 to +16404.20 ft); the maximum acceptable temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) increase in altitude above 900 m (2952.76 ft). Storage: –60 to +5000 m (–196.85 to +16404.20 ft)
Features
Feature compatibility
Table 3 Features supported by the network adapter
Feature | Supported
---|---
Jumbo frames | √
Load balancing | √
802.1Q VLANs | √
QinQ | √
Auto negotiation | √
PXE Boot | √
FCoE | √
FCoE Boot | √* (Only in UEFI mode)
iSCSI | √
iSCSI Boot | √* (Only in UEFI mode)
SR-IOV | √
VMDq | √*
Multiple Rx Queues (RSS) | √*
TCP/IP Stateless Offloading | √*
TCP/IP Offload Engine (TOE) | √*
Wake-on-LAN | ×
RDMA | ×
NPAR | √
NCSI | ×
NIC bonding | √
NOTE: An asterisk (*) indicates that the feature is not available for VMware ESXi.
Feature description
PXE
The network adapter supports PXE boot. During booting, the blade server acts as the PXE client, obtains an IP address from the PXE server, and uses TFTP to download and run the PXE boot file.
iSCSI
All the four ports on the network adapter support iSCSI SAN and the first two ports on the network adapter support iSCSI remote boot.
iSCSI is a new storage technology which integrates SCSI interfaces and Ethernet. Based on iSCSI, the device can transmit commands and data through SCSI interfaces on the network so that cross-province and cross-city storage resource sharing can be realized among equipment rooms. iSCSI supports storage capacity expansion without service interruption and provides storage resources for multiple servers.
FCoE
All the four ports on the network adapter support FCoE SAN and the first two ports on the network adapter support FCoE boot from SAN.
FCoE encapsulates FC frames through Ethernet. It maps FCs to Ethernet and inserts FC information into Ethernet information packages, so that FC requests and data stored by SAN on the server can be transmitted through Ethernet. FCoE allows LAN and FC SAN data transmission through only one communication cable, decreasing the number of devices and cables in DC and reducing power supply and refrigeration loads.
NPAR
NPAR creates multiple PFs on each physical port, dividing the network adapter into multiple partitions.
Each physical port on the ETH521i network adapter can be divided into two PFs and the entire network adapter can be divided into eight PFs.
SR-IOV
All the four ports on the network adapter support SR-IOV.
SR-IOV allows multiple VMs to share physical network hardware resources and provides features such as I/O sharing, consolidation, isolation, migration, and simplified management. Virtualization can reduce I/O performance because of hypervisor (VM management program) overhead. To resolve this issue, PCI-SIG defined SR-IOV to create VFs. A PF is a full PCIe physical function, and a VF is a lightweight PCIe function split from a PF. VFs move data without involving the hypervisor.
You can assign a VF to an application. A VF shares the physical device resources and performs I/O without consuming CPU or hypervisor resources.
The network adapter supports a maximum of 64 VFs (numbered 0 through 63) for each PF with a step size of 8.
VLAN
· VLAN (802.1Q VLAN)
Each port on the network adapter supports a maximum of 4094 VLANs.
A network adapter only transmits packets, and does not tag or untag packets. The VLAN ID is in the range of 1 to 4094 and is assigned by the operating system.
A VLAN is a logical group of devices and users that operates at Layer 2 and Layer 3 and forms a single broadcast domain. Communication between VLANs requires Layer 3 routing. Compared with a LAN, a VLAN reduces the overhead of adding and moving devices and contains broadcasts, which enhances network security and flexibility.
· IEEE 802.1ad Provider Bridges (QinQ)
802.1Q-in-802.1Q (QinQ), defined by IEEE 802.1ad, expands the VLAN space by adding an additional 802.1Q tag (VLAN ID) to 802.1Q-tagged packets. It creates a VLAN within a VLAN to isolate traffic.
Bonding (Linux)
Bonding has the following modes:
· mode=0, round-robin policy (balance-rr)—Transmits data packets between backup devices in sequence. This mode is used commonly.
· mode=1, active-backup policy (active-backup)—Only the master device is in active state. A backup device takes over the services when the master device fails. This mode is used commonly.
· mode=2, XOR policy (balance-xor)—Transmits data packets based on a specified transmission hash policy.
· mode=3, broadcast policy—Transmits each data packet out of every backup device interface. This mode provides fault tolerance.
· mode=4, IEEE 802.3ad dynamic link aggregation (802.3ad)—Creates an aggregation group where group members share the same rated speed and full duplex mode settings. Backup device selection for traffic output is based on transmission hash policy. In this mode, the switch must support IEEE 802.3ad and have specific configurations.
· mode=5, adaptive transmit load balancing (balance-tlb)—Does not require specific switches. This mode allocates outgoing traffic to backup devices according to the device loads. If a backup device that is receiving traffic is faulty, another backup device takes over the MAC address of the faulty backup device.
· mode=6, adaptive load balancing (balance-alb)—Does not require switches. This mode integrates the balance-tlb mode and load balancing of IPv4 packet receiving. It is realized by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local device and changes the source MAC address into a unique MAC address of a backup device in bonding, allowing different peers to communicate with different MAC addresses. This mode is used commonly.
Teaming (Windows)
This section uses the teaming solution for Windows Server 2012 R2 operating system as an example.
Teaming has the following modes:
· Static teaming—A switch-dependent mode in which member NICs must connect to the same physical switch. This mode requires the support of the switch.
· Switch independent—Member NICs can be connected to different switches in active/standby mode. Load balancing aggregation can be realized only when the member NICs connect to the same switch.
· LACP—You must enable LACP on the switch first. This mode integrates multiple NICs into one logical link. Data is transmitted at the fastest speed in LACP mode.
Besides teaming configuration, you must configure the load balancing mode. Load balancing has the following modes:
· Address hash mode—When a packet arrives at the team, the device hashes the destination address information (MAC address, IP address, and port number) to select the physical NIC that sends the packet. This mode cannot control the traffic distribution. If a large amount of traffic goes to the same destination address, all of that traffic is sent by the same physical NIC.
· Hyper-V port mode—Used in Hyper-V environments. Compared with the address hash mode, this mode distributes traffic more efficiently. In this mode, data is transmitted by different physical NICs bound to the vNICs, and the binding is per vNIC instead of per VM. As a best practice, enable this mode when you use a Hyper-V external virtual switch.
· Dynamic mode—Introduced in Windows Server 2012 R2 and later. In this mode, data is evenly distributed across all NICs to make full use of bandwidth resources. This mode is the optimal load balancing mode.
TCP offloading
TCP offloading is a TCP acceleration technology that offloads TCP/IP stack processing to the NIC so that the work is completed in hardware. On a high-speed Ethernet network, for example 10-GE Ethernet, processing TCP/IP packet headers consumes significant CPU resources. Using the NIC hardware to process the headers eases the CPU burden.
Appendix B Hardware and software compatibility
Compatible operating systems
For operating systems compatible with the network adapter, contact Technical Support.
Compatible blade servers
Table 4 Compatible blade servers
Blade server model | Blade server type | Network adapter slots | Applicable slots
---|---|---|---
H3C UniServer B5700 G3 | 2-processor half-width | 3 | Mezz1, Mezz2, Mezz3
H3C UniServer B5800 G3 | 2-processor full-width | 3 | Mezz1, Mezz2, Mezz3
H3C UniServer B7800 G3 | 4-processor full-width | 6 | Mezz1, Mezz2, Mezz3, Mezz4, Mezz5, Mezz6
H3C UniServer B5700 G5 | 2-processor half-width | 3 | Mezz1, Mezz2, Mezz3
Figure 99 Network adapter installation positions on a 2-processor half-width blade server
Figure 100 Network adapter installation positions on a 2-processor full-width blade server
Figure 101 Network adapter installation positions on a 4-processor full-width blade server
Compatible ICMs
Network adapters and ICM compatibility
For information about ICM and mezzanine network adapter compatibility, contact Technical Support.
Network adapter and ICM interconnection
For details about ICM and mezzanine network adapter connections, contact Technical Support.
Network adapters connect to ICMs through the midplane. The mapping relations between a network adapter and ICMs depend on the blade server on which the network adapter resides. For installation locations of ICMs, see Figure 104.
For network adapters installed in a 2-processor half-width or full-width blade server, their mapping relations with ICMs are as shown in Figure 102.
· Network adapter in Mezz1 is connected to ICMs in slots 1 and 4.
· Network adapter in Mezz2 is connected to ICMs in slots 2 and 5.
· Network adapter in Mezz3 is connected to ICMs in slots 3 and 6.
For network adapters installed in a 4-processor full-width blade server, their mapping relations with ICMs are as shown in Figure 103.
· Network adapters in Mezz1 and Mezz4 are connected to ICMs in slots 1 and 4.
· Network adapters in Mezz2 and Mezz5 are connected to ICMs in slots 2 and 5.
· Network adapters in Mezz3 and Mezz6 are connected to ICMs in slots 3 and 6.
Figure 103 Network adapter and ICM mapping relations (4-processor full-width blade server)
Networking applications
As shown in Figure 105, the network adapters are connected to the ICMs. Each internal port of the ICMs supports 10-GE service applications, and the external ports connect to the Internet to provide Internet access for the blade servers on which the network adapters reside.
Figure 105 Mezzanine network and ICM interconnection
Appendix C Acronyms
Acronym | Full name
---|---
FC | Fibre Channel
FCoE | Fibre Channel over Ethernet
iSCSI | Internet Small Computer System Interface
NCSI | Network Controller Sideband Interface
NPAR | NIC Partitioning
PCIe | Peripheral Component Interconnect Express
PF | Physical Function
PXE | Preboot Execution Environment
RDMA | Remote Direct Memory Access
SAN | Storage Area Network
SR-IOV | Single Root I/O Virtualization
TCP | Transmission Control Protocol
VF | Virtual Function
VMDq | Virtual Machine Device Queues