03-Layer 2-LAN Switching Configuration Guide

11-LLDP configuration

Configuring LLDP

You can set an Ethernet port as a Layer 3 interface by using the port link-mode route command (see "Configuring Ethernet interfaces").

Overview

In a heterogeneous network, a standard configuration exchange platform ensures that different types of network devices from different vendors can discover one another and exchange configuration.

The Link Layer Discovery Protocol (LLDP) is specified in IEEE 802.1AB. The protocol operates on the data link layer to exchange device information between directly connected devices. With LLDP, a device sends local device information as TLV (type, length, and value) triplets in LLDP Data Units (LLDPDUs) to the directly connected devices. Local device information includes its system capabilities, management IP address, device ID, port ID, and so on. The device stores the device information in LLDPDUs from the LLDP neighbors in a standard MIB. For more information about MIBs, see Network Management and Monitoring Configuration Guide. LLDP enables a network management system to quickly detect and identify Layer 2 network topology changes.

Basic concepts

LLDP agent

An LLDP agent is an abstraction of an entity that runs LLDP. Multiple LLDP agents can run on the same interface.

LLDP agents include the following types:

·          Nearest bridge agent.

·          Nearest customer bridge agent.

·          Nearest non-TPMR bridge agent.

A Two-port MAC Relay (TPMR) is a type of bridge that has only two externally-accessible bridge ports. It supports a subset of the functions of a MAC bridge. A TPMR is transparent to all frame-based media-independent protocols except for the following:

·          Protocols destined to it.

·          Protocols destined to reserved MAC addresses that the relay function of the TPMR is configured not to forward.

LLDP exchanges packets between neighbor agents and creates and maintains neighbor information for them. Figure 1 shows the neighbor relationships for these LLDP agents. LLDP has two bridge modes: customer bridge (CB) and service bridge (SB).

Figure 1 LLDP neighbor relationships

 

LLDP frame formats

LLDP sends device information in LLDP frames. LLDP frames are encapsulated in Ethernet II or SNAP frames.

·          LLDP frame encapsulated in Ethernet II

Figure 2 Ethernet II-encapsulated LLDP frame

 

Table 1 Fields in an Ethernet II-encapsulated LLDP frame

Field

Description

Destination MAC address

MAC address to which the LLDP frame is advertised. LLDP specifies different multicast MAC addresses as destination MAC addresses for LLDP frames destined for agents of different types. This helps distinguish between LLDP frames sent and received by different agent types on the same interface. The destination MAC address is fixed to one of the following multicast MAC addresses:

·         0x0180-C200-000E for LLDP frames destined for nearest bridge agents.

·         0x0180-C200-0000 for LLDP frames destined for nearest customer bridge agents.

·         0x0180-C200-0003 for LLDP frames destined for nearest non-TPMR bridge agents.

Source MAC address

MAC address of the sending port.

Type

Ethernet type for the upper-layer protocol. This field is 0x88CC for LLDP.

Data

LLDPDU.

FCS

Frame check sequence, a 32-bit CRC value used to determine the validity of the received Ethernet frame.

 

·          LLDP frame encapsulated in SNAP

Figure 3 SNAP-encapsulated LLDP frame

 

Table 2 Fields in a SNAP-encapsulated LLDP frame

Field

Description

Destination MAC address

MAC address to which the LLDP frame is advertised. It is the same as that for Ethernet II-encapsulated LLDP frames.

Source MAC address

MAC address of the sending port.

Type

SNAP type for the upper-layer protocol. This field is 0xAAAA-0300-0000-88CC for LLDP.

Data

LLDPDU.

FCS

Frame check sequence, a 32-bit CRC value used to determine the validity of the received Ethernet frame.

 

LLDPDUs

LLDP uses LLDPDUs to exchange information. An LLDPDU comprises multiple TLVs. Each TLV carries a type of device information, as shown in Figure 4.

Figure 4 LLDPDU encapsulation format

 

An LLDPDU can carry up to 32 types of TLVs. Mandatory TLVs include Chassis ID TLV, Port ID TLV, Time to Live TLV, and End of LLDPDU TLV. Other TLVs are optional.

TLVs

A TLV is an information element that contains the type, length, and value fields.

LLDPDU TLVs include the following categories:

·          Basic management TLVs

·          Organizationally (IEEE 802.1 and IEEE 802.3) specific TLVs

·          LLDP-MED (media endpoint discovery) TLVs

Basic management TLVs are essential to device management.

Organizationally specific TLVs and LLDP-MED TLVs are used for enhanced device management. They are defined by standardization or other organizations and are optional to LLDPDUs.

·          Basic management TLVs

Table 3 lists the basic management TLV types. Some of them are mandatory to LLDPDUs.

Table 3 Basic management TLVs

Type

Description

Remarks

Chassis ID

Specifies the bridge MAC address of the sending device.

Mandatory.

Port ID

Specifies the ID of the sending port.

·         If the LLDPDU carries LLDP-MED TLVs, the port ID TLV carries the MAC address of the sending port.

·         Otherwise, the port ID TLV carries the port name.

Time to Live

Specifies the life of the transmitted information on the receiving device.

End of LLDPDU

Marks the end of the TLV sequence in the LLDPDU.

Port Description

Specifies the description of the sending port.

Optional.

System Name

Specifies the assigned name of the sending device.

System Description

Specifies the description of the sending device.

System Capabilities

Identifies the primary functions of the sending device and the enabled primary functions.

Management Address

Specifies the following elements:

·         The management address of the local device.

·         The interface number and object identifier (OID) associated with the address.

 

·          IEEE 802.1 organizationally specific TLVs

Table 4 IEEE 802.1 organizationally specific TLVs

Type

Description

Port VLAN ID

Specifies the port's VLAN identifier (PVID).

Port And Protocol VLAN ID

Indicates whether the device supports protocol VLANs and, if so, what VLAN IDs these protocols will be associated with.

VLAN Name

Specifies the textual name of any VLAN to which the port belongs.

Protocol Identity

Indicates protocols supported on the port.

DCBX

Data center bridging exchange protocol.

EVB module

Edge Virtual Bridging module, comprising EVB TLV and CDCP TLV.

NOTE:

The switch does not support EVB TLV and CDCP TLV in the current software version.

Link Aggregation

Indicates whether the port supports link aggregation, and if yes, whether link aggregation is enabled.

Management VID

Management VLAN ID.

VID Usage Digest

VLAN ID usage digest.

ETS Configuration

Enhanced Transmission Selection configuration.

ETS Recommendation

ETS recommendation.

PFC

Priority-based Flow Control.

APP

Application protocol.

 

 

NOTE:

·      H3C devices support only receiving protocol identity TLVs and VID usage digest TLVs.

·      Layer 3 Ethernet ports support only link aggregation TLVs.

 

·          IEEE 802.3 organizationally specific TLVs

Table 5 IEEE 802.3 organizationally specific TLVs

Type

Description

MAC/PHY Configuration/Status

Contains the bit-rate and duplex capabilities of the port, support for autonegotiation, enabling status of autonegotiation, and the current rate and duplex mode.

Power Via MDI

Contains the power supply capability of the port:

·         Port class (PSE or PD).

·         Power supply mode.

·         Whether PSE power supply is supported.

·         Whether PSE power supply is enabled.

·         Whether pair selection can be controllable.

Maximum Frame Size

Indicates the supported maximum frame size, which is currently the MTU of the port.

Power Stateful Control

Indicates the power state control configured on the sending port, including the following:

·         Power supply mode of the PSE/PD.

·         PSE/PD priority.

·         PSE/PD power.

 

 

NOTE:

The Power Stateful Control TLV is defined in IEEE P802.3at D1.0 and is not supported in later versions. H3C devices send this type of TLV only after receiving it.

 

·          LLDP-MED TLVs

LLDP-MED TLVs provide multiple advanced applications for voice over IP (VoIP), such as basic configuration, network policy configuration, and address and directory management. LLDP-MED TLVs provide a cost-effective and easy-to-use solution for deploying voice devices in Ethernet. LLDP-MED TLVs are shown in Table 6.

Table 6 LLDP-MED TLVs

Type

Description

LLDP-MED Capabilities

Allows a network device to advertise the LLDP-MED TLVs that it supports.

Network Policy

Allows a network device or terminal device to advertise the VLAN ID of a port, the VLAN type, and the Layer 2 and Layer 3 priorities for specific applications.

Extended Power-via-MDI

Allows a network device or terminal device to advertise power supply capability. This TLV is an extension of the Power Via MDI TLV.

Hardware Revision

Allows a terminal device to advertise its hardware version.

Firmware Revision

Allows a terminal device to advertise its firmware version.

Software Revision

Allows a terminal device to advertise its software version.

Serial Number

Allows a terminal device to advertise its serial number.

Manufacturer Name

Allows a terminal device to advertise its vendor name.

Model Name

Allows a terminal device to advertise its model name.

Asset ID

Allows a terminal device to advertise its asset ID. The typical case is that the user specifies the asset ID for the endpoint to facilitate directory management and asset tracking.

Location Identification

Allows a network device to advertise the appropriate location identifier information for a terminal device to use in the context of location-based applications.

 

 

NOTE:

·      If the MAC/PHY configuration/status TLV is not advertisable, none of the LLDP-MED TLVs will be advertised even if they are advertisable.

·      If the LLDP-MED capabilities TLV is not advertisable, the other LLDP-MED TLVs will not be advertised even if they are advertisable.

 

Management address

The network management system uses the management address of a device to identify and manage the device for topology maintenance and network management. The management address is encapsulated in the management address TLV.

Work mechanism

LLDP operating modes

An LLDP agent can operate in one of the following modes:

·          TxRx mode—An LLDP agent in this mode can send and receive LLDP frames.

·          Tx mode—An LLDP agent in this mode can only send LLDP frames.

·          Rx mode—An LLDP agent in this mode can only receive LLDP frames.

·          Disable mode—An LLDP agent in this mode cannot send or receive LLDP frames.

Each time the LLDP operating mode of an LLDP agent changes, its LLDP protocol state machine reinitializes. A configurable reinitialization delay prevents frequent initializations caused by frequent operating mode changes. If you configure the reinitialization delay, an LLDP agent must wait the specified amount of time to initialize LLDP after the LLDP operating mode changes.

Transmitting LLDP frames

An LLDP agent operating in TxRx mode or Tx mode sends LLDP frames to its directly connected devices both periodically and when the local configuration changes. To prevent LLDP frames from overwhelming the network during times of frequent changes to local device information, LLDP uses the token bucket mechanism to rate limit LLDP frames. For more information about the token bucket mechanism, see ACL and QoS Configuration Guide.

LLDP automatically enables the fast LLDP frame transmission mechanism in either of the following cases:

·          A new LLDP frame is received and carries device information new to the local device.

·          The LLDP operating mode of the LLDP agent changes from Disable or Rx to TxRx or Tx.

The fast LLDP frame transmission mechanism successively sends the specified number of LLDP frames at a configurable fast LLDP frame transmission interval. The mechanism helps LLDP neighbors discover the local device as soon as possible. Then, the normal LLDP frame transmission interval resumes.

Receiving LLDP frames

An LLDP agent operating in TxRx mode or Rx mode confirms the validity of TLVs carried in every received LLDP frame. If the TLVs are valid, the LLDP agent saves the information and starts an aging timer. When the TTL value in the Time To Live TLV carried in the LLDP frame becomes zero, the information ages out immediately.

Protocols and standards

·          IEEE 802.1AB-2005, Station and Media Access Control Connectivity Discovery

·          IEEE 802.1AB-2009, Station and Media Access Control Connectivity Discovery

·          ANSI/TIA-1057, Link Layer Discovery Protocol for Media Endpoint Devices

·          DCB Capability Exchange Protocol Specification Rev 1.0

·          DCB Capability Exchange Protocol Base Specification Rev 1.01

·          IEEE Std 802.1Qaz-2011: Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment 18: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes

LLDP configuration task list

Tasks at a glance

Performing basic LLDP configuration:

(Required.) Enabling LLDP

(Optional.) Configuring the LLDP bridge mode

(Optional.) Setting the LLDP operating mode

(Optional.) Setting the LLDP re-initialization delay

(Optional.) Enabling LLDP polling

(Optional.) Configuring the advertisable TLVs

(Optional.) Configuring the management address and its encoding format

(Optional.) Setting other LLDP parameters

(Optional.) Setting an encapsulation format for LLDP frames

(Optional.) Disabling LLDP PVID inconsistency check

(Optional.) Configuring CDP compatibility

(Optional.) Configuring DCBX

(Optional.) Configuring LLDP trapping and LLDP-MED trapping

 

Performing basic LLDP configuration

Enabling LLDP

To make LLDP take effect on specific ports, you must enable LLDP both globally and on these ports.

To enable LLDP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable LLDP globally.

lldp global enable

By default, LLDP is disabled globally.

3.       Enter Layer 2/Layer 3 Ethernet interface view, Layer 2/Layer 3 aggregate interface view, or IRF physical interface view.

interface interface-type interface-number

N/A

4.       (Optional.) Enable LLDP.

lldp enable

By default, LLDP is enabled on a port.

 

 

NOTE:

The switch supports configuring LLDP on IRF physical interfaces for you to check connections and view link status on IRF physical interfaces.
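Putting the steps above together, a minimal CLI session might look like the following sketch. The interface name Ten-GigabitEthernet 1/0/1 is a placeholder; substitute an interface that exists on your switch.

```
<Sysname> system-view
[Sysname] lldp global enable
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp enable
```

Because LLDP is enabled on ports by default, the last command is needed only if LLDP was previously disabled on the port.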

 

Configuring the LLDP bridge mode

The following LLDP bridge modes are available:

·          Service bridge mode—In service bridge mode, LLDP supports nearest bridge agents and nearest non-TPMR bridge agents. LLDP processes the LLDP frames with destination MAC addresses for these agents and transparently transmits the LLDP frames with other destination MAC addresses in the VLAN.

·          Customer bridge mode—In customer bridge mode, LLDP supports nearest bridge agents, nearest non-TPMR bridge agents, and nearest customer bridge agents. LLDP processes the LLDP frames with destination MAC addresses for these agents and transparently transmits the LLDP frames with other destination MAC addresses in the VLAN.

To configure the LLDP bridge mode:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure LLDP to operate in service bridge mode.

lldp mode service-bridge

By default, LLDP operates in customer bridge mode.

 

Setting the LLDP operating mode

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2/Layer 3 Ethernet interface view, Layer 2/Layer 3 aggregate interface view, or IRF physical interface view.

interface interface-type interface-number

N/A

3.       Set the LLDP operating mode.

·         In Layer 2/Layer 3 Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] admin-status { disable | rx | tx | txrx }

·         In Layer 2/Layer 3 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } admin-status { disable | rx | tx | txrx }

·         In IRF physical interface view:
lldp admin-status { disable | rx | tx | txrx }

By default:

·         The nearest bridge agent operates in txrx mode.

·         The nearest customer bridge agent and nearest non-TPMR bridge agent operate in disable mode.

In Ethernet interface view, if no agent type is specified, the command configures the operating mode for nearest bridge agents.

In aggregate interface view, you can configure the operating mode for only nearest customer bridge agents and nearest non-TPMR bridge agents.

In IRF physical interface view, you can configure the operating mode for only nearest bridge agents.
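As an illustrative sketch (the interface number is a placeholder), the following session sets the nearest bridge agent to Rx mode and the nearest customer bridge agent to TxRx mode on an Ethernet interface. Omitting the agent keyword targets the nearest bridge agent:

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp admin-status rx
[Sysname-Ten-GigabitEthernet1/0/1] lldp agent nearest-customer admin-status txrx
```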

 

Setting the LLDP re-initialization delay

When the LLDP operating mode changes on a port, the port initializes the protocol state machines after an LLDP reinitialization delay. By adjusting the delay, you can avoid frequent initializations caused by frequent changes to the LLDP operating mode on a port.

To set the LLDP re-initialization delay for ports:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Set the LLDP re-initialization delay.

lldp timer reinit-delay delay

The default setting is 2 seconds.

 

Enabling LLDP polling

With LLDP polling enabled, a device periodically searches for local configuration changes. When the device detects a configuration change, it sends LLDPDUs to inform neighboring devices of the change.

To enable LLDP polling:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2/Layer 3 Ethernet interface view, Layer 2/Layer 3 aggregate interface view, or IRF physical interface view.

interface interface-type interface-number

N/A

3.       Enable LLDP polling and set the polling interval.

·         In Layer 2/Layer 3 Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] check-change-interval interval

·         In Layer 2/Layer 3 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } check-change-interval interval

·         In IRF physical interface view:
lldp check-change-interval interval

By default, LLDP polling is disabled.
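For example, to have the nearest bridge agent poll for local configuration changes every 30 seconds on an Ethernet interface (the interface number is a placeholder):

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp check-change-interval 30
```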

 

Configuring the advertisable TLVs

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2/Layer 3 Ethernet interface view, Layer 2/Layer 3 aggregate interface view, or IRF physical interface view.

interface interface-type interface-number

N/A

3.       Configure the advertisable TLVs (in Layer 2 Ethernet interface view).

·         lldp tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation | dcbx | protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id ] | management-vid [ mvlan-id ] } | dot3-tlv { all | mac-physic | max-frame-size | power } | med-tlv { all | capability | inventory | network-policy | power-over-ethernet | location-id { civic-address device-type country-code { ca-type ca-value }&<1-10> | elin-address tel-number } } }

·         lldp agent nearest-nontpmr tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation } }

·         lldp agent nearest-customer tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation } }

By default:

·         Nearest bridge agents can advertise all LLDP TLVs except the DCBX, location identification, port and protocol VLAN ID, VLAN name, and management VLAN ID TLVs.

·         Nearest non-TPMR bridge agents advertise no TLVs.

·         Nearest customer bridge agents can advertise basic TLVs and IEEE 802.1 organizationally specific TLVs.

4.       Configure the advertisable TLVs (in Layer 3 Ethernet interface view).

·         lldp tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | link-aggregation } | dot3-tlv { all | mac-physic | max-frame-size | power } | med-tlv { all | capability | inventory | power-over-ethernet | location-id { civic-address device-type country-code { ca-type ca-value }&<1-10> | elin-address tel-number } } }

·         lldp agent { nearest-nontpmr | nearest-customer } tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | link-aggregation } }

By default:

·         Nearest bridge agents can advertise all LLDP TLVs (only link aggregation TLV in 802.1 organizationally specific TLVs) except network policy TLVs.

·         Nearest non-TPMR bridge agents advertise no TLVs.

·         Nearest customer bridge agents can advertise basic TLVs and IEEE 802.1 organizationally specific TLVs (only link aggregation TLV).

5.       Configure the advertisable TLVs (in Layer 2 aggregate interface view).

·         lldp agent nearest-nontpmr tlv-enable { basic-tlv { all | management-address-tlv [ ip-address ] | port-description | system-capability | system-description | system-name } | dot1-tlv { all | port-vlan-id } }

·         lldp agent nearest-customer tlv-enable { basic-tlv { all | management-address-tlv [ ip-address ] | port-description | system-capability | system-description | system-name } | dot1-tlv { all | port-vlan-id } }

·         lldp tlv-enable dot1-tlv { protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id ] | management-vid [ mvlan-id ] }

By default:

·         Nearest non-TPMR bridge agents advertise no TLVs.

·         Nearest customer bridge agents can advertise basic TLVs and IEEE 802.1 organizationally specific TLVs (only port and protocol VLAN ID TLV, VLAN name TLV, and management VLAN ID TLV).

·         Nearest bridge agents are not supported on Layer 2 aggregate interfaces.

6.       Configure the advertisable TLVs (in Layer 3 aggregate interface view).

lldp agent { nearest-customer | nearest-nontpmr } tlv-enable basic-tlv { all | management-address-tlv [ ip-address ] | port-description | system-capability | system-description | system-name }

By default:

·         Nearest non-TPMR bridge agents advertise no TLVs.

·         Nearest customer bridge agents can advertise only basic TLVs.

·         Nearest bridge agents are not supported on Layer 3 aggregate interfaces.

7.       Configure the advertisable TLVs (in IRF physical interface view).

lldp tlv-enable basic-tlv { port-description | system-capability | system-description | system-name }

By default, an agent can advertise all supported TLVs.
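As an example tied to the defaults above, nearest bridge agents do not advertise the DCBX TLV by default. The following sketch (the interface number is a placeholder) enables it in Layer 2 Ethernet interface view:

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp tlv-enable dot1-tlv dcbx
```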

 

Configuring the management address and its encoding format

LLDP encodes management addresses in numeric or string format in management address TLVs.

By default, management addresses are encoded in numeric format. If a neighbor encodes its management address in string format, configure the encoding format of the management address as string on the connecting port. This guarantees normal communication with the neighbor.

To configure a management address to be advertised and its encoding format on a port:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2/Layer 3 Ethernet interface view, or Layer 2/Layer 3 aggregate interface view.

interface interface-type interface-number

N/A

3.       Allow LLDP to advertise the management address in LLDP frames and configure the advertised management address.

·         In Layer 2/Layer 3 Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] tlv-enable basic-tlv management-address-tlv [ ip-address ]

·         In Layer 2/Layer 3 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } tlv-enable basic-tlv management-address-tlv [ ip-address ]

By default:

·         Nearest bridge agents and nearest customer bridge agents can advertise the management address in LLDP frames.

·         Nearest non-TPMR bridge agents cannot advertise the management address in LLDP frames.

4.       Configure the encoding format of the management address as string.

·         In Layer 2/Layer 3 Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] management-address-format string

·         In Layer 2/Layer 3 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } management-address-format string

By default, the encoding format of the management address is numeric.
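For example, to advertise a specific management address and encode it in string format for the nearest bridge agent (both the interface number and the IP address 192.168.1.1 are placeholders):

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp tlv-enable basic-tlv management-address-tlv 192.168.1.1
[Sysname-Ten-GigabitEthernet1/0/1] lldp management-address-format string
```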

 

Setting other LLDP parameters

The Time to Live TLV carried in an LLDPDU determines how long the device information carried in the LLDPDU can be saved on a recipient device.

By setting the TTL multiplier, you can configure the TTL of locally sent LLDPDUs, which determines how long information about the local device can be saved on a neighboring device. The TTL is expressed by using the following formula:

TTL = Min (65535, (TTL multiplier × LLDP frame transmission interval))

As the expression shows, the TTL can be up to 65535 seconds. TTLs greater than 65535 will be rounded down to 65535 seconds.
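As a quick check of the formula (plain Python for illustration, not switch CLI), the default TTL multiplier of 4 and transmission interval of 30 seconds yield a TTL of 120 seconds, and any product above the cap is limited to 65535:

```python
def lldp_ttl(hold_multiplier: int, tx_interval: int) -> int:
    """TTL carried in the Time to Live TLV: min(65535, multiplier x interval)."""
    return min(65535, hold_multiplier * tx_interval)

# Default settings: multiplier 4, transmission interval 30 seconds.
print(lldp_ttl(4, 30))    # -> 120
# The product 9 x 7500 = 67500 exceeds the cap, so the TTL is 65535 seconds.
print(lldp_ttl(9, 7500))  # -> 65535
```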

To change LLDP parameters:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Set the TTL multiplier.

lldp hold-multiplier value

The default setting is 4.

3.       Set the LLDP frame transmission interval.

lldp timer tx-interval interval

The default setting is 30 seconds.

4.       Set the token bucket size for sending LLDP frames.

lldp max-credit credit-value

The default setting is 5.

5.       Set the LLDP frame transmission delay.

lldp timer tx-delay delay

The default setting is 2 seconds.

6.       Set the number of LLDP frames sent each time fast LLDP frame transmission is triggered.

lldp fast-count count

The default setting is 4.

7.       Set an interval for fast LLDP frame transmission.

lldp timer fast-interval interval

The default setting is 1 second.
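For example, the following sketch raises the TTL multiplier to 6 and shortens the transmission interval to 20 seconds, which by the formula above advertises a TTL of Min (65535, 6 × 20) = 120 seconds:

```
<Sysname> system-view
[Sysname] lldp hold-multiplier 6
[Sysname] lldp timer tx-interval 20
```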

 

Setting an encapsulation format for LLDP frames

LLDP frames can be encapsulated in the following formats:

·          Ethernet II—With Ethernet II encapsulation configured, an LLDP port sends LLDP frames in Ethernet II frames.

·          SNAP—With SNAP encapsulation configured, an LLDP port sends LLDP frames in SNAP frames.

To set the encapsulation format for LLDP frames to SNAP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2/Layer 3 Ethernet interface view, Layer 2/Layer 3 aggregate interface view, or IRF physical interface view.

interface interface-type interface-number

N/A

3.       Set the encapsulation format for LLDP frames to SNAP.

·         In Layer 2/Layer 3 Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] encapsulation snap

·         In Layer 2/Layer 3 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } encapsulation snap

·         In IRF physical interface view:
lldp encapsulation snap

By default, Ethernet II encapsulation format applies.

 

 

NOTE:

LLDP of earlier versions requires the same encapsulation format on both ends to process LLDP frames. For this reason, to communicate stably with a neighboring device running LLDP of earlier versions, the local device should be configured with the same encapsulation format.
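For example, to switch the nearest bridge agent on an Ethernet interface to SNAP encapsulation (the interface number is a placeholder):

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp encapsulation snap
```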

 

Disabling LLDP PVID inconsistency check

By default, when the system receives an LLDP packet, it compares the PVID value contained in the packet with the PVID configured on the receiving interface. If the two PVIDs do not match, a log message will be printed to notify the user.

You can disable PVID inconsistency check if different PVIDs are required on a link.

To disable LLDP PVID inconsistency check:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Disable LLDP PVID inconsistency check.

lldp ignore-pvid-inconsistency

By default, LLDP PVID inconsistency check is enabled.

 

Configuring CDP compatibility

When the switch is directly connected to a Cisco device that supports only CDP rather than LLDP, you can enable CDP compatibility to enable the switch to exchange information with the directly-connected device.

With CDP compatibility enabled on the switch, the switch can use LLDP to perform the following tasks:

·          Receive and recognize the CDP packets received from the directly-connected device.

·          Send CDP packets to the directly-connected device.

The packets that the switch sends to the neighboring CDP device carry the device ID, the ID of the port connecting to the neighboring device, the port IP address, the PVID, and the TTL. The port IP address is the main IP address of the VLAN interface in up state. The VLAN interface must have the lowest VLAN ID among all VLANs permitted on the port. If none of the VLAN interfaces of the permitted VLANs is assigned an IP address or all VLAN interfaces are down, no port IP address will be advertised.

The CDP neighbor-information-related fields in the output of the display lldp neighbor-information command show the CDP neighboring device information that can be recognized by the switch. For more information about the display lldp neighbor-information command, see Layer 2—LAN Switching Command Reference.

Configuration prerequisites

Before you configure CDP compatibility, complete the following tasks:

·          Globally enable LLDP.

·          Enable LLDP on the port connecting to a device supporting CDP.

·          Configure the port to operate in TxRx mode.

Configuration procedure

CDP-compatible LLDP operates in one of the following modes:

·          TxRx—CDP packets can be transmitted and received.

·          Disable—CDP packets cannot be transmitted or received.

To make CDP-compatible LLDP take effect on specific ports, follow these steps:

1.        Enable CDP-compatible LLDP globally.

2.        Configure CDP-compatible LLDP to operate in TxRx mode on the port.

The maximum TTL value that CDP allows is 255 seconds. To make CDP-compatible LLDP work correctly with Cisco IP phones, configure the LLDP frame transmission interval to be no more than 1/3 of the TTL value.

To enable LLDP to be compatible with CDP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable CDP compatibility globally.

lldp compliance cdp

By default, CDP compatibility is disabled globally.

3.       Enter Layer 2 or Layer 3 Ethernet interface view.

interface interface-type interface-number

N/A

4.       Configure CDP-compatible LLDP to operate in TxRx mode.

lldp compliance admin-status cdp txrx

By default, CDP-compatible LLDP operates in disable mode.
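A complete sketch of the procedure follows (the interface number is a placeholder). The default transmission interval of 30 seconds already satisfies the 1/3-of-TTL guidance for Cisco IP phones (30 is no more than 255/3 = 85), so setting it is shown only for clarity:

```
<Sysname> system-view
[Sysname] lldp global enable
[Sysname] lldp timer tx-interval 30
[Sysname] lldp compliance cdp
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp admin-status txrx
[Sysname-Ten-GigabitEthernet1/0/1] lldp compliance admin-status cdp txrx
```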

 

Configuring DCBX

Data Center Ethernet (DCE), also known as Converged Enhanced Ethernet (CEE), is an enhancement and expansion of traditional Ethernet local area networks for use in data centers. DCE uses the Data Center Bridging Exchange Protocol (DCBX) to negotiate and remotely configure the bridge capability of network elements.

DCBX has the following self-adaptable versions:

·          DCB Capability Exchange Protocol Specification Rev 1.00.

·          DCB Capability Exchange Protocol Base Specification Rev 1.01.

·          IEEE Std 802.1Qaz-2011 (Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment 18: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes).

DCBX offers the following functions:

·          Discovers the peer devices' capabilities and determines whether devices at both ends support these capabilities.

·          Detects configuration errors on peer devices.

·          Remotely configures the peer device if the peer device accepts the configuration.

 

 

NOTE:

H3C devices support only the remote configuration function.

 

Figure 5 DCBX application scenario

 

DCBX enables lossless packet transmission on DCE networks.

As shown in Figure 5, DCBX applies to an FCoE-based data center network and operates on an access switch. DCBX enables the switch to control the server adapter, which simplifies configuration and guarantees configuration consistency. DCBX extends LLDP by using the IEEE 802.1 organizationally specific TLVs (DCBX TLVs) to transmit DCBX data, including:

·          In DCBX Rev 1.00 and DCBX Rev 1.01:

¡  Application Protocol (APP).

¡  Enhanced Transmission Selection (ETS).

¡  Priority-based Flow Control (PFC).

·          In IEEE Std 802.1Qaz-2011:

¡  ETS Configuration.

¡  ETS Recommendation.

¡  PFC.

¡  APP.

H3C devices can send these types of DCBX information to a server adapter supporting FCoE, but they cannot receive the information.

DCBX configuration task list

Tasks at a glance

 

(Required.) Enabling LLDP and DCBX TLV advertising

(Required.) Configuring the DCBX version

(Required.) Configuring APP parameters

 

(Optional.) Configuring ETS parameters:

·         Configuring the 802.1p-to-local priority mapping

·         Configuring group-based WRR queuing

(Required.) Configuring PFC parameters

 

 

Enabling LLDP and DCBX TLV advertising

To enable the device to advertise APP, ETS, and PFC data through an interface, perform the following tasks:

·          Enable LLDP globally.

·          Enable LLDP and DCBX TLV advertising on the interface.

To enable LLDP and DCBX TLV advertising:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable LLDP globally.

lldp global enable

By default, LLDP is disabled globally.

3.       Enter Layer 2 Ethernet interface view.

interface interface-type interface-number

N/A

4.       Enable LLDP.

lldp enable

By default, LLDP is enabled on an interface.

5.       Enable the interface to advertise DCBX TLVs.

lldp tlv-enable dot1-tlv dcbx

By default, DCBX TLV advertising is disabled on an interface.

 

Configuring the DCBX version

When you configure the DCBX version, follow these restrictions and guidelines:

·          For DCBX to work correctly, configure the same DCBX version on the local port and the peer port. As a best practice, configure the highest version supported on both ends. The versions, in descending order, are IEEE Std 802.1Qaz-2011, DCBX Rev 1.01, and DCBX Rev 1.00.

·          After the configuration, LLDP frames sent by the local port carry information about the configured DCBX version. The local port and peer port do not negotiate the DCBX version.

·          When the DCBX version is autonegotiated, IEEE Std 802.1Qaz-2011 is preferred.

To configure the DCBX version:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view.

interface interface-type interface-number

N/A

3.       Configure the DCBX version.

dcbx version { rev100 | rev101 | standard }

By default, the DCBX version is not configured. It is autonegotiated by the local port and peer port.
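For example, the following sketch configures IEEE Std 802.1Qaz-2011 (the highest version) on an interface. The interface name is an example:

<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] dcbx version standard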

 

Configuring APP parameters

The device negotiates with the server adapter by using the APP parameters to achieve the following purposes:

·          Control the 802.1p priority values of the protocol packets that the server adapter sends.

·          Identify traffic based on the 802.1p priority values.

For example, the device can use the APP parameters to negotiate with the server adapter to set 802.1p priority 3 for all FCoE and FIP frames. When the negotiation succeeds, all FCoE and FIP frames that the server adapter sends to the device carry the 802.1p priority 3.

Configuration restrictions and guidelines

When you configure APP parameters, follow these restrictions and guidelines:

·          An Ethernet frame header ACL identifies application protocol packets by protocol number.

·          An IPv4 advanced ACL identifies application protocol packets by IP port number.

·          DCBX Rev 1.00 identifies application protocol packets only by protocol number and advertises TLVs with protocol number 0x8906 (FCoE) only.

·          DCBX Rev 1.01 has the following attributes:

¡  Supports identifying application protocol packets by both protocol number and IP port number.

¡  Does not restrict the protocol number or IP port number for advertising TLVs.

¡  Can advertise up to 77 TLVs, depending on the remaining length of the LLDP frame.

Configuration procedure

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an Ethernet frame header ACL or an IPv4 advanced ACL and enter ACL view.

acl number acl-number [ name acl-name ] [ match-order { auto | config } ]

An Ethernet frame header ACL number is in the range of 4000 to 4999. An IPv4 advanced ACL number is in the range of 3000 to 3999.

DCBX Rev 1.00 supports only Ethernet frame header ACLs. DCBX Rev 1.01 and IEEE Std 802.1Qaz-2011 support both Ethernet frame header ACLs and IPv4 advanced ACLs.

3.       Create a rule for the ACL.

·         For the Ethernet frame header ACL:
rule [ rule-id ] permit type protocol-type ffff

·         For the IPv4 advanced ACL:
rule [ rule-id ] permit { tcp | udp } destination-port eq port

Create rules according to the type of the ACL previously created.

4.       Return to system view.

quit

N/A

5.       Create a class, specify the operator of the class as OR, and enter class view.

traffic classifier classifier-name operator or

N/A

6.       Use the specified ACL as the match criterion of the class.

if-match acl acl-number

N/A

7.       Return to system view.

quit

N/A

8.       Create a traffic behavior and enter traffic behavior view.

traffic behavior behavior-name

N/A

9.       Configure the behavior to mark packets with an 802.1p priority.

remark dot1p 8021p

N/A

10.     Return to system view.

quit

N/A

11.     Create a QoS policy and enter QoS policy view.

qos policy policy-name

N/A

12.     Associate the class with the traffic behavior in the QoS policy, and apply the association to DCBX.

classifier classifier-name behavior behavior-name mode dcbx

In a QoS policy, you can configure multiple class-behavior associations. If a packet matches multiple 802.1p priority marking or mapping actions, the action configured first takes effect.

13.     Return to system view.

quit

N/A

14.     Apply the QoS policy.

·         (Method 1) To the outgoing traffic of all ports:
qos apply policy policy-name global outbound

·         (Method 2) To the outgoing traffic of a Layer 2 Ethernet interface:

a.    Enter Layer 2 Ethernet interface view:
interface interface-type interface-number

b.    Apply the QoS policy to the outgoing traffic:
qos apply policy policy-name outbound

·         Configurations made in system view take effect on all ports.

·         Configurations made in Layer 2 Ethernet interface view take effect on the interface.
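As a sketch of the IPv4 advanced ACL variant (supported by DCBX Rev 1.01 and IEEE Std 802.1Qaz-2011), the following commands identify a hypothetical application by TCP destination port 3260 and mark its packets with 802.1p priority 4. All names, numbers, and values are examples:

<Sysname> system-view
[Sysname] acl number 3000
[Sysname-acl-adv-3000] rule permit tcp destination-port eq 3260
[Sysname-acl-adv-3000] quit
[Sysname] traffic classifier app_c2 operator or
[Sysname-classifier-app_c2] if-match acl 3000
[Sysname-classifier-app_c2] quit
[Sysname] traffic behavior app_b2
[Sysname-behavior-app_b2] remark dot1p 4
[Sysname-behavior-app_b2] quit
[Sysname] qos policy plcy2
[Sysname-qospolicy-plcy2] classifier app_c2 behavior app_b2 mode dcbx
[Sysname-qospolicy-plcy2] quit
[Sysname] qos apply policy plcy2 global outbound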

 

For more information about the acl, rule, traffic classifier, if-match, traffic behavior, remark dot1p, qos policy, classifier behavior, qos apply policy global, and qos apply policy commands, see ACL and QoS Command Reference.

Configuring ETS parameters

ETS provides committed bandwidth. To avoid packet loss caused by congestion, the device performs the following tasks:

·          Uses ETS parameters to negotiate with the server adapter.

·          Controls the server adapter's transmission speed of the specified type of traffic.

·          Guarantees that the transmission speed is within the committed bandwidth of the interface.

To configure ETS parameters, you must configure the 802.1p-to-local priority mapping and group-based WRR queuing.

Configuring the 802.1p-to-local priority mapping

You can configure the 802.1p-to-local priority mapping by using either the MQC method or the priority mapping table method. If you use both methods, the configuration made by using the MQC method takes effect.

To configure the 802.1p-to-local priority mapping by using the MQC method:

 

Step

Command

1.       Enter system view.

system-view

2.       Create a class, specify the operator of the class as OR, and enter class view.

traffic classifier classifier-name operator or

3.       Configure the class to match packets with the specified service provider network 802.1p priority values.

if-match service-dot1p 8021p-list

4.       Return to system view.

quit

5.       Create a traffic behavior and enter traffic behavior view.

traffic behavior behavior-name

6.       Configure the behavior to mark packets with the specified local precedence value.

remark local-precedence local-precedence

7.       Return to system view.

quit

8.       Create a QoS policy and enter QoS policy view.

qos policy policy-name

9.       Associate the class with the traffic behavior in the QoS policy, and apply the association to DCBX.

classifier classifier-name behavior behavior-name mode dcbx
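The following sketch maps 802.1p priority 3 to local precedence 3 by using the MQC method. The class, behavior, and policy names are examples:

<Sysname> system-view
[Sysname] traffic classifier ets_c operator or
[Sysname-classifier-ets_c] if-match service-dot1p 3
[Sysname-classifier-ets_c] quit
[Sysname] traffic behavior ets_b
[Sysname-behavior-ets_b] remark local-precedence 3
[Sysname-behavior-ets_b] quit
[Sysname] qos policy ets_plcy
[Sysname-qospolicy-ets_plcy] classifier ets_c behavior ets_b mode dcbx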

 

For more information about the traffic classifier, if-match, traffic behavior, remark local-precedence, qos policy, and classifier behavior commands, see ACL and QoS Command Reference.

To configure the 802.1p priority mapping by using the priority mapping table method:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter 802.1p-to-local priority mapping table view.

qos map-table dot1p-lp

N/A

3.       Configure the priority mapping table to map the specified 802.1p priority values to a local precedence value.

import import-value-list export export-value

For information about the default priority mapping tables, see ACL and QoS Configuration Guide.

4.       Return to system view.

quit

N/A

5.       Enter Ethernet interface view.

interface interface-type interface-number

N/A

6.       Configure the interface to trust the 802.1p priority carried in packets.

qos trust dot1p

By default, an interface trusts the 802.1p priority carried in packets.

 

For more information about the qos map-table, qos map-table color, and import commands, see ACL and QoS Command Reference.

Configuring group-based WRR queuing

You can configure group-based WRR queuing to allocate bandwidth.

To configure group-based WRR queuing:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view.

interface interface-type interface-number

N/A

3.       Enable WRR queuing.

qos wrr byte-count

By default, WRR queuing is disabled.

4.       Configure a queue.

·         Add a queue to WRR priority group 1 and configure the scheduling weight for the queue:
qos wrr queue-id group 1 byte-count schedule-value

·         Configure a queue to use strict priority queuing:
qos wrr queue-id group sp

Use one or both commands.
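For example, the following sketch adds queue 1 and queue 2 to WRR priority group 1 with scheduling weights 10 and 20, and assigns queue 3 to the SP group. The interface name and weights are examples:

<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] qos wrr byte-count
[Sysname-Ten-GigabitEthernet1/0/1] qos wrr 1 group 1 byte-count 10
[Sysname-Ten-GigabitEthernet1/0/1] qos wrr 2 group 1 byte-count 20
[Sysname-Ten-GigabitEthernet1/0/1] qos wrr 3 group sp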

 

For more information about the qos wrr, qos wrr byte-count, and qos wrr group sp commands, see ACL and QoS Command Reference.

Configuring PFC parameters

To prevent packets carrying a specific 802.1p priority from being dropped, enable PFC for that priority. When network congestion occurs, PFC reduces the rate at which packets carrying the priority are sent.

The device uses PFC parameters to negotiate with the server adapter and to enable PFC for the specified 802.1p priorities on the server adapter.

To configure PFC parameters:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view.

interface interface-type interface-number

N/A

3.       Enable the Ethernet interface to automatically negotiate with its peer to decide whether to enable PFC.

priority-flow-control auto

By default, PFC is disabled.

To advertise the PFC data, you must enable PFC in autonegotiation mode.

4.       Enable PFC for the specified 802.1p priorities.

priority-flow-control no-drop dot1p dot1p-list

By default, PFC is disabled for all 802.1p priorities.

As a best practice, enable PFC for the 802.1p priority of FCoE traffic. If you enable PFC for multiple 802.1p priorities, packet loss might occur during periods of congestion.

5.       Configure the interface to trust the 802.1p priority carried in packets.

qos trust dot1p

By default, an interface trusts the 802.1p priority carried in packets.

 

For more information about the priority-flow-control and priority-flow-control no-drop dot1p commands, see Interface Command Reference.

Configuring LLDP trapping and LLDP-MED trapping

LLDP trapping or LLDP-MED trapping notifies the network management system of events such as newly detected neighboring devices and link failures.

To prevent excessive LLDP traps from being sent when the topology is unstable, set a trap transmission interval for LLDP.

To configure LLDP trapping and LLDP-MED trapping:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2/Layer 3 Ethernet interface view, Layer 2/Layer 3 aggregate interface view, or IRF physical interface view.

interface interface-type interface-number

N/A

3.       Enable LLDP trapping.

·         In Layer 2/Layer 3 Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] notification remote-change enable

·         In Layer 2/Layer 3 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } notification remote-change enable

·         In IRF physical interface view:
lldp notification remote-change enable

By default, LLDP trapping is disabled.

4.       Enable LLDP-MED trapping (in Layer 2 or Layer 3 Ethernet interface view).

lldp notification med-topology-change enable

By default, LLDP-MED trapping is disabled.

5.       Return to system view.

quit

N/A

6.       (Optional.) Set the LLDP trap transmission interval.

lldp timer notification-interval interval

The default setting is 30 seconds.
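The following sketch enables LLDP trapping and LLDP-MED trapping on an Ethernet interface and sets the trap transmission interval to 60 seconds. The interface name is an example:

<Sysname> system-view
[Sysname] interface fortygige 1/0/1
[Sysname-FortyGigE1/0/1] lldp notification remote-change enable
[Sysname-FortyGigE1/0/1] lldp notification med-topology-change enable
[Sysname-FortyGigE1/0/1] quit
[Sysname] lldp timer notification-interval 60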

 

Displaying and maintaining LLDP

Execute display commands in any view.

 

Task

Command

Display local LLDP information.

display lldp local-information [ global | interface interface-type interface-number ]

Display the information contained in the LLDP TLVs sent from neighboring devices.

display lldp neighbor-information [ [ [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ] [ verbose ] ] | list [ system-name system-name ] ]

Display LLDP statistics.

display lldp statistics [ global | [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ] ]

Display LLDP status of a port.

display lldp status [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]

Display types of advertisable optional LLDP TLVs.

display lldp tlv-config [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]

 

LLDP configuration example

Network requirements

As shown in Figure 6, the NMS and Switch A are located in the same Ethernet network. An MED device and Switch B are connected to FortyGigE 1/0/1 and FortyGigE 1/0/2 of Switch A.

Enable LLDP globally on Switch A and Switch B to perform the following tasks:

·          Monitor the link between Switch A and Switch B on the NMS.

·          Monitor the link between Switch A and the MED device on the NMS.

Figure 6 Network diagram

 

Configuration procedure

1.        Configure Switch A:

# Enable LLDP globally.

<SwitchA> system-view

[SwitchA] lldp global enable

# Enable LLDP on FortyGigE 1/0/1. By default, LLDP is enabled on ports.

[SwitchA] interface fortygige 1/0/1

[SwitchA-FortyGigE1/0/1] lldp enable

# Set the LLDP operating mode to Rx on FortyGigE 1/0/1.

[SwitchA-FortyGigE1/0/1] lldp admin-status rx

[SwitchA-FortyGigE1/0/1] quit

# Enable LLDP on FortyGigE 1/0/2. By default, LLDP is enabled on ports.

[SwitchA] interface fortygige 1/0/2

[SwitchA-FortyGigE1/0/2] lldp enable

# Set the LLDP operating mode to Rx on FortyGigE 1/0/2.

[SwitchA-FortyGigE1/0/2] lldp admin-status rx

[SwitchA-FortyGigE1/0/2] quit

2.        Configure Switch B:

# Enable LLDP globally.

<SwitchB> system-view

[SwitchB] lldp global enable

# Enable LLDP on FortyGigE 1/0/1. By default, LLDP is enabled on ports.

[SwitchB] interface fortygige 1/0/1

[SwitchB-FortyGigE1/0/1] lldp enable

# Set the LLDP operating mode to Tx on FortyGigE 1/0/1.

[SwitchB-FortyGigE1/0/1] lldp admin-status tx

[SwitchB-FortyGigE1/0/1] quit

Verifying the configuration

# Verify that:

·          FortyGigE 1/0/1 of Switch A connects to an MED device.

·          FortyGigE 1/0/2 of Switch A connects to a non-MED device.

·          Both ports operate in Rx mode, and they can receive LLDP frames but cannot send LLDP frames.

[SwitchA] display lldp status

Global status of LLDP: Enable

Bridge mode of LLDP: customer-bridge

The current number of LLDP neighbors: 2

The current number of CDP neighbors: 0

LLDP neighbor information last changed time: 0 days, 0 hours, 4 minutes, 40 seconds

Transmit interval              : 30s

Fast transmit interval         : 1s

Transmit credit max            : 5

Hold multiplier                : 4

Reinit delay                   : 2s

Trap interval                  : 30s

Fast start times               : 4

 

LLDP status information of port 1 [FortyGigE1/0/1]:

LLDP agent nearest-bridge:

Port status of LLDP            : Enable

Admin status                   : RX_Only

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 1

Number of MED neighbors        : 1

Number of CDP neighbors        : 0

Number of sent optional TLV    : 21

Number of received unknown TLV : 0

 

LLDP agent nearest-customer:

Port status of LLDP            : Enable

Admin status                   : Disable

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 16

Number of received unknown TLV : 0

 

LLDP status information of port 2 [FortyGigE1/0/2]:

LLDP agent nearest-bridge:

Port status of LLDP            : Enable

Admin status                   : RX_Only

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 1

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 21

Number of received unknown TLV : 3

 

LLDP agent nearest-nontpmr:

Port status of LLDP            : Enable

Admin status                   : Disable

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 1

Number of received unknown TLV : 0

 

LLDP agent nearest-customer:

Port status of LLDP            : Enable

Admin status                   : Disable

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 16

Number of received unknown TLV : 0

# Remove the link between Switch A and Switch B.

# Verify that FortyGigE 1/0/2 of Switch A does not connect to any neighboring devices.

[SwitchA] display lldp status

Global status of LLDP: Enable

The current number of LLDP neighbors: 1

The current number of CDP neighbors: 0

LLDP neighbor information last changed time: 0 days, 0 hours, 5 minutes, 20 seconds

Transmit interval              : 30s

Fast transmit interval         : 1s

Transmit credit max            : 5

Hold multiplier                : 4

Reinit delay                   : 2s

Trap interval                  : 30s

Fast start times               : 4

 

LLDP status information of port 1 [FortyGigE1/0/1]:

LLDP agent nearest-bridge:

Port status of LLDP            : Enable

Admin status                   : RX_Only

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 1

Number of MED neighbors        : 1

Number of CDP neighbors        : 0

Number of sent optional TLV    : 0

Number of received unknown TLV : 5

 

LLDP agent nearest-nontpmr:

Port status of LLDP            : Enable

Admin status                   : Disable

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 1

Number of received unknown TLV : 0

 

LLDP status information of port 2 [FortyGigE1/0/2]:

LLDP agent nearest-bridge:

Port status of LLDP            : Enable

Admin status                   : RX_Only

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 0

Number of received unknown TLV : 0

 

LLDP agent nearest-nontpmr:

Port status of LLDP            : Enable

Admin status                   : Disable

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 1

Number of received unknown TLV : 0

 

LLDP agent nearest-customer:

Port status of LLDP            : Enable

Admin status                   : Disable

Trap flag                      : No

MED trap flag                  : No

Polling interval               : 0s

Number of LLDP neighbors       : 0

Number of MED neighbors        : 0

Number of CDP neighbors        : 0

Number of sent optional TLV    : 16

Number of received unknown TLV : 0

DCBX configuration example

Network requirements

As shown in Figure 7, in a data center network, interface Ten-GigabitEthernet 1/0/1 of the access switch (Switch A) connects to the FCoE adapter of the data center server (DC server).

Configure Switch A to implement lossless FCoE and FIP frame transmission to the DC server.

 

 

NOTE:

In this example, both Switch A and the DC server support DCBX Rev 1.01.

 

Figure 7 Network diagram

 

Configuration procedure

1.        Enable LLDP and DCBX TLV advertising:

# Enable LLDP globally.

<SwitchA> system-view

[SwitchA] lldp global enable

# Enable LLDP and DCBX TLV advertising on interface Ten-GigabitEthernet 1/0/1.

[SwitchA] interface ten-gigabitethernet 1/0/1

[SwitchA-Ten-GigabitEthernet1/0/1] lldp enable

[SwitchA-Ten-GigabitEthernet1/0/1] lldp tlv-enable dot1-tlv dcbx

2.        Configure the DCBX version as DCBX Rev 1.01 on interface Ten-GigabitEthernet 1/0/1.

[SwitchA-Ten-GigabitEthernet1/0/1] dcbx version rev101

[SwitchA-Ten-GigabitEthernet1/0/1] quit

3.        Configure APP parameters:

# Create Ethernet frame header ACL 4000.

[SwitchA] acl number 4000

# Configure ACL 4000 to permit FCoE frames (protocol number is 0x8906) and FIP frames (protocol number is 0x8914) to pass through.

[SwitchA-acl-ethernetframe-4000] rule permit type 8906 ffff

[SwitchA-acl-ethernetframe-4000] rule permit type 8914 ffff

[SwitchA-acl-ethernetframe-4000] quit

# Create a class named app_c, specify the operator of the class as OR, and use ACL 4000 as the match criterion of the class.

[SwitchA] traffic classifier app_c operator or

[SwitchA-classifier-app_c] if-match acl 4000

[SwitchA-classifier-app_c] quit

# Create a traffic behavior named app_b, and configure the traffic behavior to mark packets with 802.1p priority value 3.

[SwitchA] traffic behavior app_b

[SwitchA-behavior-app_b] remark dot1p 3

[SwitchA-behavior-app_b] quit

# Create a QoS policy named plcy, associate class app_c with traffic behavior app_b in the QoS policy, and apply the association to DCBX.

[SwitchA] qos policy plcy

[SwitchA-qospolicy-plcy] classifier app_c behavior app_b mode dcbx

[SwitchA-qospolicy-plcy] quit

# Apply the policy named plcy to the outgoing traffic of interface Ten-GigabitEthernet 1/0/1.

[SwitchA] interface ten-gigabitethernet 1/0/1

[SwitchA-Ten-GigabitEthernet1/0/1] qos apply policy plcy outbound

[SwitchA-Ten-GigabitEthernet1/0/1] quit

4.        Configure ETS parameters:

# Configure the 802.1p-to-local priority mapping table to map 802.1p priority value 3 to local precedence 3. (This is the default mapping table. You can modify this configuration as needed.)

[SwitchA] qos map-table dot1p-lp

[SwitchA-maptbl-dot1p-lp] import 3 export 3

[SwitchA-maptbl-dot1p-lp] quit

# Configure interface Ten-GigabitEthernet 1/0/1 to trust the 802.1p priority carried in packets.

[SwitchA] interface ten-gigabitethernet 1/0/1

[SwitchA-Ten-GigabitEthernet1/0/1] qos trust dot1p

# Enable byte-count WRR queuing on interface Ten-GigabitEthernet 1/0/1, and configure queue 3 on the interface to use strict priority queuing.

[SwitchA-Ten-GigabitEthernet1/0/1] qos wrr byte-count

[SwitchA-Ten-GigabitEthernet1/0/1] qos wrr 3 group sp

5.        Configure PFC:

# Enable PFC in auto mode on interface Ten-GigabitEthernet 1/0/1.

[SwitchA-Ten-GigabitEthernet1/0/1] priority-flow-control auto

# Enable PFC for 802.1p priority 3.

[SwitchA-Ten-GigabitEthernet1/0/1] priority-flow-control no-drop dot1p 3

Verifying the configuration

# Display the data exchange result on the DC server through the software interface. This example uses the data exchange result for a QLogic adapter on the DC server.

------------------------------------------------------

DCBX Parameters Details for CNA Instance 0 - QLE8142

------------------------------------------------------

 

Mon May 17 10:00:50 2010

 

DCBX TLV (Type-Length-Value) Data

=================================

DCBX Parameter Type and Length

        DCBX Parameter Length: 13

        DCBX Parameter Type: 2

 

DCBX Parameter Information

        Parameter Type: Current

        Pad Byte Present: Yes

        DCBX Parameter Valid: Yes

        Reserved: 0

 

DCBX Parameter Data

        Priority Group ID of Priority 1: 0

        Priority Group ID of Priority 0: 2

 

        Priority Group ID of Priority 3: 15

        Priority Group ID of Priority 2: 1

 

        Priority Group ID of Priority 5: 5

        Priority Group ID of Priority 4: 4

 

        Priority Group ID of Priority 7: 7

        Priority Group ID of Priority 6: 6

 

        Priority Group 0 Percentage: 2

        Priority Group 1 Percentage: 4

        Priority Group 2 Percentage: 6

        Priority Group 3 Percentage: 0

        Priority Group 4 Percentage: 10

        Priority Group 5 Percentage: 18

        Priority Group 6 Percentage: 27

        Priority Group 7 Percentage: 31

 

        Number of Traffic Classes Supported: 8

 

DCBX Parameter Information

        Parameter Type: Remote

        Pad Byte Present: Yes

        DCBX Parameter Valid: Yes

        Reserved: 0

 

DCBX Parameter Data

        Priority Group ID of Priority 1: 0

        Priority Group ID of Priority 0: 2

 

        Priority Group ID of Priority 3: 15

        Priority Group ID of Priority 2: 1

 

        Priority Group ID of Priority 5: 5

        Priority Group ID of Priority 4: 4

 

        Priority Group ID of Priority 7: 7

        Priority Group ID of Priority 6: 6

 

        Priority Group 0 Percentage: 2

        Priority Group 1 Percentage: 4

        Priority Group 2 Percentage: 6

        Priority Group 3 Percentage: 0

        Priority Group 4 Percentage: 10

        Priority Group 5 Percentage: 18

        Priority Group 6 Percentage: 27

        Priority Group 7 Percentage: 31

 

        Number of Traffic Classes Supported: 8

 

DCBX Parameter Information

        Parameter Type: Local

        Pad Byte Present: Yes

        DCBX Parameter Valid: Yes

        Reserved: 0

 

DCBX Parameter Data

        Priority Group ID of Priority 1: 0

        Priority Group ID of Priority 0: 0

 

        Priority Group ID of Priority 3: 1

        Priority Group ID of Priority 2: 0

 

        Priority Group ID of Priority 5: 0

        Priority Group ID of Priority 4: 0

 

        Priority Group ID of Priority 7: 0

        Priority Group ID of Priority 6: 0

 

        Priority Group 0 Percentage: 50

        Priority Group 1 Percentage: 50

        Priority Group 2 Percentage: 0

        Priority Group 3 Percentage: 0

        Priority Group 4 Percentage: 0

        Priority Group 5 Percentage: 0

        Priority Group 6 Percentage: 0

        Priority Group 7 Percentage: 0

 

        Number of Traffic Classes Supported: 2

The output shows that the DC server will use SP queuing (priority group ID 15) for 802.1p priority 3.

DCBX Parameter Type and Length

        DCBX Parameter Length: 2

        DCBX Parameter Type: 3

 

DCBX Parameter Information

        Parameter Type: Current

        Pad Byte Present: No

        DCBX Parameter Valid: Yes

        Reserved: 0

 

DCBX Parameter Data

        PFC Enabled on Priority 0: No

        PFC Enabled on Priority 1: No

        PFC Enabled on Priority 2: No

        PFC Enabled on Priority 3: Yes

        PFC Enabled on Priority 4: No

        PFC Enabled on Priority 5: No

        PFC Enabled on Priority 6: No

        PFC Enabled on Priority 7: No

 

        Number of Traffic Classes Supported: 6

 

DCBX Parameter Information

        Parameter Type: Remote

        Pad Byte Present: No

        DCBX Parameter Valid: Yes

        Reserved: 0

 

DCBX Parameter Data

        PFC Enabled on Priority 0: No

        PFC Enabled on Priority 1: No

        PFC Enabled on Priority 2: No

        PFC Enabled on Priority 3: Yes

        PFC Enabled on Priority 4: No

        PFC Enabled on Priority 5: No

        PFC Enabled on Priority 6: No

        PFC Enabled on Priority 7: No

 

        Number of Traffic Classes Supported: 6

 

DCBX Parameter Information

        Parameter Type: Local

        Pad Byte Present: No

        DCBX Parameter Valid: Yes

        Reserved: 0

 

DCBX Parameter Data

        PFC Enabled on Priority 0: No

        PFC Enabled on Priority 1: No

        PFC Enabled on Priority 2: No

        PFC Enabled on Priority 3: Yes

        PFC Enabled on Priority 4: No

        PFC Enabled on Priority 5: No

        PFC Enabled on Priority 6: No

        PFC Enabled on Priority 7: No

 

        Number of Traffic Classes Supported: 1

The output shows that the DC server will use PFC for 802.1p priority 3.

 
