07-MPLS Configuration Guide

H3C S12500-X & S12500X-AF Switch Series Configuration Guides, Release 113x-6W101

Contents

Configuring basic MPLS
Overview
Basic concepts
MPLS network architecture
LSP establishment
MPLS forwarding
PHP
Protocols and standards
Feature and software version compatibility
MPLS configuration task list
Enabling MPLS
Configuring MPLS MTU
Specifying the label type advertised by the egress
Configuring TTL propagation
Enabling sending of MPLS TTL-expired messages
Enabling SNMP notifications for MPLS
Displaying and maintaining MPLS
Configuring a static LSP
Overview
Feature and software version compatibility
Configuration prerequisites
Configuration procedure
Displaying static LSPs
Static LSP configuration example
Network requirements
Configuration restrictions and guidelines
Configuration procedure
Verifying the configuration
Configuring LDP
Overview
Terminology
LDP messages
LDP operation
Label distribution and control
LDP GR
LDP NSR
LDP-IGP synchronization
LDP FRR
Protocols
Feature and software version compatibility
LDP configuration task list
Enabling LDP
Enabling LDP globally
Enabling LDP on an interface
Configuring Hello parameters
Configuring LDP session parameters
Configuring LDP backoff
Configuring LDP MD5 authentication
Configuring an LSP generation policy
Configuring the LDP label distribution control mode
Configuring a label advertisement policy
Configuring a label acceptance policy
Configuring LDP loop detection
Configuring LDP GR
Configuring LDP NSR
Configuring LDP-IGP synchronization
Configuring LDP-OSPF synchronization
Configuring LDP-ISIS synchronization
Configuring LDP FRR
Resetting LDP sessions
Enabling SNMP notifications for LDP
Displaying and maintaining LDP
LDP configuration examples
LDP LSP configuration example
Label acceptance control configuration example
Label advertisement control configuration example
LDP FRR configuration example
Configuring MPLS TE
Overview
TE and MPLS TE
MPLS TE basic concepts
Static CRLSP establishment
Dynamic CRLSP establishment
Traffic forwarding
Make-before-break
Route pinning
Tunnel reoptimization
Automatic bandwidth adjustment
CRLSP backup
FRR
DiffServ-aware TE
Bidirectional MPLS TE tunnel
Protocols and standards
Feature and software version compatibility
MPLS TE configuration task list
Enabling MPLS TE
Configuring a tunnel interface
Configuring DS-TE
Configuring an MPLS TE tunnel to use a static CRLSP
Configuring an MPLS TE tunnel to use a dynamic CRLSP
Configuration task list
Configuring MPLS TE attributes for a link
Advertising link TE attributes by using IGP TE extension
Configuring MPLS TE tunnel constraints
Establishing an MPLS TE tunnel by using RSVP-TE
Controlling CRLSP path selection
Controlling MPLS TE tunnel setup
Configuring traffic forwarding
Configuring static routing to direct traffic to an MPLS TE tunnel
Configuring automatic route advertisement to direct traffic to an MPLS TE tunnel
Configuring a bidirectional MPLS TE tunnel
Configuring CRLSP backup
Configuring MPLS TE FRR
Enabling FRR
Configuring a bypass tunnel on the PLR
Configuring node fault detection
Configuring the optimal bypass tunnel selection interval
Displaying and maintaining MPLS TE
MPLS TE configuration examples
Establishing an MPLS TE tunnel over a static CRLSP
Establishing an MPLS TE tunnel with RSVP-TE
Establishing an inter-AS MPLS TE tunnel with RSVP-TE
Bidirectional MPLS TE tunnel configuration example
CRLSP backup configuration example
Manual bypass tunnel for FRR configuration example
Auto FRR configuration example
IETF DS-TE configuration example
Troubleshooting MPLS TE
No TE LSA generated
Configuring a static CRLSP
Overview
Feature and software version compatibility
Configuration procedure
Displaying static CRLSPs
Static CRLSP configuration example
Configuring RSVP
Overview
RSVP messages
CRLSP setup procedure
RSVP refresh mechanism
RSVP authentication
RSVP GR
Protocols and standards
Feature and software version compatibility
RSVP configuration task list
Enabling RSVP
Configuring RSVP refresh
Configuring RSVP Srefresh and reliable RSVP message delivery
Configuring RSVP hello extension
Configuring RSVP authentication
Specifying a DSCP value for outgoing RSVP packets
Configuring RSVP GR
Enabling BFD for RSVP
Displaying and maintaining RSVP
RSVP configuration examples
Establishing an MPLS TE tunnel with RSVP-TE
RSVP GR configuration example
Configuring tunnel policies
Overview
Feature and software version compatibility
Configuring a tunnel policy
Configuration guidelines
Configuration procedure
Displaying tunnel information
Tunnel policy configuration examples
Preferred tunnel configuration example
Exclusive tunnel configuration example
Tunnel selection order configuration example
Preferred tunnel and tunnel selection order configuration example
Configuring MPLS L3VPN
Overview
Basic MPLS L3VPN architecture
MPLS L3VPN concepts
MPLS L3VPN route advertisement
MPLS L3VPN packet forwarding
MPLS L3VPN networking schemes
Inter-AS VPN
Carrier's carrier
Nested VPN
HoVPN
OSPF VPN extension
BGP AS number substitution
MPLS L3VPN FRR
Protocols and standards
Feature and software version compatibility
MPLS L3VPN configuration task list
Configuring basic MPLS L3VPN
Configuration prerequisites
Configuring VPN instances
Configuring routing between a PE and a CE
Configuring routing between PEs
Configuring BGP VPNv4 route control
Configuring inter-AS VPN
Configuring inter-AS option A
Configuring inter-AS option B
Configuring inter-AS option C
Configuring nested VPN
Configuring HoVPN
Configuring an OSPF sham link
Configuring a loopback interface
Redistributing the loopback interface route
Creating a sham link
Specifying the VPN label processing mode on the egress PE
Configuring BGP AS number substitution
Configuring MPLS L3VPN FRR
Enabling SNMP notifications for MPLS L3VPN
Enabling logging for BGP route flapping
Displaying and maintaining MPLS L3VPN
MPLS L3VPN configuration examples
Configuring basic MPLS L3VPN
Configuring a hub-spoke network
Configuring MPLS L3VPN inter-AS option A
Configuring MPLS L3VPN inter-AS option B
Configuring MPLS L3VPN inter-AS option C
Configuring MPLS L3VPN carrier's carrier
Configuring nested VPN
Configuring HoVPN
Configuring an OSPF sham link
Configuring BGP AS number substitution
Configuring MPLS L3VPN FRR through VPNv4 route backup for a VPNv4 route
Configuring MPLS L3VPN FRR through VPNv4 route backup for an IPv4 route
Configuring MPLS L3VPN FRR through IPv4 route backup for a VPNv4 route
Configuring MCE
MPLS L3VPN overview
Basic MPLS L3VPN architecture
MPLS L3VPN concepts
MCE overview
MCE configuration task list
Configuring VPN instances
Creating a VPN instance
Associating a VPN instance with an interface
Configuring route related attributes for a VPN instance
Configuring routing on an MCE
Configuring routing between an MCE and a VPN site
Configuring routing between an MCE and a PE
Displaying and maintaining MCE
MCE configuration examples
Configuring the MCE that uses OSPF to advertise VPN routes to the PE
Configuring the MCE that uses EBGP to advertise VPN routes to the PE
Index

 


Configuring basic MPLS

Multiprotocol Label Switching (MPLS) provides connection-oriented label switching over connectionless IP backbone networks. It integrates both the flexibility of IP routing and the simplicity of Layer 2 switching.

Unless otherwise specified, the term "interface" in this chapter refers to a Layer 3 interface. It can be a VLAN interface or a Layer 3 Ethernet interface. Layer 3 Ethernet interfaces refer to the Ethernet interfaces that operate in Layer 3 mode. For information about switching the Ethernet interface operating mode, see Layer 2—LAN Switching Configuration Guide.

Overview

MPLS has the following advantages:

·          High speed and efficiency—MPLS uses short, fixed-length labels to forward packets, avoiding complicated routing table lookups.

·          Multiprotocol support—MPLS resides between the link layer and the network layer. It can work over various link layer protocols (for example, PPP, ATM, frame relay, and Ethernet) to provide connection-oriented services for various network layer protocols (for example, IPv4 and IPX).

·          Good scalability—The connection-oriented switching and multilayer label stack features enable MPLS to deliver various extended services, such as VPN, traffic engineering, and QoS.

Basic concepts

FEC

MPLS groups packets with the same characteristics (such as packets with the same destination or service class) into a forwarding equivalence class (FEC). Packets of the same FEC are handled in the same way on an MPLS network.

Label

A label uniquely identifies an FEC and has local significance.

Figure 1 Format of a label

 

A label is encapsulated between the Layer 2 header and Layer 3 header of a packet. It is four bytes long and consists of the following fields:

·          Label—20-bit label value.

·          TC—3-bit traffic class, used for QoS. It is also called the EXP field.

·          S—1-bit bottom of stack flag. A label stack can have multiple labels. The label nearest to the Layer 2 header is called the top label, and the label nearest to the Layer 3 header is called the bottom label. The S field is set to 1 if the label is the bottom label and set to 0 if not.

·          TTL—8-bit time to live field used for routing loop prevention.
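The four fields above pack into one 32-bit label stack entry as defined in RFC 3032. The following Python sketch (illustrative only, not part of the guide) shows the bit layout:

```python
def encode_label(label, tc, s, ttl):
    """Pack an MPLS label stack entry: Label(20 bits) | TC(3) | S(1) | TTL(8)."""
    assert 0 <= label < 2**20 and 0 <= tc < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def decode_label(entry):
    """Unpack a 32-bit label stack entry into (label, tc, s, ttl)."""
    return (entry >> 12) & 0xFFFFF, (entry >> 9) & 0x7, (entry >> 8) & 0x1, entry & 0xFF

# A bottom-of-stack (S=1) label 40 with TC 0 and TTL 64:
entry = encode_label(40, 0, 1, 64)
assert decode_label(entry) == (40, 0, 1, 64)
```

Because the S bit is set only on the label nearest the Layer 3 header, a receiver can walk the stack entry by entry until it sees S=1.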

LSR

A router that performs MPLS forwarding is a label switching router (LSR).

LSP

A label switched path (LSP) is the path along which packets of an FEC travel through an MPLS network.

An LSP is a unidirectional packet forwarding path. Two neighboring LSRs are called the upstream LSR and downstream LSR along the direction of an LSP. As shown in Figure 2, LSR B is the downstream LSR of LSR A, and LSR A is the upstream LSR of LSR B.

Figure 2 Label switched path

 

LFIB

The Label Forwarding Information Base (LFIB) on an MPLS network functions like the Forwarding Information Base (FIB) on an IP network. When an LSR receives a labeled packet, it searches the LFIB to obtain information for forwarding the packet, such as the label operation type, the outgoing label value, and the next hop.

Control plane and forwarding plane

An MPLS node consists of a control plane and a forwarding plane.

·          Control plane—Assigns labels, distributes FEC-label mappings to neighbor LSRs, creates the LFIB, and establishes and removes LSPs.

·          Forwarding plane—Forwards packets according to the LFIB.

MPLS network architecture

Figure 3 MPLS network architecture

 

An MPLS network has the following types of LSRs:

·          Ingress LSR—Ingress LSR of packets. It labels packets entering the MPLS network.

·          Transit LSR—Intermediate LSRs in the MPLS network. The transit LSRs on an LSP forward packets to the egress LSR according to labels.

·          Egress LSR—Egress LSR of packets. It removes labels from packets and forwards the packets to their destination networks.

LSP establishment

LSPs include static and dynamic LSPs.

·          Static LSP—To establish a static LSP, you must configure an LFIB entry on each LSR along the LSP. Establishing static LSPs consumes fewer resources than establishing dynamic LSPs, but static LSPs cannot automatically adapt to network topology changes. Therefore, static LSPs are suitable for small-scale networks with simple, stable topologies.

·          Dynamic LSP—Established by a label distribution protocol (also called an MPLS signaling protocol). A label distribution protocol classifies FECs, distributes FEC-label mappings, and establishes and maintains LSPs. Label distribution protocols include protocols designed specifically for label distribution, such as the Label Distribution Protocol (LDP), and protocols extended to support label distribution, such as MP-BGP and RSVP-TE.

In this document, the term "label distribution protocols" refers to all protocols for label distribution. The term "LDP" refers to the RFC 5036 LDP.

A dynamic LSP is established in the following steps:

1.        A downstream LSR classifies FECs according to destination addresses.

2.        The downstream LSR assigns a label for each FEC, and distributes the FEC-label binding to its upstream LSR.

3.        The upstream LSR establishes an LFIB entry for the FEC according to the binding information.

After all LSRs along the LSP establish an LFIB entry for the FEC, a dynamic LSP is established for the packets of this FEC.
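The three steps above can be modeled as a walk from the egress back to the ingress: each LSR assigns a local label for the FEC and advertises it upstream, and the upstream LSR records that label as its outgoing label. A toy sketch with hypothetical label values (this is not the LDP protocol itself):

```python
def build_lsp(lsrs, fec, label_alloc):
    """Simulate downstream-to-upstream label distribution along a path.

    lsrs is ordered ingress -> egress. label_alloc maps each LSR to the
    label it assigns for the FEC (hypothetical values)."""
    lfib = {}
    advertised = None  # label most recently advertised by a downstream LSR
    for lsr in reversed(lsrs):
        incoming = label_alloc[lsr]
        # Install an LFIB entry: forward packets carrying `incoming`
        # with the label the downstream neighbor advertised.
        lfib[lsr] = {"fec": fec, "in": incoming, "out": advertised}
        advertised = incoming  # advertise our label to the upstream LSR
    return lfib

lfib = build_lsp(["A", "B", "C"], "21.1.1.0/24", {"A": 1024, "B": 30, "C": 50})
# Ingress A pushes B's label 30; transit B swaps 30 -> 50; egress C pops 50.
```

The egress entry has no outgoing label, which corresponds to the pop (or PHP) behavior described later in this chapter.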

Figure 4 Dynamic LSP establishment

 

MPLS forwarding

Figure 5 MPLS forwarding

 

As shown in Figure 5, a packet is forwarded over the MPLS network in the following steps:

1.        Router B (the ingress LSR) receives a packet with no label. Then, it performs the following operations:

a.    Identifies the FIB entry that matches the destination address of the packet.

b.    Adds the outgoing label (40, in this example) to the packet.

c.    Forwards the labeled packet out of the interface VLAN-interface 20 to the next hop LSR Router C.

2.        When receiving the labeled packet, Router C processes the packet as follows:

a.    Identifies the LFIB entry that has an incoming label of 40.

b.    Uses the outgoing label 50 of the entry to replace label 40 in the packet.

c.    Forwards the labeled packet out of the outgoing interface VLAN-interface 30 to the next hop LSR Router D.

3.        When receiving the labeled packet, Router D (the egress) processes the packet as follows:

a.    Identifies the LFIB entry that has an incoming label of 50.

b.    Removes the label from the packet.

c.    Forwards the packet out of the outgoing interface VLAN-interface 40 to the next hop LSR Router E.

If the LFIB entry records no outgoing interface or next hop information, Router D performs the following operations:

a.    Identifies the FIB entry by the IP header.

b.    Forwards the packet according to the FIB entry.
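The push/swap/pop sequence in this example can be traced with a short sketch. The labels and next hops come from Figure 5; the dict model is illustrative, not the device's data structures:

```python
# LFIB entries for the Figure 5 example: incoming label -> (action, out label, next hop).
LFIB = {
    "Router C": {40: ("swap", 50, "Router D")},
    "Router D": {50: ("pop", None, "Router E")},
}

def forward(label, router):
    """Look up the incoming label and return (outgoing label, next hop)."""
    action, out_label, next_hop = LFIB[router][label]
    return out_label, next_hop

# Router B (the ingress) consults its FIB and pushes label 40. Then:
assert forward(40, "Router C") == (50, "Router D")    # Router C swaps 40 -> 50
assert forward(50, "Router D") == (None, "Router E")  # Router D (egress) pops
```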

PHP

An egress node must perform two forwarding table lookups to forward a packet:

·          Two LFIB lookups (if the packet has more than one label).

·          One LFIB lookup and one FIB lookup (if the packet has only one label).

The penultimate hop popping (PHP) feature can pop the label at the penultimate node, so the egress node only performs one table lookup.

A PHP-capable egress node sends the penultimate node an implicit null label (value 3). This label never appears in the label stack of packets. If an incoming packet matches an LFIB entry that contains the implicit null label, the penultimate node pops the top label of the packet and forwards the packet to the egress LSR. The egress LSR directly forwards the packet.

Sometimes, the egress node must use the TC field in the label to apply QoS. To keep the TC information, you can configure the egress node to send the penultimate node an explicit null label (value 0). If an incoming packet matches an LFIB entry that contains the explicit null label, the penultimate hop replaces the top label value with 0 and forwards the packet to the egress node. The egress node reads the TC information, pops the label, and forwards the packet.
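The two null-label behaviors can be sketched as a single decision at the penultimate hop (the constants are the reserved label values from the text; the function is illustrative, not device code):

```python
EXPLICIT_NULL, IMPLICIT_NULL = 0, 3  # reserved MPLS label values

def penultimate_hop(top_label, advertised):
    """Return the top label to send to the egress, or None if it was popped."""
    if advertised == IMPLICIT_NULL:
        return None            # PHP: pop; egress forwards by inner label or IP
    if advertised == EXPLICIT_NULL:
        return EXPLICIT_NULL   # swap to 0; the TC field survives to the egress
    return advertised          # normal label: plain swap

assert penultimate_hop(40, IMPLICIT_NULL) is None
assert penultimate_hop(40, EXPLICIT_NULL) == 0
assert penultimate_hop(40, 50) == 50
```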

Protocols and standards

·          RFC 3031, Multiprotocol Label Switching Architecture

·          RFC 3032, MPLS Label Stack Encoding

·          RFC 5462, Multiprotocol Label Switching (MPLS) Label Stack Entry: "EXP" Field Renamed to "Traffic Class" Field

Feature and software version compatibility

The basic MPLS feature is available in Release 1138P01 and later versions.

MPLS configuration task list

Tasks at a glance

(Required.) Enabling MPLS

(Optional.) Configuring MPLS MTU

(Optional.) Specifying the label type advertised by the egress

(Optional.) Configuring TTL propagation

(Optional.) Enabling sending of MPLS TTL-expired messages

(Optional.) Enabling SNMP notifications for MPLS

 

Enabling MPLS

Before you enable MPLS, perform the following tasks:

·          Configure link layer protocols to ensure connectivity at the link layer.

·          Configure IP addresses for interfaces to ensure IP connectivity between neighboring nodes.

·          Configure static routes or an IGP to ensure IP connectivity among the LSRs.

To enable MPLS:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure an LSR ID for the local node.

mpls lsr-id lsr-id

By default, no LSR ID is configured.

An LSR ID must be in IP address format and must be unique within the MPLS network. As a best practice, use the IP address of a loopback interface as the LSR ID.

3.       Enter the view of the interface that needs to perform MPLS forwarding.

interface interface-type interface-number

N/A

4.       Enable MPLS for the interface.

mpls enable

By default, MPLS is disabled on an interface.

 

Configuring MPLS MTU

MPLS inserts the label stack between the link layer header and network layer header of each packet. To make sure the size of MPLS labeled packets is smaller than the MTU of an interface, configure an MPLS MTU on the interface.

MPLS compares each MPLS packet against the interface MPLS MTU. When the packet exceeds the MPLS MTU:

·          If fragmentation is allowed, MPLS does the following:

a.    Removes the label stack from the packet.

b.    Fragments the IP packet. The length of a fragment is the MPLS MTU minus the length of the label stack.

c.    Adds the label stack to each fragment, and forwards the fragments.

·          If fragmentation is not allowed, the LSR drops the packet.
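The fragment size rule above can be checked with quick arithmetic. The sketch below uses hypothetical values and ignores IP header details; it is not the device's exact fragmentation algorithm:

```python
def mpls_fragment_payloads(ip_len, mpls_mtu, stack_depth):
    """Split an IP packet so each fragment plus its label stack fits the MPLS MTU.

    Each MPLS label is 4 bytes, so the usable IP length per fragment is the
    MPLS MTU minus the label stack length."""
    per_frag = mpls_mtu - 4 * stack_depth
    if per_frag <= 0:
        raise ValueError("label stack leaves no room for payload")
    frags = []
    while ip_len > 0:
        frags.append(min(ip_len, per_frag))
        ip_len -= frags[-1]
    return frags

# A 3000-byte IP packet, MPLS MTU 1500, one label: 1496-byte fragments.
assert mpls_fragment_payloads(3000, 1500, 1) == [1496, 1496, 8]
```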

To configure an MPLS MTU for an interface:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure an MPLS MTU for the interface.

mpls mtu value

By default, no MPLS MTU is configured on an interface.

 

The following applies when an interface handles MPLS packets:

·          If the MPLS MTU of an interface is greater than the MTU of the interface, data forwarding might fail on the interface.

·          If you do not configure the MPLS MTU of an interface, fragmentation of MPLS packets is based on the MTU of the interface without considering MPLS labels. An MPLS fragment might be larger than the interface MTU and be dropped.

Specifying the label type advertised by the egress

In an MPLS network, an egress can advertise the following types of labels:

·          Implicit null label with a value of 3.

·          Explicit null label with a value of 0.

·          Non-null label. The value range for a non-null label is 16 to 1048575.

For LSPs established by a label distribution protocol, the label advertised by the egress determines how the penultimate hop processes a labeled packet.

·          If the egress advertises an implicit null label, the penultimate hop directly pops the top label of a matching packet.

·          If the egress advertises an explicit null label, the penultimate hop swaps the top label value of a matching packet with the explicit null label.

·          If the egress advertises a non-null label (normal label), the penultimate hop swaps the top label of a matching packet with the specific label assigned by the egress.

Configuration guidelines

As a best practice, configure the egress to advertise an implicit null label to the penultimate hop if the penultimate hop supports PHP. If you want to simplify packet forwarding on the egress but keep labels to determine QoS policies, configure the egress to advertise an explicit null label to the penultimate hop. Use non-null labels only in particular scenarios. For example, when OAM is configured on the egress, the egress can get the OAM function entity status only through non-null labels.

As a penultimate hop, the device accepts the implicit null label, explicit null label, or normal label advertised by the egress device.

For LDP LSPs, the mpls label advertise command triggers LDP to delete the LSPs established before the command is executed and re-establishes new LSPs.

For BGP LSPs, the mpls label advertise command takes effect only for the BGP LSPs established after the command is executed. To apply the new setting to BGP LSPs established before the command is executed, delete the routes corresponding to the BGP LSPs, and then redistribute the routes.

Configuration procedure

To specify the type of label that the egress node will advertise to the penultimate hop:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Specify the label type advertised by the egress to the penultimate hop.

mpls label advertise { explicit-null | implicit-null | non-null }

By default, an egress advertises an implicit null label to the penultimate hop.

 

Configuring TTL propagation

When TTL propagation is enabled, the ingress node copies the TTL value of an IP packet to the TTL field of the label. Each LSR on the LSP decreases the label TTL value by 1. The LSR that pops the label copies the remaining label TTL value back to the IP TTL of the packet, so the IP TTL value can reflect how many hops the packet has traversed in the MPLS network. The IP tracert facility can show the real path along which the packet has traveled.

Figure 6 TTL propagation

 

When TTL propagation is disabled, the ingress node sets the label TTL to 255. Each LSR on the LSP decreases the label TTL value by 1. The LSR that pops the label does not change the IP TTL value when popping the label. Therefore, the MPLS backbone nodes are invisible to user networks, and the IP tracert facility cannot show the real path in the MPLS network.

Figure 7 Without TTL propagation

 

Follow these guidelines when you configure TTL propagation:

·          As a best practice, set the same TTL processing mode on all LSRs of an LSP.

·          To enable TTL propagation for a VPN, you must enable it on all PE devices in the VPN, so that you can get the same traceroute result (hop count) from those PEs.

·          After TTL propagation is disabled, the device cannot perform correct DSCP-to-EXP mapping for IP packets entering the MPLS network.

·          After TTL propagation is enabled or disabled, execute the reset mpls ldp command to make the configuration take effect. For more information about the reset mpls ldp command, see MPLS Command Reference.
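The difference between the two modes shows up in the IP TTL that a traceroute source observes after the packet crosses the LSP. A toy model (not device code):

```python
def traverse(ip_ttl, lsp_hops, propagate):
    """Return the IP TTL after an IP packet crosses an MPLS LSP.

    lsp_hops is the number of label-switched hops that decrement the label TTL."""
    label_ttl = ip_ttl if propagate else 255  # ingress copies IP TTL, or uses 255
    label_ttl -= lsp_hops                     # each LSR decrements the label TTL
    if propagate:
        return label_ttl   # the LSR that pops the label copies the TTL back
    return ip_ttl          # without propagation, the IP TTL is untouched

# With propagation, the 3 MPLS hops are visible to tracert; without, hidden.
assert traverse(64, 3, propagate=True) == 61
assert traverse(64, 3, propagate=False) == 64
```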

To enable TTL propagation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable TTL propagation.

mpls ttl propagate { public | vpn }

By default, TTL propagation is enabled only for public-network packets.

This command affects only the propagation between IP TTL and label TTL. Within an MPLS network, TTL is always copied between the labels of an MPLS packet.

 

Enabling sending of MPLS TTL-expired messages

This feature enables an LSR to generate an ICMP TTL-expired message upon receiving an MPLS packet with a TTL of 1. If the MPLS packet has only one label, the LSR sends the ICMP TTL-expired message back to the source through IP routing. If the MPLS packet has multiple labels, the LSR forwards the ICMP TTL-expired message along the LSP to the egress node, which then sends the message back to the source.

To enable sending of MPLS TTL-expired messages:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable sending of MPLS TTL-expired messages.

mpls ttl expiration enable

By default, this feature is enabled.

 

Enabling SNMP notifications for MPLS

This feature enables MPLS to generate SNMP notifications. The generated SNMP notifications are sent to the SNMP module.

For more information about SNMP notifications, see Network Management and Monitoring Configuration Guide.

To enable SNMP notifications for MPLS:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable SNMP notifications for MPLS.

snmp-agent trap enable mpls

By default, SNMP notifications for MPLS are enabled.

 

Displaying and maintaining MPLS

Execute display commands in any view.

 

Task

Command

Display MPLS interface information.

display mpls interface [ interface-type interface-number ]

Display usage information about MPLS labels.

display mpls label { label-value1 [ to label-value2 ] | all }

Display LSP information.

display mpls lsp [ egress | in-label label-value | ingress | outgoing-interface interface-type interface-number | protocol { bgp | ldp | local | rsvp-te | static | static-cr } | transit ] [ vpn-instance vpn-instance-name ] [ ipv4-dest mask-length ] [ verbose ]

Display MPLS Nexthop Information Base (NIB) information.

display mpls nib [ nib-id ]

Display usage information about NIDs.

display mpls nid [ nid-value1 [ to nid-value2 ] ]

Display LSP statistics.

display mpls lsp statistics

Display MPLS summary information.

display mpls summary

Display ILM entries (in standalone mode).

display mpls forwarding ilm [ label ] [ slot slot-number ]

Display ILM entries (in IRF mode).

display mpls forwarding ilm [ label ] [ chassis chassis-number slot slot-number ]

Display NHLFE entries (in standalone mode).

display mpls forwarding nhlfe [ nid ] [ slot slot-number ]

Display NHLFE entries (in IRF mode).

display mpls forwarding nhlfe [ nid ] [ chassis chassis-number slot slot-number ]

 


Configuring a static LSP

Overview

A static label switched path (LSP) is established by manually specifying the incoming label and outgoing label on each node (ingress, transit, or egress node) of the forwarding path.

Static LSPs consume fewer resources, but they cannot automatically adapt to network topology changes. Therefore, static LSPs are suitable for small and stable networks with simple topologies.

Follow these guidelines to establish a static LSP:

·          The ingress node performs the following operations:

a.    Determines an FEC for a packet according to the destination address.

b.    Adds the label for that FEC into the packet.

c.    Forwards the packet to the next hop or out of the outgoing interface.

Therefore, on the ingress node, you must specify the outgoing label for the destination address (the FEC) and the next hop or the outgoing interface.

·          A transit node swaps the label carried in a received packet with a specific label, and forwards the packet to the next hop or out of the outgoing interface. Therefore, on each transit node, you must specify the incoming label, the outgoing label, and the next hop or the outgoing interface.

·          If the penultimate hop popping function is not configured, an egress node pops the incoming label of a packet, and then forwards the packet based on the inner label (label forwarding) or the IP header (IP forwarding). Therefore, on the egress node, you only need to specify the incoming label.

·          The outgoing label specified on an LSR must be the same as the incoming label specified on the directly connected downstream LSR.
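The last rule can be validated mechanically before you type any commands. A sketch that checks a planned static LSP for in/out label consistency (the hop labels below are taken from the AtoC example later in this chapter):

```python
def validate_static_lsp(hops):
    """Check that each LSR's outgoing label equals the next LSR's incoming label.

    hops is an ordered list, ingress to egress, of dicts with 'in'/'out' labels."""
    for upstream, downstream in zip(hops, hops[1:]):
        if upstream["out"] != downstream["in"]:
            return False
    return True

# Ingress out-label 30, transit 30 -> 50, egress in-label 50: consistent.
lsp = [{"out": 30}, {"in": 30, "out": 50}, {"in": 50}]
assert validate_static_lsp(lsp)
# A transit in-label of 31 would break the LSP.
assert not validate_static_lsp([{"out": 30}, {"in": 31, "out": 50}, {"in": 50}])
```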

Feature and software version compatibility

The static LSP feature is available in Release 1138P01 and later versions.

Configuration prerequisites

Before you configure a static LSP, perform the following tasks:

·          Identify the ingress node, transit nodes, and egress node of the LSP.

·          Enable MPLS on all interfaces that participate in MPLS forwarding. For more information, see "Configuring basic MPLS."

·          Make sure the ingress node has a route to the destination address of the LSP. This is not required on transit and egress nodes.

Configuration procedure

To configure a static LSP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the ingress node of the static LSP.

static-lsp ingress lsp-name destination dest-addr { mask | mask-length } { nexthop next-hop-addr | outgoing-interface interface-type interface-number } out-label out-label

If you specify a next hop for the static LSP, make sure the ingress node has an active route to the specified next hop address.

3.       Configure the transit node of the static LSP.

static-lsp transit lsp-name in-label in-label { nexthop next-hop-addr | outgoing-interface interface-type interface-number } out-label out-label

If you specify a next hop for the static LSP, make sure the transit node has an active route to the specified next hop address.

4.       Configure the egress node of the static LSP.

static-lsp egress lsp-name in-label in-label

You do not need to configure this command if the outgoing label configured on the penultimate hop of the static LSP is 0 or 3.

 

Displaying static LSPs

Execute display commands in any view.

 

Task

Command

Display static LSP information.

display mpls static-lsp [ lsp-name lsp-name ]

 

Static LSP configuration example

Network requirements

Switch A, Switch B, and Switch C all support MPLS.

Establish static LSPs between Switch A and Switch C, so that subnets 11.1.1.0/24 and 21.1.1.0/24 can access each other over MPLS.

Figure 8 Network diagram

 

Configuration restrictions and guidelines

·          For an LSP, the outgoing label specified on an LSR must be identical with the incoming label specified on the downstream LSR.

·          LSPs are unidirectional. You must configure an LSP for each direction of the data forwarding path.

·          A route to the destination address of an LSP must be available on the ingress node of that LSP, but not on transit nodes. In this example, Switch A and Switch C each act as the ingress node of one LSP, so each needs a static route. You do not need to configure a routing protocol to ensure IP connectivity among all switches.

Configuration procedure

1.        Create VLANs and configure IP addresses for all interfaces, including the loopback interfaces, as shown in Figure 8. (Details not shown.)

2.        Configure a static route to the destination address of each LSP:

# On Switch A, configure a static route to network 21.1.1.0/24.

<SwitchA> system-view

[SwitchA] ip route-static 21.1.1.0 24 10.1.1.2

# On Switch C, configure a static route to network 11.1.1.0/24.

<SwitchC> system-view

[SwitchC] ip route-static 11.1.1.0 255.255.255.0 20.1.1.1

3.        Configure basic MPLS on the switches:

# Configure Switch A.

[SwitchA] mpls lsr-id 1.1.1.9

[SwitchA] interface vlan-interface 2

[SwitchA-Vlan-interface2] mpls enable

[SwitchA-Vlan-interface2] quit

# Configure Switch B.

[SwitchB] mpls lsr-id 2.2.2.9

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls enable

[SwitchB-Vlan-interface2] quit

[SwitchB] interface vlan-interface 3

[SwitchB-Vlan-interface3] mpls enable

[SwitchB-Vlan-interface3] quit

# Configure Switch C.

[SwitchC] mpls lsr-id 3.3.3.9

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] mpls enable

[SwitchC-Vlan-interface3] quit

4.        Configure a static LSP from Switch A to Switch C:

# Configure the LSP ingress node, Switch A.

[SwitchA] static-lsp ingress AtoC destination 21.1.1.0 24 nexthop 10.1.1.2 out-label 30

# Configure the LSP transit node, Switch B.

[SwitchB] static-lsp transit AtoC in-label 30 nexthop 20.1.1.2 out-label 50

# Configure the LSP egress node, Switch C.

[SwitchC] static-lsp egress AtoC in-label 50

5.        Configure a static LSP from Switch C to Switch A:

# Configure the LSP ingress node, Switch C.

[SwitchC] static-lsp ingress CtoA destination 11.1.1.0 24 nexthop 20.1.1.1 out-label 40

# Configure the LSP transit node, Switch B.

[SwitchB] static-lsp transit CtoA in-label 40 nexthop 10.1.1.1 out-label 70

# Configure the LSP egress node, Switch A.

[SwitchA] static-lsp egress CtoA in-label 70

Verifying the configuration

# Display static LSP information on switches. This example uses Switch A.

[SwitchA] display mpls static-lsp

Total: 2

Name            FEC                In/Out Label Nexthop/Out Interface    State

AtoC            21.1.1.0/24        NULL/30      10.1.1.2                 Up

CtoA            -/-                70/NULL      -                        Up

# Test the connectivity of the LSP from Switch A to Switch C.

[SwitchA] ping mpls -a 11.1.1.1 ipv4 21.1.1.0 24

MPLS Ping FEC: 21.1.1.0/24 : 100 data bytes

100 bytes from 20.1.1.2: Sequence=1 time=4 ms

100 bytes from 20.1.1.2: Sequence=2 time=1 ms

100 bytes from 20.1.1.2: Sequence=3 time=1 ms

100 bytes from 20.1.1.2: Sequence=4 time=1 ms

100 bytes from 20.1.1.2: Sequence=5 time=1 ms

 

--- FEC: 21.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/1/4 ms

# Test the connectivity of the LSP from Switch C to Switch A.

[SwitchC] ping mpls -a 21.1.1.1 ipv4 11.1.1.0 24

MPLS Ping FEC: 11.1.1.0/24 : 100 data bytes

100 bytes from 10.1.1.1: Sequence=1 time=5 ms

100 bytes from 10.1.1.1: Sequence=2 time=1 ms

100 bytes from 10.1.1.1: Sequence=3 time=1 ms

100 bytes from 10.1.1.1: Sequence=4 time=1 ms

100 bytes from 10.1.1.1: Sequence=5 time=1 ms

 

--- FEC: 11.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/1/5 ms


Configuring LDP

Overview

The Label Distribution Protocol (LDP) dynamically distributes FEC-label mapping information between LSRs to establish LSPs.

Terminology

LDP session

Two LSRs establish a TCP-based LDP session to exchange FEC-label mappings.

LDP peer

Two LSRs that use LDP to exchange FEC-label mappings are LDP peers.

Label spaces and LDP identifiers

Label spaces include the following types:

·          Per-interface label space—Each interface uses a single, independent label space. Different interfaces can use the same label values.

·          Per-platform label space—Each LSR uses a single label space. The device only supports the per-platform label space.

A six-byte LDP Identifier (LDP ID) identifies a label space on an LSR. It is in the format of <LSR ID>:<label space number>, where:

·          The LSR ID takes four bytes to identify the LSR.

·          The label space number takes two bytes to identify a label space within the LSR.

A label space number of 0 indicates that the label space is a per-platform label space. A label space number other than 0 indicates a per-interface label space.
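The six-byte layout described above can be illustrated with a short sketch. The following Python functions are illustrative only (the names are not part of any product API); they pack and unpack an LDP ID as a 4-byte LSR ID followed by a 2-byte label space number:

```python
import ipaddress
import struct

def encode_ldp_id(lsr_id: str, label_space: int) -> bytes:
    """Pack an LDP ID: a 4-byte LSR ID followed by a 2-byte label space number."""
    return ipaddress.IPv4Address(lsr_id).packed + struct.pack("!H", label_space)

def decode_ldp_id(data: bytes) -> str:
    """Render a 6-byte LDP ID in <LSR ID>:<label space number> format."""
    lsr_id = str(ipaddress.IPv4Address(data[:4]))
    (label_space,) = struct.unpack("!H", data[4:6])
    return f"{lsr_id}:{label_space}"
```

For example, an LSR with LSR ID 1.1.1.9 using the per-platform label space has the LDP ID 1.1.1.9:0.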

FECs and FEC-label mappings

MPLS groups packets with the same characteristics (such as the same destination or service class) into a class, called an "FEC." The packets of the same FEC are handled in the same way on an MPLS network.

LDP can classify FECs by destination IP address.

An LSR assigns a label for a FEC and advertises the FEC-label mapping, or FEC-label binding, to its peers in a Label Mapping message.

LDP messages

LDP mainly uses the following types of messages:

·          Discovery messages—Declare and maintain the presence of LSRs, such as Hello messages.

·          Session messages—Establish, maintain, and terminate sessions between LDP peers, such as Initialization messages used for parameter negotiation and Keepalive messages used to maintain sessions.

·          Advertisement messages—Create, alter, and remove FEC-label mappings, such as Label Mapping messages used to advertise FEC-label mappings.

·          Notification messages—Provide advisory information and notify errors, such as Notification messages.

LDP uses UDP to transport discovery messages for efficiency, and uses TCP to transport session, advertisement, and notification messages for reliability.

LDP operation

LDP operates in the following phases:

Discovering and maintaining LDP peers

The device supports only the Basic Discovery mechanism in the current software release. With Basic Discovery, an LDP-enabled LSR sends Link Hello messages to multicast address 224.0.0.2, which identifies all routers on the subnet. All directly connected LSRs can discover the LSR and establish a hello adjacency with it.

LDP peers send Hello messages at the hello interval to maintain a hello adjacency. If LDP receives no Hello message from a hello adjacency before the hello hold timer expires, it removes the hello adjacency.

Establishing and maintaining LDP sessions

LDP establishes a session with a peer in the following steps:

1.        Establishes a TCP connection with the neighbor.

2.        Negotiates session parameters such as LDP version, label distribution method, and Keepalive timer, and establishes an LDP session with the neighbor if the negotiation succeeds.

After a session is established, LDP sends LDP PDUs (an LDP PDU carries one or more LDP messages) to maintain the session. If no information is exchanged between the LDP peers within the Keepalive interval, LDP sends Keepalive messages at the Keepalive interval to maintain the session. If LDP receives no LDP PDU from a neighbor before the keepalive hold timer expires, or the last hello adjacency with the neighbor is removed, LDP terminates the session.

LDP can also send a Shutdown message to a neighbor to terminate the LDP session.
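The session maintenance rules above can be sketched as a simple decision function. This is a simplified model, not device code; the timer defaults follow the values used elsewhere in this guide:

```python
def session_action(idle_seconds: float, last_pdu_age: float,
                   keepalive_interval: float = 15.0,
                   keepalive_hold: float = 45.0,
                   hello_adjacency_up: bool = True) -> str:
    """Model how an LSR maintains an established LDP session.

    idle_seconds: time since the LSR last sent any LDP PDU on the session.
    last_pdu_age: time since the LSR last received an LDP PDU from the peer.
    """
    # Terminate when the keepalive hold timer expires or the last hello
    # adjacency with the neighbor is removed.
    if last_pdu_age >= keepalive_hold or not hello_adjacency_up:
        return "terminate"
    # Send a Keepalive if nothing was exchanged within the Keepalive interval.
    if idle_seconds >= keepalive_interval:
        return "send-keepalive"
    return "wait"
```

With the default timers, a session idle for 16 seconds triggers a Keepalive, and 45 seconds without any PDU from the peer tears the session down.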

Establishing LSPs

LDP classifies FECs according to destination IP addresses in IP routing entries, creates FEC-label mappings, and advertises the mappings to LDP peers through LDP sessions. After an LDP peer receives a FEC-label mapping, it uses the received label and the label locally assigned to that FEC to create an LFIB entry for that FEC. When all LSRs (from the Ingress to the Egress) establish an LFIB entry for the FEC, an LSP is established exclusively for the FEC.

Figure 9 Dynamically establishing an LSP

 

Label distribution and control

Label advertisement modes

Figure 10 Label advertisement modes

 

LDP advertises label-FEC mappings in one of the following ways:

·          Downstream Unsolicited (DU) mode—Distributes FEC-label mappings to the upstream LSR, without waiting for label requests. The device supports only the DU mode.

·          Downstream on Demand (DoD) mode—Sends a label request for a FEC to the downstream LSR. After receiving the label request, the downstream LSR distributes the FEC-label mapping for that FEC to the upstream LSR.

 

 

NOTE:

A pair of upstream and downstream LSRs must use the same label advertisement mode. Otherwise, the LSP cannot be established.

 

Label distribution control

LDP controls label distribution in one of the following ways:

·          Independent label distribution—Distributes a FEC-label mapping to an upstream LSR at any time. An LSR might distribute a mapping for a FEC to its upstream LSR before it receives a label mapping for that FEC from its downstream LSR. As shown in Figure 11, in DU mode, each LSR distributes a label mapping for a FEC to its upstream LSR whenever it is ready to label-switch the FEC, without waiting for a label mapping for the FEC from its downstream LSR. In DoD mode, an LSR distributes a label mapping for a FEC to its upstream LSR after it receives a label request for the FEC, without waiting for a label mapping for the FEC from its downstream LSR.

Figure 11 Independent label distribution control mode

 

·          Ordered label distribution—Distributes a label mapping for a FEC to its upstream LSR only after it receives a label mapping for that FEC from its downstream LSR unless the local node is the egress node of the FEC. As shown in Figure 10, in DU mode, an LSR distributes a label mapping for a FEC to its upstream LSR only if it receives a label mapping for the FEC from its downstream LSR. In DoD mode, when an LSR (Transit) receives a label request for a FEC from its upstream LSR (Ingress), it continues to send a label request for the FEC to its downstream LSR (Egress). After the transit LSR receives a label mapping for the FEC from the egress LSR, it distributes a label mapping for the FEC to the ingress.
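The difference between the two control modes comes down to when an LSR is allowed to advertise a mapping upstream. The following sketch models that decision (a simplified illustration, not device logic):

```python
def may_advertise_mapping(control_mode: str, is_egress: bool,
                          has_downstream_mapping: bool) -> bool:
    """Decide whether an LSR may send a label mapping for a FEC upstream."""
    if control_mode == "independent":
        # Independent mode: advertise at any time, without waiting for
        # a mapping from the downstream LSR.
        return True
    # Ordered mode: only the egress node of the FEC, or a node that already
    # holds a mapping from its downstream LSR, may advertise upstream.
    return is_egress or has_downstream_mapping
```

In Ordered mode, mappings therefore propagate hop by hop from the egress toward the ingress.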

Label retention mode

The label retention mode specifies whether an LSR maintains a label mapping for a FEC learned from a neighbor that is not its next hop.

·          Liberal label retention—Retains a received label mapping for a FEC regardless of whether the advertising LSR is the next hop of the FEC. This mechanism allows for quicker adaptation to topology changes, but it wastes system resources because LDP has to keep useless labels. The device only supports liberal label retention.

·          Conservative label retention—Retains a received label mapping for a FEC only when the advertising LSR is the next hop of the FEC. This mechanism saves label resources, but it cannot quickly adapt to topology changes.
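The two retention modes can be contrasted with a small filter. This is an illustrative model only; the tuple layout and dictionary are assumptions for the example:

```python
def retained_mappings(received, next_hop_of, mode="liberal"):
    """Filter received FEC-label mappings by retention mode.

    received:    list of (fec, label, advertising_peer) tuples.
    next_hop_of: dict mapping each FEC to its current next hop.
    """
    if mode == "liberal":
        return list(received)  # keep every mapping, next hop or not
    # Conservative: keep a mapping only if the advertiser is the next hop.
    return [m for m in received if next_hop_of.get(m[0]) == m[2]]
```

If the route to a FEC later switches to the other peer, liberal retention already holds that peer's label and can converge immediately, whereas conservative retention must wait for a new mapping.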

LDP GR

LDP GR overview

LDP Graceful Restart enables an LSR to retain MPLS forwarding entries during an LDP restart, ensuring continuous MPLS forwarding.

Figure 12 LDP GR

 

As shown in Figure 12, GR defines the following roles:

·          GR restarter—An LSR that performs GR. It must be GR-capable.

·          GR helper—A neighbor LSR that helps the GR restarter to complete GR.

The device can act as a GR restarter or a GR helper.

Figure 13 LDP GR operation

 

As shown in Figure 13, LDP GR works in the following steps:

1.        LSRs establish an LDP session. The L flag of the Fault Tolerance TLV in their Initialization messages is set to 1 to indicate that they support LDP GR.

2.        When LDP restarts, the GR restarter starts the MPLS Forwarding State Holding timer, and marks the MPLS forwarding entries as stale. When the GR helper detects that the LDP session with the GR restarter goes down, it marks the FEC-label mappings learned from the session as stale and starts the Reconnect timer received from the GR restarter.

3.        After LDP completes restart, the GR restarter re-establishes an LDP session with the GR helper. If the LDP session is not set up before the Reconnect timer expires, the GR helper deletes the stale FEC-label mappings and the corresponding MPLS forwarding entries. If the LDP session is successfully set up before the Reconnect timer expires, the GR restarter sends the remaining time of the MPLS Forwarding State Holding timer as the LDP Recovery time to the GR helper.

4.        After the LDP session is re-established, the GR helper starts the LDP Recovery timer.

5.        The GR restarter and the GR helper exchange label mappings and update their MPLS forwarding tables.

The GR restarter compares each received label mapping against stale MPLS forwarding entries. If a match is found, the restarter deletes the stale mark for the matching entry. Otherwise, it adds a new entry for the label mapping.

The GR helper compares each received label mapping against stale FEC-label mappings. If a match is found, the helper deletes the stale mark for the matching mapping. Otherwise, it adds the received FEC-label mapping and a new MPLS forwarding entry for the mapping.

6.        When the MPLS Forwarding State Holding timer expires, the GR restarter deletes all stale MPLS forwarding entries.

7.        When the LDP Recovery timer expires, the GR helper deletes all stale FEC-label mappings.
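The GR helper's handling of stale state in the steps above can be summarized as a decision function. This is a conceptual model of the timer interactions, not device code:

```python
def gr_helper_action(session_reestablished: bool,
                     reconnect_expired: bool,
                     recovery_expired: bool) -> str:
    """Model what a GR helper does with state marked stale during LDP GR."""
    if not session_reestablished and reconnect_expired:
        # The restarter did not come back before the Reconnect timer expired.
        return "delete-stale-mappings-and-forwarding-entries"
    if session_reestablished and recovery_expired:
        # The Recovery window is over; anything still stale was not refreshed.
        return "delete-remaining-stale-mappings"
    # While the timers are running, keep forwarding on the stale state.
    return "keep-stale-state"
```

The key point is that MPLS forwarding continues on stale entries throughout the restart, and only unrefreshed entries are removed when the timers expire.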

LDP NSR

LDP nonstop routing (NSR) backs up protocol states and data (including LDP session and LSP information) from the active process to the standby process. When the LDP active process fails, the standby process becomes active and takes over processing seamlessly. The LDP peers are not notified of the LDP interruption. The LDP session stays in Operational state, and the forwarding is not interrupted.

The LDP active process fails when one of the following events occurs:

·          The active process restarts.

·          The MPU where the active process resides fails.

·          The MPU where the active process resides performs an ISSU.

Choose either LDP NSR or LDP GR to ensure continuous traffic forwarding.

·          Device requirements

-  To use LDP NSR, the device must have a minimum of two MPUs, and the active and standby LDP processes must reside on different MPUs.

-  To use LDP GR, the device can have only one MPU.

·          LDP peer requirements

-  With LDP NSR, LDP peers of the local device are not notified of any switchover event on the local device. The local device does not require help from a peer to restore the MPLS forwarding information.

-  With LDP GR, the LDP peer must be able to identify the GR capability flag (in the Initialization message) of the GR restarter. The LDP peer acts as a GR helper to help the GR restarter to restore MPLS forwarding information.

LDP-IGP synchronization

Basic operating mechanism

LDP establishes LSPs based on the IGP optimal route. If LDP is not synchronized with IGP, MPLS traffic forwarding might be interrupted.

LDP is not synchronized with IGP when one of the following occurs:

·          A link is up, and IGP advertises and uses this link. However, LDP LSPs on this link have not been established.

·          An LDP session on a link is down, and LDP LSPs on the link have been removed. However, IGP still uses this link.

·          The Ordered label distribution control mode is used, and IGP uses the link before the local device receives the label mappings from the downstream LSR to establish LDP LSPs.

After LDP-IGP synchronization is enabled, IGP advertises the actual cost of a link only when LDP convergence on the link is completed. Before LDP convergence is completed, IGP advertises the maximum cost of the link. In this way, the link is visible on the IGP topology, but IGP does not select this link as the optimal route when other links are available. Therefore, the device can avoid discarding MPLS packets when there is not an LDP LSP established on the optimal route.

LDP convergence on a link is completed when both of the following conditions exist:

·          The local device establishes an LDP session to at least one peer, and the LDP session is already in Operational state.

·          The local device has distributed the label mappings to at least one peer.
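The cost-advertisement behavior described above can be sketched as follows. The maximum cost value here (65535) is an assumption for illustration; the actual value depends on the IGP in use:

```python
MAX_LINK_COST = 65535  # assumed "maximum cost"; the real ceiling depends on the IGP

def ldp_converged(operational_peer_count: int, peers_given_labels: int) -> bool:
    """LDP convergence on a link: an Operational session to at least one peer,
    and label mappings distributed to at least one peer."""
    return operational_peer_count > 0 and peers_given_labels > 0

def advertised_cost(actual_cost: int, operational_peer_count: int,
                    peers_given_labels: int) -> int:
    """Advertise the real link cost only after LDP convergence completes."""
    if ldp_converged(operational_peer_count, peers_given_labels):
        return actual_cost
    return MAX_LINK_COST  # link stays visible but is not preferred
```

Because the link is advertised at maximum cost rather than withdrawn, it remains in the IGP topology as a last resort while LDP converges.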

Notification delay for LDP convergence completion

By default, LDP immediately sends a notification to IGP that LDP convergence has completed. However, immediate notifications might cause MPLS traffic forwarding interruptions in one of the following scenarios:

·          LDP peers use the Ordered label distribution control mode. The device has not received a label mapping from downstream at the time LDP notifies IGP that LDP convergence has completed.

·          A large number of label mappings are distributed from downstream. Label advertisement is not completed when LDP notifies IGP that LDP convergence has completed.

To avoid traffic forwarding interruptions in these scenarios, configure the notification delay. When LDP convergence on a link is completed, LDP waits before notifying IGP.

Notification delay for LDP restart or active/standby switchover

When an LDP restart or an active/standby switchover occurs, LDP takes time to converge, and LDP notifies IGP of the LDP-IGP synchronization status as follows:

·          If a notification delay is not configured, LDP immediately notifies IGP of the current synchronization states during convergence, and then updates the states after LDP convergence. This could impact IGP processing.

·          If a notification delay is configured, LDP notifies IGP of the LDP-IGP synchronization states in bulk when one of the following events occurs:

-  LDP recovers to the state before the restart or switchover.

-  The maximum delay timer expires.

LDP FRR

A link or router failure on a path can cause packet loss until LDP completes LSP establishment on the new path. LDP FRR enables fast rerouting to minimize the failover time. LDP FRR is based on IP FRR and is enabled automatically when IP FRR is enabled.

You can use one of the following methods to enable IP FRR:

·          Configure an IGP to automatically calculate a backup next hop.

·          Configure an IGP to specify a backup next hop by using a routing policy.

Figure 14 Network diagram for LDP FRR

 

As shown in Figure 14, configure IP FRR on LSR A. The IGP automatically calculates a backup next hop or it specifies a backup next hop through a routing policy. LDP creates a primary LSP and a backup LSP according to the primary route and the backup route calculated by IGP. When the primary LSP operates correctly, it forwards the MPLS packets. When the primary LSP fails, LDP directs packets to the backup LSP.

When packets are forwarded through the backup LSP, IGP calculates the optimal path based on the new network topology. When IGP route convergence occurs, LDP establishes a new LSP according to the optimal path. If a new LSP is not established after IGP route convergence, traffic forwarding might be interrupted. As a best practice, enable LDP-IGP synchronization to work with LDP FRR to reduce the traffic interruption time.

Protocols

RFC 5036, LDP Specification

Feature and software version compatibility

The LDP feature is available in Release 1138P01 and later versions.

LDP configuration task list

Tasks at a glance

Enable LDP:

1.       (Required.) Enabling LDP globally

2.       (Required.) Enabling LDP on an interface

(Optional.) Configuring Hello parameters

(Optional.) Configuring LDP session parameters

(Optional.) Configuring LDP backoff

(Optional.) Configuring LDP MD5 authentication

(Optional.) Configuring an LSP generation policy

(Optional.) Configuring the LDP label distribution control mode

(Optional.) Configuring a label advertisement policy

(Optional.) Configuring a label acceptance policy

(Optional.) Configuring LDP loop detection

(Optional.) Configuring LDP GR

(Optional.) Configuring LDP NSR

(Optional.) Configuring LDP-IGP synchronization

(Optional.) Configuring LDP FRR

(Optional.) Resetting LDP sessions

(Optional.) Enabling SNMP notifications for LDP

 

Enabling LDP

To enable LDP, you must first enable LDP globally. Then, enable LDP on relevant interfaces or configure IGP to automatically enable LDP on those interfaces.

Enabling LDP globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable LDP for the local node or for a VPN.

·         Enable LDP for the local node and enter LDP view:
mpls ldp

·         Enable LDP for a VPN and enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

By default, LDP is disabled.

3.       Configure an LDP LSR ID.

lsr-id lsr-id

By default, the LDP LSR ID is the same as the MPLS LSR ID.

 

Enabling LDP on an interface

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

If the interface is bound to a VPN instance, you must enable LDP for the VPN instance by using the vpn-instance command in LDP view.

3.       Enable LDP on the interface.

mpls ldp enable

By default, LDP is disabled on an interface.

 

Configuring Hello parameters

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter the view of the interface where you want to establish an LDP session.

interface interface-type interface-number

N/A

3.       Configure the Link Hello hold time.

mpls ldp timer hello-hold timeout

By default, the Link Hello hold time is 15 seconds.

4.       Configure the Link Hello interval.

mpls ldp timer hello-interval interval

By default, the Link Hello interval is 5 seconds.

 

Configuring LDP session parameters

This task configures the following LDP session parameters:

·          Keepalive hold time and Keepalive interval.

·          LDP transport address—IP address for establishing TCP connections.

When you configure LDP session parameters, follow these guidelines:

·          The configured LDP transport address must be the IP address of an up interface on the device. Otherwise, no LDP session can be established.

·          Make sure the LDP transport addresses of the local and peer LSRs can reach each other. Otherwise, no TCP connection can be established.

To configure LDP session parameters:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the Keepalive hold time.

mpls ldp timer keepalive-hold timeout

By default, the Keepalive hold time is 45 seconds.

4.       Configure the Keepalive interval.

mpls ldp timer keepalive-interval interval

By default, the Keepalive interval is 15 seconds.

5.       Configure the LDP transport address.

mpls ldp transport-address { ip-address | interface }

By default, the LDP transport address is the LSR ID of the local device if the interface where you want to establish an LDP session belongs to the public network. If the interface belongs to a VPN, the LDP transport address is the primary IP address of the interface.

If the interface where you want to establish an LDP session is bound to a VPN instance, the interface with the IP address specified with this command must be bound to the same VPN instance.

 

Configuring LDP backoff

If LDP session parameters (for example, the label advertisement mode) are incompatible, two LDP peers cannot establish a session, and they will keep negotiating with each other.

The LDP backoff mechanism can mitigate this problem by using an initial delay timer and a maximum delay timer. After LDP fails to establish a session with a peer LSR for the first time, it does not attempt to re-establish the session until the initial delay timer expires. If the session setup fails again, LDP waits for twice the previous delay before the next attempt, and so forth, until the delay reaches the maximum delay time. After that, every retry waits for the maximum delay time.
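The resulting retry schedule is easy to see in a short sketch. With the default timers (initial delay 15 seconds, maximum delay 120 seconds), the delays double until they are capped:

```python
def backoff_delays(initial: int = 15, maximum: int = 120, attempts: int = 6):
    """Return the wait time before each retry of LDP session setup.

    The delay doubles after every failure until it reaches the maximum,
    which then applies to all further retries.
    """
    delays, delay = [], initial
    for _ in range(attempts):
        delays.append(min(delay, maximum))
        delay = min(delay * 2, maximum)
    return delays
```

With the defaults, the first six retries wait 15, 30, 60, 120, 120, and 120 seconds.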

To configure LDP backoff:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Configure the initial delay time and maximum delay time.

backoff initial initial-time maximum maximum-time

By default, the initial delay time is 15 seconds and the maximum delay time is 120 seconds.

 

Configuring LDP MD5 authentication

To improve security for LDP sessions, you can configure MD5 authentication for the underlying TCP connections to check the integrity of LDP messages.

For two LDP peers to establish an LDP session successfully, make sure the LDP MD5 authentication configurations on the LDP peers are consistent.

To configure LDP MD5 authentication:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Enable LDP MD5 authentication.

md5-authentication peer-lsr-id { cipher | plain } password

By default, LDP MD5 authentication is disabled.

 

Configuring an LSP generation policy

An LSP generation policy controls the number of LSPs generated by LDP in one of the following ways:

·          Use all routes to establish LSPs.

·          Use the routes permitted by an IP prefix list to establish LSPs. For information about IP prefix list configuration, see Layer 3—IP Routing Configuration Guide.

·          Use only host routes with a 32-bit mask to establish LSPs.

By default, LDP uses only host routes with a 32-bit mask to establish LSPs. The other two methods can generate more LSPs than the default policy. Before you change the policy, make sure the system and bandwidth resources are sufficient.
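The three policy options can be modeled as a route filter. This sketch simplifies prefix-list matching to set membership for illustration; a real IP prefix list performs prefix matching:

```python
def lsp_trigger(routes, policy="host", prefix_list=None):
    """Select which routes LDP uses to establish LSPs.

    routes:      iterable of prefixes in "a.b.c.d/len" notation.
    policy:      "all", "prefix-list", or "host" (the default policy).
    prefix_list: prefixes permitted when policy is "prefix-list".
    """
    if policy == "all":
        return list(routes)
    if policy == "prefix-list":
        return [r for r in routes if r in (prefix_list or set())]
    # Default policy: only host routes with a 32-bit mask trigger LSPs.
    return [r for r in routes if r.endswith("/32")]
```

With the default policy, LSR loopback routes such as 1.1.1.9/32 trigger LSPs, but subnet routes such as 10.1.1.0/24 do not.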

To configure an LSP generation policy:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Configure an LSP generation policy.

lsp-trigger { all | prefix-list prefix-list-name }

By default, LDP uses only host routes with a 32-bit mask to establish LSPs.

 

Configuring the LDP label distribution control mode

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Configure the label distribution control mode.

label-distribution { independent | ordered }

By default, the Ordered label distribution mode is used.

To apply the new setting to LDP sessions established before the command is configured, you must reset the LDP sessions.

 

Configuring a label advertisement policy

A label advertisement policy uses IP prefix lists to control the FEC-label mappings advertised to peers.

As shown in Figure 15, LSR A advertises label mappings for FECs permitted by IP prefix list B to LSR B and advertises label mappings for FECs permitted by IP prefix list C to LSR C.

Figure 15 Label advertisement control diagram

 

A label advertisement policy on an LSR and a label acceptance policy on its upstream LSR can achieve the same purpose. As a best practice, use label advertisement policies to reduce network load if downstream LSRs support label advertisement control.

Before you configure an LDP label advertisement policy, create an IP prefix list. For information about IP prefix list configuration, see Layer 3—IP Routing Configuration Guide.

To configure a label advertisement policy:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Configure a label advertisement policy.

advertise-label prefix-list prefix-list-name [ peer peer-prefix-list-name ]

By default, LDP advertises all label mappings permitted by the LSP generation policy to all peers.

 

Configuring a label acceptance policy

A label acceptance policy uses an IP prefix list to control the label mappings received from a peer.

As shown in Figure 16, LSR A uses an IP prefix list to filter label mappings from LSR B, and it does not filter label mappings from LSR C.

Figure 16 Label acceptance control diagram

 

A label advertisement policy on an LSR and a label acceptance policy on its upstream LSR can achieve the same purpose. As a best practice, use the label advertisement policy to reduce network load.

You must create an IP prefix list before you configure a label acceptance policy. For information about IP prefix list configuration, see Layer 3—IP Routing Configuration Guide.

To configure a label acceptance policy:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Configure a label acceptance policy.

accept-label peer peer-lsr-id prefix-list prefix-list-name

By default, LDP accepts all label mappings.

 

Configuring LDP loop detection

LDP detects and terminates LSP loops in the following ways:

·          Maximum hop count—LDP adds a hop count in a label request or label mapping message. The hop count value increments by 1 on each LSR. When the maximum hop count is reached, LDP considers that a loop has occurred and terminates the establishment of the LSP.

·          Path vector—LDP adds LSR ID information in a label request or label mapping message. Each LSR checks whether its LSR ID is contained in the message. If it is not, the LSR adds its own LSR ID into the message. If it is, the LSR considers that a loop has occurred and terminates LSP establishment. In addition, when the number of LSR IDs in the message reaches the path vector limit, LDP also considers that a loop has occurred and terminates LSP establishment.
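Both detection methods can be combined in one per-hop check, as in the following sketch (a conceptual model of the message processing, not device code; defaults match the commands below):

```python
def process_message(hop_count: int, path_vector: list, my_lsr_id: str,
                    maxhops: int = 32, pv_limit: int = 32):
    """Apply both LDP loop detection methods to a label request/mapping.

    Returns ("forward", (new_hop_count, new_path_vector)) or ("loop", reason).
    """
    # Maximum hop count: the count increments by 1 on each LSR.
    if hop_count + 1 > maxhops:
        return ("loop", "maximum hop count reached")
    # Path vector: a loop exists if this LSR's ID is already in the message.
    if my_lsr_id in path_vector:
        return ("loop", "own LSR ID found in path vector")
    new_vector = path_vector + [my_lsr_id]
    if len(new_vector) > pv_limit:
        return ("loop", "path vector limit reached")
    return ("forward", (hop_count + 1, new_vector))
```

A message that loops back to an LSR is rejected because that LSR finds its own ID in the path vector.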

To configure LDP loop detection:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view or enter LDP-VPN instance view.

·         Enter LDP view:
mpls ldp

·         Enter LDP-VPN instance view:

a.    mpls ldp

b.    vpn-instance vpn-instance-name

N/A

3.       Enable loop detection.

loop-detect

By default, loop detection is disabled.

After loop detection is enabled, the device uses both the maximum hop count and the path vector methods to detect loops.

4.       Specify the maximum hop count.

maxhops hop-number

By default, the maximum hop count is 32.

5.       Specify the path vector limit.

pv-limit pv-number

By default, the path vector limit is 32.

 

 

NOTE:

The LDP loop detection feature is applicable only to networks comprised of devices that do not support the TTL mechanism, such as ATM switches. Do not use LDP loop detection on other networks, because it only results in extra LDP overhead.

 

Configuring LDP GR

Before you configure LDP GR, enable LDP on the GR restarter and GR helpers.

To configure LDP GR:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view.

mpls ldp

N/A

3.       Enable LDP GR.

graceful-restart

By default, LDP GR is disabled.

4.       Configure the Reconnect timer for LDP GR.

graceful-restart timer reconnect reconnect-time

By default, the Reconnect time is 120 seconds.

5.       Configure the MPLS Forwarding State Holding timer for LDP GR.

graceful-restart timer forwarding-hold hold-time

By default, the MPLS Forwarding State Holding time is 180 seconds.

 

Configuring LDP NSR

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter LDP view.

mpls ldp

N/A

3.       Enable LDP NSR.

non-stop-routing

By default, LDP NSR is disabled.

 

Configuring LDP-IGP synchronization

After you enable LDP-IGP synchronization for an OSPF process, OSPF area, or an IS-IS process, LDP-IGP synchronization is enabled on the OSPF process interfaces or the IS-IS process interfaces.

You can execute the mpls ldp igp sync disable command to disable LDP-IGP synchronization on interfaces where LDP-IGP synchronization is not required.

Configuring LDP-OSPF synchronization

LDP-IGP synchronization is not supported for an OSPF process and its OSPF areas if the OSPF process belongs to a VPN instance.

To configure LDP-OSPF synchronization for an OSPF process:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter OSPF view.

ospf [ process-id | router-id router-id ] *

N/A

3.       Enable LDP-OSPF synchronization.

mpls ldp sync

By default, LDP-OSPF synchronization is disabled.

4.       Return to system view.

quit

N/A

5.       Enter interface view.

interface interface-type interface-number

N/A

6.       (Optional.) Disable LDP-IGP synchronization on the interface.

mpls ldp igp sync disable

By default, LDP-IGP synchronization is not disabled on an interface.

7.       Return to system view.

quit

N/A

8.       Enter LDP view.

mpls ldp

N/A

9.       (Optional.) Set the delay for LDP to notify IGP of the LDP convergence.

igp sync delay time

By default, LDP immediately notifies IGP of the LDP convergence completion.

10.     (Optional.) Set the maximum delay for LDP to notify IGP of the LDP-IGP synchronization status after an LDP restart or active/standby switchover.

igp sync delay on-restart time

By default, the maximum notification delay is 90 seconds.
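
For example, the following commands enable LDP-OSPF synchronization for OSPF process 1, disable the feature on VLAN-interface 2, and delay the LDP convergence notification by 10 seconds. The process ID, interface, and delay are example values.

<Sysname> system-view

[Sysname] ospf 1

[Sysname-ospf-1] mpls ldp sync

[Sysname-ospf-1] quit

[Sysname] interface vlan-interface 2

[Sysname-Vlan-interface2] mpls ldp igp sync disable

[Sysname-Vlan-interface2] quit

[Sysname] mpls ldp

[Sysname-ldp] igp sync delay 10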

 

To configure LDP-OSPF synchronization for an OSPF area:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter OSPF view.

ospf [ process-id | router-id router-id ] *

N/A

3.       Enter area view.

area area-id

N/A

4.       Enable LDP-OSPF synchronization.

mpls ldp sync

By default, LDP-OSPF synchronization is disabled.

5.       Return to system view.

quit

N/A

6.       Enter interface view.

interface interface-type interface-number

N/A

7.       (Optional.) Disable LDP-IGP synchronization on the interface.

mpls ldp igp sync disable

By default, LDP-IGP synchronization is not disabled on an interface.

8.       Return to system view.

quit

N/A

9.       Enter LDP view.

mpls ldp

N/A

10.     (Optional.) Set the delay for LDP to notify IGP of the LDP convergence.

igp sync delay time

By default, LDP immediately notifies IGP of the LDP convergence completion.

11.     (Optional.) Set the maximum delay for LDP to notify IGP of the LDP-IGP synchronization status after an LDP restart or active/standby switchover.

igp sync delay on-restart time

By default, the maximum notification delay is 90 seconds.
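
For example, the following commands enable LDP-OSPF synchronization for area 0 of OSPF process 1. The process ID and area ID are example values.

<Sysname> system-view

[Sysname] ospf 1

[Sysname-ospf-1] area 0

[Sysname-ospf-1-area-0.0.0.0] mpls ldp sync

[Sysname-ospf-1-area-0.0.0.0] quit

[Sysname-ospf-1] quit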

 

Configuring LDP-ISIS synchronization

LDP-IGP synchronization is not supported for an IS-IS process that belongs to a VPN instance.

To configure LDP-ISIS synchronization for an IS-IS process:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IS-IS view.

isis [ process-id ]

N/A

3.       Enable LDP-ISIS synchronization.

mpls ldp sync [ level-1 | level-2 ]

By default, LDP-ISIS synchronization is disabled.

4.       Return to system view.

quit

N/A

5.       Enter interface view.

interface interface-type interface-number

N/A

6.       (Optional.) Disable LDP-IGP synchronization on the interface.

mpls ldp igp sync disable

By default, LDP-IGP synchronization is not disabled on an interface.

7.       Return to system view.

quit

N/A

8.       Enter LDP view.

mpls ldp

N/A

9.       (Optional.) Set the delay for LDP to notify IGP of the LDP convergence completion.

igp sync delay time

By default, LDP immediately notifies IGP of the LDP convergence completion.

10.     (Optional.) Set the maximum delay for LDP to notify IGP of the LDP-IGP synchronization status after an LDP restart or active/standby switchover.

igp sync delay on-restart time

By default, the maximum notification delay is 90 seconds.
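
For example, the following commands enable LDP-ISIS synchronization for Level-2 of IS-IS process 1. The process ID and level are example values; omit the level keyword to enable synchronization at both levels.

<Sysname> system-view

[Sysname] isis 1

[Sysname-isis-1] mpls ldp sync level-2

[Sysname-isis-1] quit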

 

Configuring LDP FRR

LDP FRR is based on IP FRR, and is enabled automatically after IP FRR is enabled. For information about configuring IP FRR, see Layer 3—IP Routing Configuration Guide.

Resetting LDP sessions

Changes to LDP session parameters take effect only on new LDP sessions. To apply the changes to an existing LDP session, you must reset the session by executing the reset mpls ldp command.

Execute the reset mpls ldp command in user view.

 

Task

Command

Remarks

Reset LDP sessions.

reset mpls ldp [ vpn-instance vpn-instance-name ] [ peer peer-id ]

If you specify the peer keyword, this command resets only the LDP session to the specified peer. If you do not specify the peer keyword, this command resets all LDP sessions.
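
For example, to reset the LDP session to peer 2.2.2.9 (an example LSR ID):

<Sysname> reset mpls ldp peer 2.2.2.9

To reset all LDP sessions:

<Sysname> reset mpls ldp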

 

Enabling SNMP notifications for LDP

This feature enables LDP to generate SNMP notifications upon LDP session state changes, as defined in RFC 3815. The generated SNMP notifications are sent to the SNMP module.

To enable SNMP notifications for LDP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable SNMP notifications for LDP.

snmp-agent trap enable ldp

By default, SNMP notifications for LDP are enabled.
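
For example, to re-enable SNMP notifications for LDP if they have been disabled:

<Sysname> system-view

[Sysname] snmp-agent trap enable ldp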

 

For more information about SNMP notifications, see Network Management and Monitoring Configuration Guide.

Displaying and maintaining LDP

Execute display commands in any view.

 

Task

Command

Display LDP discovery information (in standalone mode).

display mpls ldp discovery [ vpn-instance vpn-instance-name ] [ interface interface-type interface-number | peer peer-lsr-id ] [ verbose ] [ standby slot slot-number ]

Display LDP discovery information (in IRF mode).

display mpls ldp discovery [ vpn-instance vpn-instance-name ] [ interface interface-type interface-number | peer peer-lsr-id ] [ verbose ] [ standby chassis chassis-number slot slot-number ]

Display LDP FEC-label mapping information (in standalone mode).

display mpls ldp fec [ vpn-instance vpn-instance-name ] [ destination-address mask-length | summary ] [ standby slot slot-number ]

Display LDP FEC-label mapping information (in IRF mode).

display mpls ldp fec [ vpn-instance vpn-instance-name ] [ destination-address mask-length | summary ] [ standby chassis chassis-number slot slot-number ]

Display LDP interface information.

display mpls ldp interface [ interface-type interface-number ]

Display LDP-IGP synchronization information.

display mpls ldp igp sync [ interface interface-type interface-number ]

Display LDP LSP information.

display mpls ldp lsp [ vpn-instance vpn-instance-name ] [ destination-address mask-length ]

Display LDP running parameters.

display mpls ldp parameter [ vpn-instance vpn-instance-name ]

Display LDP peer and session information (in standalone mode).

display mpls ldp peer [ vpn-instance vpn-instance-name ] [ peer-lsr-id ] [ verbose ] [ standby slot slot-number ]

Display LDP peer and session information (in IRF mode).

display mpls ldp peer [ vpn-instance vpn-instance-name ] [ peer-lsr-id ] [ verbose ] [ standby chassis chassis-number slot slot-number ]

Display LDP summary information (in standalone mode).

display mpls ldp summary [ all | vpn-instance vpn-instance-name ] [ standby slot slot-number ]

Display LDP summary information (in IRF mode).

display mpls ldp summary [ all | vpn-instance vpn-instance-name ] [ standby chassis chassis-number slot slot-number ]

 

LDP configuration examples

LDP LSP configuration example

Network requirements

Switch A, Switch B, and Switch C all support MPLS.

Configure LDP to establish LSPs between Switch A and Switch C, so subnets 11.1.1.0/24 and 21.1.1.0/24 can reach each other over MPLS.

Configure LDP to establish LSPs only for destinations 1.1.1.9/32, 2.2.2.9/32, 3.3.3.9/32, 11.1.1.0/24, and 21.1.1.0/24 on Switch A, Switch B, and Switch C.

Figure 17 Network diagram

 

Requirements analysis

·          To ensure that the LSRs establish LSPs automatically, enable LDP on each LSR.

·          To establish LDP LSPs, configure a routing protocol to ensure IP connectivity between the LSRs. This example uses OSPF.

·          To control the number of LSPs, configure an LSP generation policy on each LSR.

Configuration procedure

1.        Configure IP addresses and masks for interfaces, including the loopback interfaces, as shown in Figure 17. (Details not shown.)

2.        Configure OSPF on each switch to ensure IP connectivity between them:

# Configure Switch A.

<SwitchA> system-view

[SwitchA] ospf

[SwitchA-ospf-1] area 0

[SwitchA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0

[SwitchA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255

[SwitchA-ospf-1-area-0.0.0.0] network 11.1.1.0 0.0.0.255

[SwitchA-ospf-1-area-0.0.0.0] quit

[SwitchA-ospf-1] quit

# Configure Switch B.

<SwitchB> system-view

[SwitchB] ospf

[SwitchB-ospf-1] area 0

[SwitchB-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0

[SwitchB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255

[SwitchB-ospf-1-area-0.0.0.0] network 20.1.1.0 0.0.0.255

[SwitchB-ospf-1-area-0.0.0.0] quit

[SwitchB-ospf-1] quit

# Configure Switch C.

<SwitchC> system-view

[SwitchC] ospf

[SwitchC-ospf-1] area 0

[SwitchC-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0

[SwitchC-ospf-1-area-0.0.0.0] network 20.1.1.0 0.0.0.255

[SwitchC-ospf-1-area-0.0.0.0] network 21.1.1.0 0.0.0.255

[SwitchC-ospf-1-area-0.0.0.0] quit

[SwitchC-ospf-1] quit

# Display routing tables on the switches, for example, on Switch A, to verify that the switches have learned the routes to each other.

[SwitchA] display ip routing-table

 

Destinations : 21        Routes : 21

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

0.0.0.0/32          Direct 0    0            127.0.0.1       InLoop0

1.1.1.9/32          Direct 0    0            127.0.0.1       InLoop0

2.2.2.9/32          OSPF   10   1            10.1.1.2        Vlan2

3.3.3.9/32          OSPF   10   2            10.1.1.2        Vlan2

10.1.1.0/24         Direct 0    0            10.1.1.1        Vlan2

10.1.1.0/32         Direct 0    0            10.1.1.1        Vlan2

10.1.1.1/32         Direct 0    0            127.0.0.1       InLoop0

10.1.1.255/32       Direct 0    0            10.1.1.1        Vlan2

11.1.1.0/24         Direct 0    0            11.1.1.1        Vlan4

11.1.1.0/32         Direct 0    0            11.1.1.1        Vlan4

11.1.1.1/32         Direct 0    0            127.0.0.1       InLoop0

11.1.1.255/32       Direct 0    0            11.1.1.1        Vlan4

20.1.1.0/24         OSPF   10   2            10.1.1.2        Vlan2

21.1.1.0/24         OSPF   10   3            10.1.1.2        Vlan2

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.0/32        Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

127.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

224.0.0.0/4         Direct 0    0            0.0.0.0         NULL0

224.0.0.0/24        Direct 0    0            0.0.0.0         NULL0

255.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

3.        Enable MPLS and LDP:

# Configure Switch A.

[SwitchA] mpls lsr-id 1.1.1.9

[SwitchA] mpls ldp

[SwitchA-ldp] quit

[SwitchA] interface vlan-interface 2

[SwitchA-Vlan-interface2] mpls enable

[SwitchA-Vlan-interface2] mpls ldp enable

[SwitchA-Vlan-interface2] quit

# Configure Switch B.

[SwitchB] mpls lsr-id 2.2.2.9

[SwitchB] mpls ldp

[SwitchB-ldp] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls enable

[SwitchB-Vlan-interface2] mpls ldp enable

[SwitchB-Vlan-interface2] quit

[SwitchB] interface vlan-interface 3

[SwitchB-Vlan-interface3] mpls enable

[SwitchB-Vlan-interface3] mpls ldp enable

[SwitchB-Vlan-interface3] quit

# Configure Switch C.

[SwitchC] mpls lsr-id 3.3.3.9

[SwitchC] mpls ldp

[SwitchC-ldp] quit

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] mpls enable

[SwitchC-Vlan-interface3] mpls ldp enable

[SwitchC-Vlan-interface3] quit

4.        Configure LSP generation policies:

# On Switch A, create IP prefix list switcha, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchA] ip prefix-list switcha index 10 permit 1.1.1.9 32

[SwitchA] ip prefix-list switcha index 20 permit 2.2.2.9 32

[SwitchA] ip prefix-list switcha index 30 permit 3.3.3.9 32

[SwitchA] ip prefix-list switcha index 40 permit 11.1.1.0 24

[SwitchA] ip prefix-list switcha index 50 permit 21.1.1.0 24

[SwitchA] mpls ldp

[SwitchA-ldp] lsp-trigger prefix-list switcha

[SwitchA-ldp] quit

# On Switch B, create IP prefix list switchb, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchB] ip prefix-list switchb index 10 permit 1.1.1.9 32

[SwitchB] ip prefix-list switchb index 20 permit 2.2.2.9 32

[SwitchB] ip prefix-list switchb index 30 permit 3.3.3.9 32

[SwitchB] ip prefix-list switchb index 40 permit 11.1.1.0 24

[SwitchB] ip prefix-list switchb index 50 permit 21.1.1.0 24

[SwitchB] mpls ldp

[SwitchB-ldp] lsp-trigger prefix-list switchb

[SwitchB-ldp] quit

# On Switch C, create IP prefix list switchc, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchC] ip prefix-list switchc index 10 permit 1.1.1.9 32

[SwitchC] ip prefix-list switchc index 20 permit 2.2.2.9 32

[SwitchC] ip prefix-list switchc index 30 permit 3.3.3.9 32

[SwitchC] ip prefix-list switchc index 40 permit 11.1.1.0 24

[SwitchC] ip prefix-list switchc index 50 permit 21.1.1.0 24

[SwitchC] mpls ldp

[SwitchC-ldp] lsp-trigger prefix-list switchc

[SwitchC-ldp] quit

Verifying the configuration

# Display LDP LSP information on switches, for example, on Switch A.

[SwitchA] display mpls ldp lsp

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 5      Ingress LSPs: 3     Transit LSPs: 3     Egress LSPs: 2

 

FEC                In/Out Label        Nexthop         OutInterface

1.1.1.9/32         3/-

                   -/1279(L)

2.2.2.9/32         -/3                 10.1.1.2        Vlan-int2

                   1279/3              10.1.1.2        Vlan-int2

3.3.3.9/32         -/1278              10.1.1.2        Vlan-int2

                   1278/1278           10.1.1.2        Vlan-int2

11.1.1.0/24        1277/-

                   -/1277(L)

21.1.1.0/24        -/1276              10.1.1.2        Vlan-int2

                   1276/1276           10.1.1.2        Vlan-int2

# Test the connectivity of the LDP LSP from Switch A to Switch C.

[SwitchA] ping mpls -a 11.1.1.1 ipv4 21.1.1.0 24

MPLS Ping FEC: 21.1.1.0/24 : 100 data bytes

100 bytes from 20.1.1.2: Sequence=1 time=1 ms

100 bytes from 20.1.1.2: Sequence=2 time=1 ms

100 bytes from 20.1.1.2: Sequence=3 time=8 ms

100 bytes from 20.1.1.2: Sequence=4 time=2 ms

100 bytes from 20.1.1.2: Sequence=5 time=1 ms

 

--- FEC: 21.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/2/8 ms

# Test the connectivity of the LDP LSP from Switch C to Switch A.

[SwitchC] ping mpls -a 21.1.1.1 ipv4 11.1.1.0 24

MPLS Ping FEC: 11.1.1.0/24 : 100 data bytes

100 bytes from 10.1.1.1: Sequence=1 time=1 ms

100 bytes from 10.1.1.1: Sequence=2 time=1 ms

100 bytes from 10.1.1.1: Sequence=3 time=1 ms

100 bytes from 10.1.1.1: Sequence=4 time=1 ms

100 bytes from 10.1.1.1: Sequence=5 time=1 ms

 

--- FEC: 11.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/1/1 ms

Label acceptance control configuration example

Network requirements

Two links, Switch A—Switch B—Switch C and Switch A—Switch D—Switch C, exist between subnets 11.1.1.0/24 and 21.1.1.0/24.

Configure LDP to establish LSPs only for routes to subnets 11.1.1.0/24 and 21.1.1.0/24.

Configure LDP to establish LSPs only on the link Switch A—Switch B—Switch C to forward traffic between subnets 11.1.1.0/24 and 21.1.1.0/24.

Figure 18 Network diagram

 

Requirements analysis

·          To ensure that the LSRs establish LSPs automatically, enable LDP on each LSR.

·          To establish LDP LSPs, configure a routing protocol to ensure IP connectivity between the LSRs. This example uses OSPF.

·          To ensure that LDP establishes LSPs only for the routes 11.1.1.0/24 and 21.1.1.0/24, configure LSP generation policies on each LSR.

·          To ensure that LDP establishes LSPs only over the link Switch A—Switch B—Switch C, configure label acceptance policies as follows:

○  Switch A accepts only the label mapping for FEC 21.1.1.0/24 received from Switch B. Switch A denies the label mapping for FEC 21.1.1.0/24 received from Switch D.

○  Switch C accepts only the label mapping for FEC 11.1.1.0/24 received from Switch B. Switch C denies the label mapping for FEC 11.1.1.0/24 received from Switch D.

Configuration procedure

1.        Configure IP addresses and masks for interfaces, including the loopback interfaces, as shown in Figure 18. (Details not shown.)

2.        Configure OSPF on each switch to ensure IP connectivity between them. (Details not shown.)

3.        Enable MPLS and LDP:

# Configure Switch A.

<SwitchA> system-view

[SwitchA] mpls lsr-id 1.1.1.9

[SwitchA] mpls ldp

[SwitchA-ldp] quit

[SwitchA] interface vlan-interface 2

[SwitchA-Vlan-interface2] mpls enable

[SwitchA-Vlan-interface2] mpls ldp enable

[SwitchA-Vlan-interface2] quit

[SwitchA] interface vlan-interface 6

[SwitchA-Vlan-interface6] mpls enable

[SwitchA-Vlan-interface6] mpls ldp enable

[SwitchA-Vlan-interface6] quit

# Configure Switch B.

<SwitchB> system-view

[SwitchB] mpls lsr-id 2.2.2.9

[SwitchB] mpls ldp

[SwitchB-ldp] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls enable

[SwitchB-Vlan-interface2] mpls ldp enable

[SwitchB-Vlan-interface2] quit

[SwitchB] interface vlan-interface 3

[SwitchB-Vlan-interface3] mpls enable

[SwitchB-Vlan-interface3] mpls ldp enable

[SwitchB-Vlan-interface3] quit

# Configure Switch C.

<SwitchC> system-view

[SwitchC] mpls lsr-id 3.3.3.9

[SwitchC] mpls ldp

[SwitchC-ldp] quit

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] mpls enable

[SwitchC-Vlan-interface3] mpls ldp enable

[SwitchC-Vlan-interface3] quit

[SwitchC] interface vlan-interface 7

[SwitchC-Vlan-interface7] mpls enable

[SwitchC-Vlan-interface7] mpls ldp enable

[SwitchC-Vlan-interface7] quit

# Configure Switch D.

<SwitchD> system-view

[SwitchD] mpls lsr-id 4.4.4.9

[SwitchD] mpls ldp

[SwitchD-ldp] quit

[SwitchD] interface vlan-interface 6

[SwitchD-Vlan-interface6] mpls enable

[SwitchD-Vlan-interface6] mpls ldp enable

[SwitchD-Vlan-interface6] quit

[SwitchD] interface vlan-interface 7

[SwitchD-Vlan-interface7] mpls enable

[SwitchD-Vlan-interface7] mpls ldp enable

[SwitchD-Vlan-interface7] quit

4.        Configure LSP generation policies:

# On Switch A, create IP prefix list switcha, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchA] ip prefix-list switcha index 10 permit 11.1.1.0 24

[SwitchA] ip prefix-list switcha index 20 permit 21.1.1.0 24

[SwitchA] mpls ldp

[SwitchA-ldp] lsp-trigger prefix-list switcha

[SwitchA-ldp] quit

# On Switch B, create IP prefix list switchb, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchB] ip prefix-list switchb index 10 permit 11.1.1.0 24

[SwitchB] ip prefix-list switchb index 20 permit 21.1.1.0 24

[SwitchB] mpls ldp

[SwitchB-ldp] lsp-trigger prefix-list switchb

[SwitchB-ldp] quit

# On Switch C, create IP prefix list switchc, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchC] ip prefix-list switchc index 10 permit 11.1.1.0 24

[SwitchC] ip prefix-list switchc index 20 permit 21.1.1.0 24

[SwitchC] mpls ldp

[SwitchC-ldp] lsp-trigger prefix-list switchc

[SwitchC-ldp] quit

# On Switch D, create IP prefix list switchd, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchD] ip prefix-list switchd index 10 permit 11.1.1.0 24

[SwitchD] ip prefix-list switchd index 20 permit 21.1.1.0 24

[SwitchD] mpls ldp

[SwitchD-ldp] lsp-trigger prefix-list switchd

[SwitchD-ldp] quit

5.        Configure label acceptance policies:

# On Switch A, create an IP prefix list prefix-from-b that permits subnet 21.1.1.0/24. Switch A uses this list to filter FEC-label mappings received from Switch B.

[SwitchA] ip prefix-list prefix-from-b index 10 permit 21.1.1.0 24

# On Switch A, create an IP prefix list prefix-from-d that denies subnet 21.1.1.0/24. Switch A uses this list to filter FEC-label mappings received from Switch D.

[SwitchA] ip prefix-list prefix-from-d index 10 deny 21.1.1.0 24

# On Switch A, configure label acceptance policies to filter FEC-label mappings received from Switch B and Switch D.

[SwitchA] mpls ldp

[SwitchA-ldp] accept-label peer 2.2.2.9 prefix-list prefix-from-b

[SwitchA-ldp] accept-label peer 4.4.4.9 prefix-list prefix-from-d

[SwitchA-ldp] quit

# On Switch C, create an IP prefix list prefix-from-b that permits subnet 11.1.1.0/24. Switch C uses this list to filter FEC-label mappings received from Switch B.

[SwitchC] ip prefix-list prefix-from-b index 10 permit 11.1.1.0 24

# On Switch C, create an IP prefix list prefix-from-d that denies subnet 11.1.1.0/24. Switch C uses this list to filter FEC-label mappings received from Switch D.

[SwitchC] ip prefix-list prefix-from-d index 10 deny 11.1.1.0 24

# On Switch C, configure label acceptance policies to filter FEC-label mappings received from Switch B and Switch D.

[SwitchC] mpls ldp

[SwitchC-ldp] accept-label peer 2.2.2.9 prefix-list prefix-from-b

[SwitchC-ldp] accept-label peer 4.4.4.9 prefix-list prefix-from-d

[SwitchC-ldp] quit

Verifying the configuration

# Display LDP LSP information on switches, for example, on Switch A.

[SwitchA] display mpls ldp lsp

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 2      Ingress LSPs: 1     Transit LSPs: 1     Egress LSPs: 1

 

FEC                In/Out Label        Nexthop         OutInterface

11.1.1.0/24        1277/-

                   -/1148(L)

21.1.1.0/24        -/1149(L)

                   -/1276              10.1.1.2        Vlan-int2

                   1276/1276           10.1.1.2        Vlan-int2

The output shows that the next hop of the LSP for FEC 21.1.1.0/24 is Switch B (10.1.1.2). The LSP has been established over the link Switch A—Switch B—Switch C, not over the link Switch A—Switch D—Switch C.

# Test the connectivity of the LDP LSP from Switch A to Switch C.

[SwitchA] ping mpls -a 11.1.1.1 ipv4 21.1.1.0 24

MPLS Ping FEC: 21.1.1.0/24 : 100 data bytes

100 bytes from 20.1.1.2: Sequence=1 time=1 ms

100 bytes from 20.1.1.2: Sequence=2 time=1 ms

100 bytes from 20.1.1.2: Sequence=3 time=8 ms

100 bytes from 20.1.1.2: Sequence=4 time=2 ms

100 bytes from 20.1.1.2: Sequence=5 time=1 ms

 

--- FEC: 21.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/2/8 ms

# Test the connectivity of the LDP LSP from Switch C to Switch A.

[SwitchC] ping mpls -a 21.1.1.1 ipv4 11.1.1.0 24

MPLS Ping FEC: 11.1.1.0/24 : 100 data bytes

100 bytes from 10.1.1.1: Sequence=1 time=1 ms

100 bytes from 10.1.1.1: Sequence=2 time=1 ms

100 bytes from 10.1.1.1: Sequence=3 time=1 ms

100 bytes from 10.1.1.1: Sequence=4 time=1 ms

100 bytes from 10.1.1.1: Sequence=5 time=1 ms

 

--- FEC: 11.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/1/1 ms

Label advertisement control configuration example

Network requirements

Two links, Switch A—Switch B—Switch C and Switch A—Switch D—Switch C, exist between subnets 11.1.1.0/24 and 21.1.1.0/24.

Configure LDP to establish LSPs only for routes to subnets 11.1.1.0/24 and 21.1.1.0/24.

Configure LDP to establish LSPs only on the link Switch A—Switch B—Switch C to forward traffic between subnets 11.1.1.0/24 and 21.1.1.0/24.

Figure 19 Network diagram

 

Requirements analysis

·          To ensure that the LSRs establish LSPs automatically, enable LDP on each LSR.

·          To establish LDP LSPs, configure a routing protocol to ensure IP connectivity between the LSRs. This example uses OSPF.

·          To ensure that LDP establishes LSPs only for the routes 11.1.1.0/24 and 21.1.1.0/24, configure LSP generation policies on each LSR.

·          To ensure that LDP establishes LSPs only over the link Switch A—Switch B—Switch C, configure label advertisement policies as follows:

○  Switch A advertises only the label mapping for FEC 11.1.1.0/24 to Switch B.

○  Switch C advertises only the label mapping for FEC 21.1.1.0/24 to Switch B.

○  Switch D does not advertise the label mapping for FEC 21.1.1.0/24 to Switch A or the label mapping for FEC 11.1.1.0/24 to Switch C.

Configuration procedure

1.        Configure IP addresses and masks for interfaces, including the loopback interfaces, as shown in Figure 19. (Details not shown.)

2.        Configure OSPF on each switch to ensure IP connectivity between them. (Details not shown.)

3.        Enable MPLS and LDP:

# Configure Switch A.

<SwitchA> system-view

[SwitchA] mpls lsr-id 1.1.1.9

[SwitchA] mpls ldp

[SwitchA-ldp] quit

[SwitchA] interface vlan-interface 2

[SwitchA-Vlan-interface2] mpls enable

[SwitchA-Vlan-interface2] mpls ldp enable

[SwitchA-Vlan-interface2] quit

[SwitchA] interface vlan-interface 6

[SwitchA-Vlan-interface6] mpls enable

[SwitchA-Vlan-interface6] mpls ldp enable

[SwitchA-Vlan-interface6] quit

# Configure Switch B.

<SwitchB> system-view

[SwitchB] mpls lsr-id 2.2.2.9

[SwitchB] mpls ldp

[SwitchB-ldp] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls enable

[SwitchB-Vlan-interface2] mpls ldp enable

[SwitchB-Vlan-interface2] quit

[SwitchB] interface vlan-interface 3

[SwitchB-Vlan-interface3] mpls enable

[SwitchB-Vlan-interface3] mpls ldp enable

[SwitchB-Vlan-interface3] quit

# Configure Switch C.

<SwitchC> system-view

[SwitchC] mpls lsr-id 3.3.3.9

[SwitchC] mpls ldp

[SwitchC-ldp] quit

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] mpls enable

[SwitchC-Vlan-interface3] mpls ldp enable

[SwitchC-Vlan-interface3] quit

[SwitchC] interface vlan-interface 7

[SwitchC-Vlan-interface7] mpls enable

[SwitchC-Vlan-interface7] mpls ldp enable

[SwitchC-Vlan-interface7] quit

# Configure Switch D.

<SwitchD> system-view

[SwitchD] mpls lsr-id 4.4.4.9

[SwitchD] mpls ldp

[SwitchD-ldp] quit

[SwitchD] interface vlan-interface 6

[SwitchD-Vlan-interface6] mpls enable

[SwitchD-Vlan-interface6] mpls ldp enable

[SwitchD-Vlan-interface6] quit

[SwitchD] interface vlan-interface 7

[SwitchD-Vlan-interface7] mpls enable

[SwitchD-Vlan-interface7] mpls ldp enable

[SwitchD-Vlan-interface7] quit

4.        Configure LSP generation policies:

# On Switch A, create IP prefix list switcha, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchA] ip prefix-list switcha index 10 permit 11.1.1.0 24

[SwitchA] ip prefix-list switcha index 20 permit 21.1.1.0 24

[SwitchA] mpls ldp

[SwitchA-ldp] lsp-trigger prefix-list switcha

[SwitchA-ldp] quit

# On Switch B, create IP prefix list switchb, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchB] ip prefix-list switchb index 10 permit 11.1.1.0 24

[SwitchB] ip prefix-list switchb index 20 permit 21.1.1.0 24

[SwitchB] mpls ldp

[SwitchB-ldp] lsp-trigger prefix-list switchb

[SwitchB-ldp] quit

# On Switch C, create IP prefix list switchc, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchC] ip prefix-list switchc index 10 permit 11.1.1.0 24

[SwitchC] ip prefix-list switchc index 20 permit 21.1.1.0 24

[SwitchC] mpls ldp

[SwitchC-ldp] lsp-trigger prefix-list switchc

[SwitchC-ldp] quit

# On Switch D, create IP prefix list switchd, and configure LDP to use only the routes permitted by the prefix list to establish LSPs.

[SwitchD] ip prefix-list switchd index 10 permit 11.1.1.0 24

[SwitchD] ip prefix-list switchd index 20 permit 21.1.1.0 24

[SwitchD] mpls ldp

[SwitchD-ldp] lsp-trigger prefix-list switchd

[SwitchD-ldp] quit

5.        Configure label advertisement policies:

# On Switch A, create an IP prefix list prefix-to-b that permits subnet 11.1.1.0/24. Switch A uses this list to filter FEC-label mappings advertised to Switch B.

[SwitchA] ip prefix-list prefix-to-b index 10 permit 11.1.1.0 24

# On Switch A, create an IP prefix list peer-b that permits 2.2.2.9/32. Switch A uses this list to filter peers.

[SwitchA] ip prefix-list peer-b index 10 permit 2.2.2.9 32

# On Switch A, configure a label advertisement policy to advertise only the label mapping for FEC 11.1.1.0/24 to Switch B.

[SwitchA] mpls ldp

[SwitchA-ldp] advertise-label prefix-list prefix-to-b peer peer-b

[SwitchA-ldp] quit

# On Switch C, create an IP prefix list prefix-to-b that permits subnet 21.1.1.0/24. Switch C uses this list to filter FEC-label mappings advertised to Switch B.

[SwitchC] ip prefix-list prefix-to-b index 10 permit 21.1.1.0 24

# On Switch C, create an IP prefix list peer-b that permits 2.2.2.9/32. Switch C uses this list to filter peers.

[SwitchC] ip prefix-list peer-b index 10 permit 2.2.2.9 32

# On Switch C, configure a label advertisement policy to advertise only the label mapping for FEC 21.1.1.0/24 to Switch B.

[SwitchC] mpls ldp

[SwitchC-ldp] advertise-label prefix-list prefix-to-b peer peer-b

[SwitchC-ldp] quit

# On Switch D, create an IP prefix list prefix-to-a that denies subnet 21.1.1.0/24. Switch D uses this list to filter FEC-label mappings to be advertised to Switch A.

[SwitchD] ip prefix-list prefix-to-a index 10 deny 21.1.1.0 24

[SwitchD] ip prefix-list prefix-to-a index 20 permit 0.0.0.0 0 less-equal 32

# On Switch D, create an IP prefix list peer-a that permits 1.1.1.9/32. Switch D uses this list to filter peers.

[SwitchD] ip prefix-list peer-a index 10 permit 1.1.1.9 32

# On Switch D, create an IP prefix list prefix-to-c that denies subnet 11.1.1.0/24. Switch D uses this list to filter FEC-label mappings to be advertised to Switch C.

[SwitchD] ip prefix-list prefix-to-c index 10 deny 11.1.1.0 24

[SwitchD] ip prefix-list prefix-to-c index 20 permit 0.0.0.0 0 less-equal 32

# On Switch D, create an IP prefix list peer-c that permits subnet 3.3.3.9/32. Switch D uses this list to filter peers.

[SwitchD] ip prefix-list peer-c index 10 permit 3.3.3.9 32

# On Switch D, configure a label advertisement policy, so Switch D does not advertise label mappings for FEC 21.1.1.0/24 to Switch A, and does not advertise label mappings for FEC 11.1.1.0/24 to Switch C.

[SwitchD] mpls ldp

[SwitchD-ldp] advertise-label prefix-list prefix-to-a peer peer-a

[SwitchD-ldp] advertise-label prefix-list prefix-to-c peer peer-c

[SwitchD-ldp] quit

Verifying the configuration

# Display LDP LSP information on each switch.

[SwitchA] display mpls ldp lsp

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 2      Ingress LSPs: 1     Transit LSPs: 1     Egress LSPs: 1

 

FEC                In/Out Label        Nexthop         OutInterface

11.1.1.0/24        1277/-

                   -/1151(L)

                   -/1277(L)

21.1.1.0/24        -/1276              10.1.1.2        Vlan-int2

                   1276/1276           10.1.1.2        Vlan-int2

[SwitchB] display mpls ldp lsp

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 2      Ingress LSPs: 2     Transit LSPs: 2     Egress LSPs: 0

 

FEC                In/Out Label        Nexthop         OutInterface

11.1.1.0/24        -/1277              10.1.1.1        Vlan-int2

                   1277/1277           10.1.1.1        Vlan-int2

21.1.1.0/24        -/1149              20.1.1.2        Vlan-int3

                   1276/1149           20.1.1.2        Vlan-int3

[SwitchC] display mpls ldp lsp

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 2      Ingress LSPs: 1     Transit LSPs: 1     Egress LSPs: 1

 

FEC                In/Out Label        Nexthop         OutInterface

11.1.1.0/24        -/1277              20.1.1.1        Vlan-int3

                   1148/1277           20.1.1.1        Vlan-int3

21.1.1.0/24        1149/-

                   -/1276(L)

                   -/1150(L)

[SwitchD] display mpls ldp lsp

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 2      Ingress LSPs: 0     Transit LSPs: 0     Egress LSPs: 2

 

FEC                In/Out Label        Nexthop         OutInterface

11.1.1.0/24        1151/-

                   -/1277(L)

21.1.1.0/24        1150/-

The output shows that Switch A and Switch C have received FEC-label mappings only from Switch B. Switch B has received FEC-label mappings from both Switch A and Switch C. Switch D has not received FEC-label mappings from Switch A or Switch C. LDP has established LSPs only over the link Switch A—Switch B—Switch C.

# Test the connectivity of the LDP LSP from Switch A to Switch C.

[SwitchA] ping mpls -a 11.1.1.1 ipv4 21.1.1.0 24

MPLS Ping FEC: 21.1.1.0/24 : 100 data bytes

100 bytes from 20.1.1.2: Sequence=1 time=1 ms

100 bytes from 20.1.1.2: Sequence=2 time=1 ms

100 bytes from 20.1.1.2: Sequence=3 time=8 ms

100 bytes from 20.1.1.2: Sequence=4 time=2 ms

100 bytes from 20.1.1.2: Sequence=5 time=1 ms

 

--- FEC: 21.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/2/8 ms

# Test the connectivity of the LDP LSP from Switch C to Switch A.

[SwitchC] ping mpls -a 21.1.1.1 ipv4 11.1.1.0 24

MPLS Ping FEC: 11.1.1.0/24 : 100 data bytes

100 bytes from 10.1.1.1: Sequence=1 time=1 ms

100 bytes from 10.1.1.1: Sequence=2 time=1 ms

100 bytes from 10.1.1.1: Sequence=3 time=1 ms

100 bytes from 10.1.1.1: Sequence=4 time=1 ms

100 bytes from 10.1.1.1: Sequence=5 time=1 ms

 

--- FEC: 11.1.1.0/24 ping statistics ---

5 packets transmitted, 5 packets received, 0.0% packet loss

round-trip min/avg/max = 1/1/1 ms

LDP FRR configuration example

Network requirements

Switch S, Switch A, and Switch D reside in the same OSPF domain. Configure OSPF FRR so LDP can establish a primary LSP and a backup LSP on the Switch S—Switch D and the Switch S—Switch A—Switch D links, respectively.

When the primary LSP operates correctly, traffic between subnets 11.1.1.0/24 and 21.1.1.0/24 is forwarded through the LSP.

When the primary LSP fails, traffic between the two subnets can be immediately switched to the backup LSP.

Figure 20 Network diagram

 

Requirements analysis

·          To ensure that the LSRs establish LSPs automatically, enable LDP on each LSR.

·          To establish LDP LSPs, configure a routing protocol to ensure IP connectivity between the LSRs. This example uses OSPF.

·          To ensure that LDP establishes LSPs only for the routes 11.1.1.0/24 and 21.1.1.0/24, configure LSP generation policies on each LSR.

·          To allow LDP to establish backup LSPs, configure OSPF FRR on Switch S and Switch D.

Configuration procedure

1.        Configure IP addresses and masks for interfaces, including the loopback interfaces, as shown in Figure 20. (Details not shown.)

2.        Configure OSPF on each switch to ensure IP connectivity between them. (Details not shown.)

3.        Configure OSPF FRR by using one of the following methods:

-  (Method 1.) Enable OSPF FRR to calculate a backup next hop by using the LFA algorithm:

# Configure Switch S.

<SwitchS> system-view

[SwitchS] bfd echo-source-ip 10.10.10.10

[SwitchS] ospf 1

[SwitchS-ospf-1] fast-reroute lfa

[SwitchS-ospf-1] quit

# Configure Switch D.

<SwitchD> system-view

[SwitchD] bfd echo-source-ip 11.11.11.11

[SwitchD] ospf 1

[SwitchD-ospf-1] fast-reroute lfa

[SwitchD-ospf-1] quit

-  (Method 2.) Enable OSPF FRR to specify a backup next hop by using a routing policy:

# Configure Switch S.

<SwitchS> system-view

[SwitchS] bfd echo-source-ip 10.10.10.10

[SwitchS] ip prefix-list abc index 10 permit 21.1.1.0 24

[SwitchS] route-policy frr permit node 10

[SwitchS-route-policy] if-match ip address prefix-list abc

[SwitchS-route-policy] apply fast-reroute backup-interface vlan-interface 12 backup-nexthop 12.12.12.2

[SwitchS-route-policy] quit

[SwitchS] ospf 1

[SwitchS-ospf-1] fast-reroute route-policy frr

[SwitchS-ospf-1] quit

# Configure Switch D.

<SwitchD> system-view

[SwitchD] bfd echo-source-ip 11.11.11.11

[SwitchD] ip prefix-list abc index 10 permit 11.1.1.0 24

[SwitchD] route-policy frr permit node 10

[SwitchD-route-policy] if-match ip address prefix-list abc

[SwitchD-route-policy] apply fast-reroute backup-interface vlan-interface 24 backup-nexthop 24.24.24.2

[SwitchD-route-policy] quit

[SwitchD] ospf 1

[SwitchD-ospf-1] fast-reroute route-policy frr

[SwitchD-ospf-1] quit

4.        Enable MPLS and LDP:

# Configure Switch S.

[SwitchS] mpls lsr-id 1.1.1.1

[SwitchS] mpls ldp

[SwitchS-mpls-ldp] quit

[SwitchS] interface vlan-interface 12

[SwitchS-Vlan-interface12] mpls enable

[SwitchS-Vlan-interface12] mpls ldp enable

[SwitchS-Vlan-interface12] quit

[SwitchS] interface vlan-interface 13

[SwitchS-Vlan-interface13] mpls enable

[SwitchS-Vlan-interface13] mpls ldp enable

[SwitchS-Vlan-interface13] quit

# Configure Switch D.

[SwitchD] mpls lsr-id 3.3.3.3

[SwitchD] mpls ldp

[SwitchD-mpls-ldp] quit

[SwitchD] interface vlan-interface 13

[SwitchD-Vlan-interface13] mpls enable

[SwitchD-Vlan-interface13] mpls ldp enable

[SwitchD-Vlan-interface13] quit

[SwitchD] interface vlan-interface 24

[SwitchD-Vlan-interface24] mpls enable

[SwitchD-Vlan-interface24] mpls ldp enable

[SwitchD-Vlan-interface24] quit

# Configure Switch A.

[SwitchA] mpls lsr-id 2.2.2.2

[SwitchA] mpls ldp

[SwitchA-mpls-ldp] quit

[SwitchA] interface vlan-interface 12

[SwitchA-Vlan-interface12] mpls enable

[SwitchA-Vlan-interface12] mpls ldp enable

[SwitchA-Vlan-interface12] quit

[SwitchA] interface vlan-interface 24

[SwitchA-Vlan-interface24] mpls enable

[SwitchA-Vlan-interface24] mpls ldp enable

[SwitchA-Vlan-interface24] quit

5.        Configure LSP generation policies so LDP can use all static routes and IGP routes to establish LSPs:

# Configure Switch S.

[SwitchS] mpls ldp

[SwitchS-ldp] lsp-trigger all

[SwitchS-ldp] quit

# Configure Switch D.

[SwitchD] mpls ldp

[SwitchD-ldp] lsp-trigger all

[SwitchD-ldp] quit

# Configure Switch A.

[SwitchA] mpls ldp

[SwitchA-ldp] lsp-trigger all

[SwitchA-ldp] quit

Verifying the configuration

# Verify that primary and backup LSPs have been established on Switch S.

[SwitchS] display mpls ldp lsp 21.1.1.0 24

Status Flags: * - stale, L - liberal, B - backup

Statistics:

  FECs: 1      Ingress LSPs: 2     Transit LSPs: 2     Egress LSPs: 0

 

FEC                In/Out Label        Nexthop         OutInterface

21.1.1.0/24        -/3                 13.13.13.2      Vlan-int13

                   2174/3              13.13.13.2      Vlan-int13

                   -/3(B)              12.12.12.2      Vlan-int12

                   2174/3(B)           12.12.12.2      Vlan-int12

 


Configuring MPLS TE

Overview

TE and MPLS TE

Network congestion can degrade network backbone performance. It might occur when network resources are inadequate or when load distribution is unbalanced. Traffic engineering (TE) is intended to avoid the latter situation, where partial congestion occurs because of improper resource allocation.

TE can make the best use of network resources and avoid uneven load distribution by using the following methods:

·          Real-time monitoring of traffic and traffic load on network elements.

·          Dynamic tuning of traffic management attributes, routing parameters, and resources constraints.

MPLS TE combines the MPLS technology and traffic engineering. It reserves resources by establishing LSP tunnels along the specified paths, allowing traffic to bypass congested nodes to achieve appropriate load distribution.

With MPLS TE, a service provider can deploy traffic engineering on the existing MPLS backbone to provide various services and optimize network resources management.

MPLS TE basic concepts

·          CRLSP—Constraint-based Routed Label Switched Path. To establish a CRLSP, you must configure routing and specify constraints, such as the bandwidth and explicit paths.

·          MPLS TE tunnel—A virtual point-to-point connection from the ingress node to the egress node. Typically, an MPLS TE tunnel consists of one CRLSP. To deploy CRLSP backup or transmit traffic over multiple paths, you need to establish multiple CRLSPs for one class of traffic. In this case, an MPLS TE tunnel consists of a set of CRLSPs. An MPLS TE tunnel is identified by an MPLS TE tunnel interface on the ingress node. When the outgoing interface of a traffic flow is an MPLS TE tunnel interface, the traffic flow is forwarded through the CRLSP of the MPLS TE tunnel.

Static CRLSP establishment

A static CRLSP is established by manually specifying the incoming label, outgoing label, and other constraints on each hop along the path that the traffic travels. Static CRLSPs feature simple configuration, but they cannot automatically adapt to network changes.

For more information about static CRLSPs, see "Configuring a static CRLSP."

Dynamic CRLSP establishment

Dynamic CRLSPs are dynamically established as follows:

1.        An IGP advertises TE attributes for links.

2.        MPLS TE uses the CSPF algorithm to calculate the shortest path to the tunnel destination. The path must meet constraints such as bandwidth and explicit routing.

3.        A label distribution protocol (such as RSVP-TE) advertises labels to establish CRLSPs and reserve bandwidth resources on each node along the calculated path.

Dynamic CRLSPs adapt to network changes and support CRLSP backup and fast reroute, but they require complicated configurations.

Advertising TE attributes

MPLS TE uses extended link state IGPs, such as OSPF and IS-IS, to advertise TE attributes for links.

TE attributes include the maximum bandwidth, maximum reservable bandwidth, non-reserved bandwidth for each priority, and the link attribute. The IGP floods TE attributes on the network. Each node collects the TE attributes of all links on all routers within the local area or at the same level to build up a TE database (TEDB).

Calculating paths

Based on the TEDB, MPLS TE uses the Constraint-based Shortest Path First (CSPF) algorithm, an improved SPF algorithm, to calculate the shortest, TE constraints-compliant path to the tunnel destination.

CSPF first prunes TE constraints-incompliant links from the TEDB. Then it performs SPF calculation to identify the shortest path (a set of LSR addresses) to an egress. CSPF calculation is usually performed on the ingress node of an MPLS TE tunnel.
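Conceptually, the two-stage computation described above (prune, then SPF) can be sketched in a few lines. The following Python sketch is an illustration under simplified assumptions (a single bandwidth constraint, unidirectional links), not device code:

```python
import heapq

def cspf(links, src, dst, need_bw):
    """Prune links whose reservable bandwidth is below the requested
    bandwidth, then run plain shortest path first on what remains.
    links: iterable of (from_node, to_node, igp_cost, reservable_bw)."""
    graph = {}
    for u, v, cost, bw in links:
        if bw >= need_bw:                      # constraint pruning step
            graph.setdefault(u, []).append((v, cost))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # stale heap entry
        for v, cost in graph.get(u, ()):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                            # no constraint-compliant path
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

If the cheapest path to the egress lacks the required reservable bandwidth, the pruning step removes it and the function returns a constraint-compliant detour instead.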

TE constraints include the bandwidth, affinity, setup and holding priorities, and explicit path. They are configured on the ingress node of an MPLS TE tunnel.

·          Bandwidth

Bandwidth constraints specify the class of service and the required bandwidth for the traffic to be forwarded along the MPLS TE tunnel. A link complies with the bandwidth constraints when the reservable bandwidth for the class type is greater than or equal to the bandwidth required by the class type.

·          Affinity

Affinity determines which links a tunnel can use. The affinity attribute and its mask, and the link attribute are all 32-bit long. A link is available for a tunnel if the link attribute meets the following requirements:

-  The link attribute bits corresponding to the affinity attribute's 1 bits whose mask bits are 1 must have at least one bit set to 1.

-  The link attribute bits corresponding to the affinity attribute's 0 bits whose mask bits are 1 must have no bit set to 1.

The link attribute bits corresponding to the 0 bits in the affinity mask are not checked.

For example, if the affinity attribute is 0xFFFFFFF0 and its mask is 0x0000FFFF, a link is available for the tunnel when its link attribute bits meet the following requirements: the highest 16 bits each can be 0 or 1 (no requirements), the 17th through 28th bits must have at least one bit whose value is 1, and the lowest four bits must be 0.

·          Setup priority and holding priority

If MPLS TE cannot find a qualified path for an MPLS TE tunnel, it can remove an existing MPLS TE tunnel and preempt its bandwidth to set up the new MPLS TE tunnel.

MPLS TE uses the setup priority and holding priority to make preemption decisions. For a new MPLS TE tunnel to preempt an existing MPLS TE tunnel, the setup priority of the new tunnel must be higher than the holding priority of the existing tunnel. Both setup and holding priorities are in the range of 0 to 7. A smaller value indicates a higher priority.

To avoid flapping caused by improper preemptions, the setup priority of a tunnel must not be higher than its holding priority, namely, the setup priority value must be equal to or greater than the holding priority value.

·          Explicit path

An explicit path specifies the nodes that a tunnel must traverse and the nodes that it must not traverse.

Explicit paths include the following types:

-  Strict explicit path—Among the nodes that the path must traverse, a node and its previous hop must be connected directly.

-  Loose explicit path—Among the nodes that the path must traverse, a node and its previous hop can be connected indirectly.

A strict explicit path precisely specifies the path that an MPLS TE tunnel must traverse, and a loose explicit path specifies it only approximately. The two types can be used together to specify that some nodes are directly connected and some nodes have other nodes in between.
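The affinity and priority rules above reduce to simple bit and integer tests. The following Python sketch is illustrative only; the function names are hypothetical, not CLI commands. With the earlier example values (affinity 0xFFFFFFF0, mask 0x0000FFFF), a link attribute of 0x00000010 passes while 0x00000001 fails, matching the worked example:

```python
def link_matches_affinity(affinity, mask, link_attr):
    """Apply the two affinity rules: among the checked bits (mask bit 1),
    at least one link bit matching an affinity 1 bit must be set, and no
    link bit matching an affinity 0 bit may be set."""
    checked_ones = affinity & mask     # checked positions where affinity is 1
    checked_zeros = ~affinity & mask   # checked positions where affinity is 0
    if link_attr & checked_zeros:
        return False                   # a forbidden bit is set
    if checked_ones and not (link_attr & checked_ones):
        return False                   # none of the required bits is set
    return True

def can_preempt(new_setup_prio, existing_hold_prio):
    """Priorities are 0 to 7; a smaller value means a higher priority.
    A new tunnel preempts an existing one only if its setup priority is
    higher (numerically smaller) than the existing holding priority."""
    return new_setup_prio < existing_hold_prio
```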

Setting up a CRLSP through RSVP-TE

After calculating a path by using CSPF, MPLS TE uses a label distribution protocol to set up the CRLSP and reserves resources on each node of the path.

The device supports the label distribution protocol of RSVP-TE for MPLS TE. Resource Reservation Protocol (RSVP) reserves resources on each node along a path. Extended RSVP can support MPLS label distribution and allow resource reservation information to be transmitted with label bindings. This extended RSVP is called RSVP-TE.

For more information about RSVP, see "Configuring RSVP."

Traffic forwarding

After an MPLS TE tunnel is established, traffic is not forwarded on the tunnel automatically. You must direct the traffic to the tunnel by using one of the following methods.

Static routing

You can direct traffic to an MPLS TE tunnel by creating a static route that reaches the destination through the tunnel interface. This is the easiest way to implement MPLS TE tunnel forwarding. However, when traffic to multiple networks must be forwarded through the MPLS TE tunnel, you must configure multiple static routes, which complicates configuration and maintenance.

For more information about static routing, see Layer 3—IP Routing Configuration Guide.

Automatic route advertisement

You can also configure automatic route advertisement to forward traffic through an MPLS TE tunnel. Automatic route advertisement distributes the MPLS TE tunnel to the IGP (OSPF or IS-IS), so the MPLS TE tunnel can participate in IGP routing calculation. Automatic route advertisement is easy to configure and maintain.

Automatic route advertisement can be implemented by using the following methods:

·          IGP shortcut—Also known as AutoRoute Announce. It considers the MPLS TE tunnel as a link that directly connects the tunnel ingress node and the egress node. Only the ingress node uses the MPLS TE tunnel during IGP route calculation.

·          Forwarding adjacency—Considers the MPLS TE tunnel as a link that directly connects the tunnel ingress node and the egress node and advertises the link to the network through an IGP, so every node in the network uses the MPLS TE tunnel during IGP route calculation.

Figure 21 IGP shortcut and forwarding adjacency diagram

 

As shown in Figure 21, an MPLS TE tunnel is present from Router D to Router C. IGP shortcut enables only the ingress node Router D to use the MPLS TE tunnel in the IGP route calculation. Router A cannot use this tunnel to reach Router C. With forwarding adjacency enabled, Router A can learn this MPLS TE tunnel and transfer traffic to Router C by forwarding the traffic to Router D.

Make-before-break

Make-before-break is a mechanism to change an MPLS TE tunnel with minimum data loss and without using extra bandwidth.

In cases of tunnel reoptimization and automatic bandwidth adjustment, traffic forwarding is interrupted if the existing CRLSP is removed before a new CRLSP is established. The make-before-break mechanism makes sure the existing CRLSP is removed only after the new CRLSP has been established and traffic has been switched to it. However, if the old and new CRLSPs share some links, reserving bandwidth on those links for both CRLSPs separately wastes bandwidth resources. The make-before-break mechanism uses the SE resource reservation style to address this problem.

The resource reservation style refers to the style in which RSVP-TE reserves bandwidth resources during CRLSP establishment. The resource reservation style used by an MPLS TE tunnel is determined by the ingress node, and is advertised to other nodes through RSVP.

The device supports the following resource reservation styles:

·          FF—Fixed-filter, where resources are reserved for individual senders and cannot be shared among senders on the same session.

·          SE—Shared-explicit, where resources are reserved for senders on the same session and shared among them. SE is mainly used for make-before-break.

Figure 22 Diagram for make-before-break

 

As shown in Figure 22, a CRLSP with 30 M reserved bandwidth has been set up from Router A to Router D through the path Router A—Router B—Router C—Router D.

To increase the reserved bandwidth to 40 M, a new CRLSP must be set up through the path Router A—Router E—Router C—Router D. To achieve this purpose, RSVP-TE needs to reserve 30 M bandwidth for the old CRLSP and 40 M bandwidth for the new CRLSP on the link Router C—Router D, but the link bandwidth is not enough.

Using the make-before-break mechanism, the new CRLSP can share the bandwidth reserved for the old CRLSP. After the new CRLSP is set up, traffic is switched to the new CRLSP without service interruption, and then the old CRLSP is removed.

Route pinning

Route pinning enables CRLSPs to always use the original optimal path even if a new optimal route has been learned.

On a network where route changes frequently occur, you can use route pinning to avoid re-establishing CRLSPs upon route changes.

Tunnel reoptimization

Tunnel reoptimization allows you to manually or dynamically trigger the ingress node to recalculate a path. If the ingress node recalculates a better path, it creates a new CRLSP, switches traffic from the old CRLSP to the new, and then deletes the old CRLSP.

MPLS TE uses the tunnel reoptimization function to implement dynamic CRLSP optimization. For example, when MPLS TE sets up a tunnel, if a link on the optimal path does not have enough reservable bandwidth, MPLS TE sets up the tunnel on another path. When the link regains enough bandwidth, the tunnel reoptimization function can switch the MPLS TE tunnel back to the optimal path.

Automatic bandwidth adjustment

Because users cannot accurately estimate how much traffic they need to transmit through a service provider network, the service provider should be able to do the following:

·          Create MPLS TE tunnels with the bandwidth initially requested by the users.

·          Automatically tune the bandwidth resources when user traffic increases.

MPLS TE uses the automatic bandwidth adjustment function to meet this requirement. After automatic bandwidth adjustment is enabled, the device periodically samples the output rate of the tunnel and computes the average output rate within each sampling interval. When the auto bandwidth adjustment frequency timer expires, MPLS TE resizes the tunnel bandwidth to the maximum average output rate sampled during the adjustment period and sets up a new CRLSP with that bandwidth. If the new CRLSP is set up successfully, MPLS TE switches traffic to the new CRLSP and removes the old CRLSP.

You can use a command to limit the maximum and minimum bandwidth. If the tunnel bandwidth calculated by auto bandwidth adjustment is greater than the maximum bandwidth, MPLS TE uses the maximum bandwidth to set up the new CRLSP. If it is smaller than the minimum bandwidth, MPLS TE uses the minimum bandwidth to set up the new CRLSP.
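The bandwidth chosen for the new CRLSP is simply the sampled peak average rate clamped to the configured limits. A minimal Python sketch (illustrative, with hypothetical names, not device behavior):

```python
def adjusted_bandwidth(peak_avg_rate, min_bw=None, max_bw=None):
    """Return the bandwidth used to re-signal the CRLSP: the maximum
    average output rate sampled over the adjustment period, clamped to
    the configured maximum and minimum bandwidth when they are set."""
    bw = peak_avg_rate
    if max_bw is not None:
        bw = min(bw, max_bw)   # never exceed the configured maximum
    if min_bw is not None:
        bw = max(bw, min_bw)   # never fall below the configured minimum
    return bw
```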

CRLSP backup

CRLSP backup uses a CRLSP to back up a primary CRLSP. When the ingress detects that the primary CRLSP fails, it switches traffic to the backup CRLSP. When the primary CRLSP recovers, the ingress switches traffic back.

CRLSP backup has the following modes:

·          Hot standby—A backup CRLSP is created immediately after a primary CRLSP is created.

·          Ordinary—A backup CRLSP is created after the primary CRLSP fails.

FRR

Fast reroute (FRR) protects CRLSPs from link and node failures. FRR can implement 50-millisecond CRLSP failover.

After FRR is enabled for an MPLS TE tunnel, once a link or node fails on the primary CRLSP, FRR reroutes the traffic to a bypass tunnel, and the ingress node attempts to set up a new CRLSP. After the new CRLSP is set up, traffic is forwarded on the new CRLSP.

CRLSP backup provides end-to-end path protection for a CRLSP without time limitation. FRR provides quick but temporary protection for a link or node on a CRLSP.

Basic concepts

·          Primary CRLSP—Protected CRLSP.

·          Bypass tunnel—An MPLS TE tunnel used to protect a link or node of the primary CRLSP.

·          Point of local repair—A PLR is the ingress node of the bypass tunnel. It must be located on the primary CRLSP but must not be the egress node of the primary CRLSP.

·          Merge point—An MP is the egress node of the bypass tunnel. It must be located on the primary CRLSP but must not be the ingress node of the primary CRLSP.

Protection modes

FRR provides the following protection modes:

·          Link protection—The PLR and the MP are connected through a direct link and the primary CRLSP traverses this link. When the link fails, traffic is switched to the bypass tunnel. As shown in Figure 23, the primary CRLSP is Router A—Router B—Router C—Router D, and the bypass tunnel is Router B—Router F—Router C. This mode is also called next-hop (NHOP) protection.

Figure 23 FRR link protection

 

·          Node protection—The PLR and the MP are connected through a device and the primary CRLSP traverses this device. When the device fails, traffic is switched to the bypass tunnel. As shown in Figure 24, the primary CRLSP is Router A—Router B—Router C—Router D—Router E, and the bypass tunnel is Router B—Router F—Router D. Router C is the protected device. This mode is also called next-next-hop (NNHOP) protection.

Figure 24 FRR node protection

 

DiffServ-aware TE

DiffServ is a model that provides differentiated QoS guarantees based on class of service. MPLS TE is a traffic engineering solution that focuses on optimizing network resources allocation.

DiffServ-aware TE (DS-TE) combines DiffServ and TE to optimize network resources allocation on a per-service class basis. DS-TE defines different bandwidth constraints for class types. It maps each traffic class type to the CRLSP that is constraint-compliant for the class type.

The device supports these DS-TE modes:

·          Prestandard mode—H3C proprietary DS-TE.

·          IETF mode—Complies with RFC 4124, RFC 4125, and RFC 4127.

Basic concepts

·          CT—Class Type. DS-TE allocates link bandwidth, implements constraint-based routing, and performs admission control on a per class type basis. A given traffic flow belongs to the same CT on all links.

·          BC—Bandwidth Constraint. BC restricts the bandwidth for one or more CTs.

·          Bandwidth constraint model—Algorithm for implementing bandwidth constraints on different CTs. A BC model comprises two factors, the maximum number of BCs (MaxBC) and the mappings between BCs and CTs. DS-TE supports two BC models, Russian Dolls Model (RDM) and Maximum Allocation Model (MAM).

·          TE class—Defines a CT and a priority. The setup priority or holding priority of an MPLS TE tunnel for a CT must be the same as the priority of the TE class.

The prestandard and IETF modes of DS-TE have the following differences:

·          The prestandard mode supports two CTs (CT 0 and CT 1), eight priorities, and up to 16 TE classes. The IETF mode supports four CTs (CT 0 through CT 3), eight priorities, and up to eight TE classes.

·          The prestandard mode does not allow you to configure TE classes. The IETF mode allows for TE class configuration.

·          The prestandard mode supports only RDM. The IETF mode supports both RDM and MAM.

·          A device operating in prestandard mode cannot communicate with devices from some vendors. A device operating in IETF mode can communicate with devices from other vendors.

How DS-TE operates

A device takes the following steps to establish an MPLS TE tunnel for a CT:

1.        Determines the CT.

A device classifies traffic according to your configuration:

-  When configuring a dynamic MPLS TE tunnel, you can use the mpls te bandwidth command on the tunnel interface to specify a CT for the traffic to be forwarded by the tunnel.

-  When configuring a static MPLS TE tunnel, you can use the bandwidth keyword to specify a CT for the traffic to be forwarded along the tunnel.

2.        Checks whether bandwidth is enough for the CT.

You can use the mpls te max-reservable-bandwidth command on an interface to configure the bandwidth constraints of the interface. The device determines whether the bandwidth is enough to establish an MPLS TE tunnel for the CT.

The relationship between BCs and CTs depends on the BC model:

In the RDM model, a BC constrains the total bandwidth of multiple CTs, as shown in Figure 25:

·          BC 2 is for CT 2. The total bandwidth for CT 2 cannot exceed BC 2.

·          BC 1 is for CT 2 and CT 1. The total bandwidth for CT 2 and CT 1 cannot exceed BC 1.

·          BC 0 is for CT 2, CT 1, and CT 0. The total bandwidth for CT 2, CT 1, and CT 0 cannot exceed BC 0. In this model, BC 0 equals the maximum reservable bandwidth of the link.

In cooperation with priority preemption, the RDM model can also implement bandwidth isolation between CTs. RDM is suitable for networks where traffic is unstable and traffic bursts might occur.

Figure 25 RDM bandwidth constraints model

 

In the MAM model, a BC constrains the bandwidth for only one CT. This ensures bandwidth isolation among CTs whether or not preemption is used. Compared with RDM, MAM is easier to configure. MAM is suitable for networks where the traffic of each CT is stable and no traffic bursts occur. Figure 26 shows an example:

·          BC 0 is for CT 0. The bandwidth occupied by the traffic of CT 0 cannot exceed BC 0.

·          BC 1 is for CT 1. The bandwidth occupied by the traffic of CT 1 cannot exceed BC 1.

·          BC 2 is for CT 2. The bandwidth occupied by the traffic of CT 2 cannot exceed BC 2.

·          The total bandwidth occupied by CT 0, CT 1, and CT 2 cannot exceed the maximum reservable bandwidth.

Figure 26 MAM bandwidth constraints model

 
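The two BC models differ only in which sums they constrain. The following Python sketch contrasts the RDM and MAM admission checks; it is an illustration with hypothetical helper names, assuming three CTs (CT 0 through CT 2) and bandwidth in arbitrary units:

```python
def rdm_admit(reserved, bc, ct, bw):
    """Russian Dolls Model: bc[k] caps the total bandwidth of CT k and
    all higher-numbered CTs, and bc[0] equals the link's maximum
    reservable bandwidth. reserved[i] is the bandwidth in use by CT i.
    A request of bw for class ct must satisfy bc[0] through bc[ct]."""
    return all(sum(reserved[k:]) + bw <= bc[k] for k in range(ct + 1))

def mam_admit(reserved, bc, max_reservable, ct, bw):
    """Maximum Allocation Model: bc[i] caps CT i alone; the sum over all
    CTs must also stay within the maximum reservable bandwidth."""
    return (reserved[ct] + bw <= bc[ct]
            and sum(reserved) + bw <= max_reservable)
```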

3.        Checks whether the CT and the LSP setup/holding priority match an existing TE class.

An MPLS TE tunnel can be established for the CT only when the following conditions are met:

-  Every node along the tunnel has a TE class that matches the CT and the LSP setup priority.

-  Every node along the tunnel has a TE class that matches the CT and the LSP holding priority.
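The TE class check in step 3 amounts to two set-membership tests. A minimal Python sketch (hypothetical names, for illustration only):

```python
def te_class_match(te_classes, ct, setup_prio, hold_prio):
    """te_classes: set of (CT, priority) pairs configured on a node.
    The tunnel is admissible only if both the (CT, setup priority) and
    (CT, holding priority) pairs match configured TE classes."""
    return ((ct, setup_prio) in te_classes
            and (ct, hold_prio) in te_classes)
```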

Bidirectional MPLS TE tunnel

MPLS Transport Profile (MPLS-TP) uses bidirectional MPLS TE tunnels to implement 1:1 and 1+1 protection switching and support in-band detection tools and signaling protocols such as OAM and PSC.

A bidirectional MPLS TE tunnel includes a pair of CRLSPs in opposite directions. It can be established in the following modes:

·          Co-routed mode—Uses the extended RSVP-TE protocol to establish a bidirectional MPLS TE tunnel. RSVP-TE uses a Path message to advertise the labels assigned by the upstream LSR to the downstream LSR, and a Resv message to advertise the labels assigned by the downstream LSR to the upstream LSR. During the delivery of the Path message, a CRLSP in one direction is established. During the delivery of the Resv message, a CRLSP in the other direction is established. The CRLSPs of a bidirectional MPLS TE tunnel established in co-routed mode use the same path.

·          Associated mode—In this mode, you establish a bidirectional MPLS TE tunnel by binding two unidirectional CRLSPs in opposite directions. The two CRLSPs can be established in different modes and use different paths. For example, one CRLSP is established statically and the other is established dynamically by RSVP-TE.

For more information about establishing MPLS TE tunnel through RSVP-TE, the Path message, and the Resv message, see "Configuring RSVP."

Protocols and standards

·          RFC 2702, Requirements for Traffic Engineering Over MPLS

·          RFC 3564, Requirements for Support of Differentiated Service-aware MPLS Traffic Engineering

·          RFC 4124, Protocol Extensions for Support of Diffserv-aware MPLS Traffic Engineering

·          RFC 4125, Maximum Allocation Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering

·          RFC 4127, Russian Dolls Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering

·          ITU-T Recommendation Y.1720, Protection switching for MPLS networks

Feature and software version compatibility

The MPLS TE feature is available in Release 1138P01 and later versions.

MPLS TE configuration task list

To configure an MPLS TE tunnel to use a static CRLSP, complete the following tasks:

1.        Enable MPLS TE on each node and interface that the MPLS TE tunnel traverses.

2.        Create a tunnel interface on the ingress node of the MPLS TE tunnel, and specify the tunnel destination address (the address of the egress node).

3.        Create a static CRLSP on each node that the MPLS TE tunnel traverses.

For information about creating a static CRLSP, see "Configuring a static CRLSP."

4.        On the ingress node of the MPLS TE tunnel, configure the tunnel interface to reference the created static CRLSP.

5.        On the ingress node of the MPLS TE tunnel, configure static routing or automatic route advertisement to direct traffic to the MPLS TE tunnel.

To configure an MPLS TE tunnel to use a CRLSP dynamically established by RSVP-TE, complete the following tasks:

1.        Enable MPLS TE and RSVP on each node and interface that the MPLS TE tunnel traverses.

For information about enabling RSVP, see "Configuring RSVP."

2.        Create a tunnel interface on the ingress node of the MPLS TE tunnel, specify the tunnel destination address (the address of the egress node), and configure the MPLS TE tunnel constraints (such as the tunnel bandwidth constraints and affinity) on the tunnel interface.

3.        Configure the link TE attributes (such as the maximum link bandwidth and link attribute) on each interface that the MPLS TE tunnel traverses.

4.        Configure an IGP on each node that the MPLS TE tunnel traverses, and configure the IGP to support MPLS TE, so that the nodes advertise the link TE attributes through the IGP.

5.        On the ingress node of the MPLS TE tunnel, configure RSVP-TE to establish a CRLSP based on the tunnel constraints and link TE attributes.

6.        On the ingress node of the MPLS TE tunnel, configure static routing or automatic route advertisement to direct traffic to the MPLS TE tunnel.

You can also configure other MPLS TE functions, such as DS-TE, automatic bandwidth adjustment, and FRR, as needed.

To configure MPLS TE, perform the following tasks:

 

Tasks at a glance

(Required.) Enabling MPLS TE

(Required.) Configuring a tunnel interface

(Optional.) Configuring DS-TE

(Required.) Perform at least one of the following tasks to configure an MPLS TE tunnel:

·         Configuring an MPLS TE tunnel to use a static CRLSP

·         Configuring an MPLS TE tunnel to use a dynamic CRLSP

(Required.) Configuring traffic forwarding:

·         Configuring static routing to direct traffic to an MPLS TE tunnel

·         Configuring automatic route advertisement to direct traffic to an MPLS TE tunnel

(Optional.) Configuring a bidirectional MPLS TE tunnel

(Optional.) Configuring CRLSP backup

Only MPLS TE tunnels established by RSVP-TE support this configuration.

(Optional.) Configuring MPLS TE FRR

Only MPLS TE tunnels established by RSVP-TE support this configuration.

 

Enabling MPLS TE

Enable MPLS TE on each node and interface that the MPLS TE tunnel traverses.

Before you enable MPLS TE, complete the following tasks:

·          Configure static routing or an IGP to make sure all LSRs can reach each other.

·          Enable MPLS. For information about enabling MPLS, see "Configuring basic MPLS."

To enable MPLS TE:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable MPLS TE and enter MPLS TE view.

mpls te

By default, MPLS TE is disabled.

3.       Return to system view.

quit

N/A

4.       Enter interface view.

interface interface-type interface-number

N/A

5.       Enable MPLS TE for the interface.

mpls te enable

By default, MPLS TE is disabled on an interface.
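For example, to enable MPLS TE globally and then on a traversed interface (the device name and interface number below are illustrative, not required values):

```
<Sysname> system-view
[Sysname] mpls te
[Sysname-te] quit
[Sysname] interface vlan-interface 10
[Sysname-Vlan-interface10] mpls te enable
```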

 

Configuring a tunnel interface

To configure an MPLS TE tunnel, you must create an MPLS TE tunnel interface and enter tunnel interface view. All MPLS TE tunnel attributes are configured in tunnel interface view.

Perform this task on the ingress node of the MPLS TE tunnel.

To configure a tunnel interface:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an MPLS TE tunnel interface and enter tunnel interface view.

interface tunnel tunnel-number mode mpls-te

By default, no tunnel interface is created.

3.       Configure an IP address for the tunnel interface.

ip address ip-address { mask-length | mask }

By default, a tunnel interface does not have an IP address.

4.       Specify the tunnel destination address.

destination ip-address

By default, no tunnel destination address is specified.
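The following sketch creates a tunnel interface on the ingress node (the tunnel number, IP address, and destination address are examples; the destination is typically the LSR ID of the egress node):

```
<Sysname> system-view
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] ip address 10.1.1.1 24
[Sysname-Tunnel1] destination 3.3.3.3
```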

 

Configuring DS-TE

DS-TE is configurable on any node that an MPLS TE tunnel traverses.

To configure DS-TE:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE view.

mpls te

N/A

3.       (Optional.) Configure the DS-TE mode as IETF.

ds-te mode ietf

By default, the DS-TE mode is prestandard.

4.       (Optional.) Configure the BC model of IETF DS-TE as MAM.

ds-te bc-model mam

By default, the BC model of IETF DS-TE is RDM.

5.       Configure a TE class.

ds-te te-class te-class-index class-type class-type-number priority pri-number

The default TE classes for IETF mode are shown in Table 1.

In prestandard mode, you cannot configure TE classes.

 

Table 1 Default TE classes in IETF mode

TE Class

CT

Priority

0

0

7

1

1

7

2

2

7

3

3

7

4

0

0

5

1

0

6

2

0

7

3

0
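For example, to switch to IETF DS-TE with the MAM BC model and define a TE class (the TE class index, CT, and priority values are illustrative):

```
<Sysname> system-view
[Sysname] mpls te
[Sysname-te] ds-te mode ietf
[Sysname-te] ds-te bc-model mam
[Sysname-te] ds-te te-class 0 class-type 1 priority 5
```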

 

Configuring an MPLS TE tunnel to use a static CRLSP

To configure an MPLS TE tunnel to use a static CRLSP, perform the following tasks:

·          Establish the static CRLSP.

·          Specify the MPLS TE tunnel establishment mode as static.

·          Configure the MPLS TE tunnel to reference the static CRLSP.

Other configurations, such as tunnel constraints and IGP extension, are not needed.

To configure an MPLS TE tunnel to use a static CRLSP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a static CRLSP.

See "Configuring a static CRLSP."

N/A

3.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

Execute this command on the ingress node.

4.       Specify the MPLS TE tunnel establishment mode as static.

mpls te signaling static

By default, MPLS TE uses RSVP-TE to establish a tunnel.

5.       Apply the static CRLSP to the tunnel interface.

mpls te static-cr-lsp lsp-name

By default, a tunnel does not reference any static CRLSP.
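A minimal sketch on the ingress node, assuming a static CRLSP named static-cr-lsp-1 has already been created as described in "Configuring a static CRLSP" (the tunnel number and CRLSP name are examples):

```
<Sysname> system-view
[Sysname] interface tunnel 2 mode mpls-te
[Sysname-Tunnel2] mpls te signaling static
[Sysname-Tunnel2] mpls te static-cr-lsp static-cr-lsp-1
```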

 

Configuring an MPLS TE tunnel to use a dynamic CRLSP

To configure an MPLS TE tunnel to use a CRLSP dynamically established by RSVP-TE, complete the following tasks:

·          Configure MPLS TE attributes for the links.

·          Configure IGP TE extension to advertise link TE attributes, so as to generate a TEDB on each node.

·          Configure tunnel constraints.

·          Establish the CRLSP by using the signaling protocol RSVP-TE.

You must configure the IGP TE extension to form a TEDB. Otherwise, the path is created based on IGP routing rather than computed by CSPF.

Configuration task list

To establish an MPLS TE tunnel by using a dynamic CRLSP:

 

Tasks at a glance

(Required.) Configuring MPLS TE attributes for a link

(Required.) Advertising link TE attributes by using IGP TE extension

(Required.) Configuring MPLS TE tunnel constraints

(Required.) Establishing an MPLS TE tunnel by using RSVP-TE

(Optional.) Controlling CRLSP path selection

(Optional.) Controlling MPLS TE tunnel setup

 

Configuring MPLS TE attributes for a link

MPLS TE attributes for a link include the maximum link bandwidth, the maximum reservable bandwidth, and the link attribute.

Perform this task on each interface that the MPLS TE tunnel traverses.

To configure the link TE attributes:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the maximum link bandwidth for MPLS TE traffic.

mpls te max-link-bandwidth bandwidth-value

By default, the maximum link bandwidth for MPLS TE traffic is 0.

4.       Configure the maximum reservable bandwidth.

·         Configure the maximum reservable bandwidth of the link (BC 0) and BC 1 in RDM model of the prestandard DS-TE:
mpls te max-reservable-bandwidth bandwidth-value [ bc1 bc1-bandwidth ]

·         Configure the maximum reservable bandwidth of the link and the BCs in MAM model of the IETF DS-TE:
mpls te max-reservable-bandwidth mam bandwidth-value { bc0 bc0-bandwidth | bc1 bc1-bandwidth | bc2 bc2-bandwidth | bc3 bc3-bandwidth } *

·         Configure the maximum reservable bandwidth of the link and the BCs in RDM model of the IETF DS-TE:
mpls te max-reservable-bandwidth rdm bandwidth-value [ bc1 bc1-bandwidth ] [ bc2 bc2-bandwidth ] [ bc3 bc3-bandwidth ]

Use one command according to the DS-TE mode and BC model configured in "Configuring DS-TE."

By default, the maximum reservable bandwidth of a link is 0 kbps and each BC is 0 kbps.

In RDM model, BC 0 is the maximum reservable bandwidth of a link.

5.       Configure the link attribute.

mpls te link-attribute attribute-value

By default, the link attribute value is 0x00000000.
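For example, with prestandard RDM DS-TE (the default), you might configure the following on a traversed interface (the bandwidth values are in kbps and, like the interface number and attribute value, are illustrative):

```
<Sysname> system-view
[Sysname] interface vlan-interface 10
[Sysname-Vlan-interface10] mpls te max-link-bandwidth 10000
[Sysname-Vlan-interface10] mpls te max-reservable-bandwidth 5000
[Sysname-Vlan-interface10] mpls te link-attribute 0x00000101
```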

 

Advertising link TE attributes by using IGP TE extension

Both OSPF and IS-IS are extended to advertise link TE attributes. The extensions are called OSPF TE and IS-IS TE. If both OSPF TE and IS-IS TE are available, OSPF TE takes precedence.

Configuring OSPF TE

OSPF TE uses Type-10 opaque LSAs to carry the TE attributes for a link. Before you configure OSPF TE, you must enable opaque LSA advertisement and reception by using the opaque-capability enable command. For more information about opaque LSA advertisement and reception, see Layer 3—IP Routing Configuration Guide.

MPLS TE cannot reserve resources and distribute labels for an OSPF virtual link, and cannot establish a CRLSP through an OSPF virtual link. Therefore, make sure no virtual link exists in an OSPF area before you configure MPLS TE.

To configure OSPF TE:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter OSPF view.

ospf [ process-id ]

N/A

3.       Enable opaque LSA advertisement and reception.

opaque-capability enable

By default, opaque LSA advertisement and reception are enabled.

For more information about this command, see Layer 3—IP Routing Command Reference.

4.       Enter area view.

area area-id

N/A

5.       Enable MPLS TE for the OSPF area.

mpls te enable

By default, an OSPF area does not support MPLS TE.
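For example, to enable OSPF TE in area 0 of OSPF process 1 (the process ID and area ID are examples):

```
<Sysname> system-view
[Sysname] ospf 1
[Sysname-ospf-1] opaque-capability enable
[Sysname-ospf-1] area 0
[Sysname-ospf-1-area-0.0.0.0] mpls te enable
```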

 

Configuring IS-IS TE

IS-IS TE uses a sub-TLV of the extended IS reachability TLV (type 22) to carry TE attributes. Because the extended IS reachability TLV carries wide metrics, specify a wide metric-compatible metric style for the IS-IS process before enabling IS-IS TE. Available metric styles for IS-IS TE include wide, compatible, or wide-compatible. For more information about IS-IS, see Layer 3—IP Routing Configuration Guide.

To make sure IS-IS LSPs can be flooded on the network, specify an MTU that is equal to or greater than 512 bytes on each IS-IS enabled interface, because of the following:

·          The length of the extended IS reachability TLV might reach the maximum of 255 bytes.

·          The LSP header takes 27 bytes and the TLV header takes two bytes.

·          The LSP might also carry the authentication information.

To configure IS-IS TE:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an IS-IS process and enter IS-IS view.

isis [ process-id ]

By default, no IS-IS process exists.

3.       Specify a metric style.

cost-style { wide | wide-compatible | compatible [ relax-spf-limit ] }

By default, only narrow metric style packets can be received and sent.

For more information about this command, see Layer 3—IP Routing Command Reference.

4.       Enable MPLS TE for the IS-IS process.

mpls te enable [ Level-1 | Level-2 ]

By default, an IS-IS process does not support MPLS TE.

5.       Specify the types of the sub-TLVs for carrying DS-TE parameters.

te-subtlv { bw-constraint value | unreserved-bw-sub-pool value } *

By default, the bw-constraint parameter is carried in sub-TLV 252, and the unreserved-bw-sub-pool parameter is carried in sub-TLV 251.
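For example, to enable IS-IS TE for IS-IS process 1 with the wide metric style (the process ID is an example, and the default DS-TE sub-TLV types are kept):

```
<Sysname> system-view
[Sysname] isis 1
[Sysname-isis-1] cost-style wide
[Sysname-isis-1] mpls te enable
```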

 

Configuring MPLS TE tunnel constraints

Perform this task on the ingress node of the MPLS TE tunnel.

Configuring bandwidth constraints for an MPLS TE tunnel

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure bandwidth required for the tunnel, and specify a CT for the tunnel's traffic.

mpls te bandwidth [ ct0 | ct1 | ct2 | ct3 ] bandwidth

By default, no bandwidth is assigned, and the class type is CT 0.

 

Configuring the affinity attribute for an MPLS TE tunnel

The associations between the link attribute and the affinity attribute might vary by vendor. To ensure the successful establishment of a tunnel between two devices from different vendors, correctly configure their respective link attribute and affinity attribute.

To configure the affinity attribute for an MPLS TE tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure an affinity for the MPLS TE tunnel.

mpls te affinity-attribute attribute-value [ mask mask-value ]

By default, the affinity is 0x00000000, and the mask is 0x00000000. The default affinity matches all link attributes.

 

Configuring a setup priority and a holding priority for an MPLS TE tunnel

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure a setup priority and a holding priority for the MPLS TE tunnel.

mpls te priority setup-priority [ hold-priority ]

By default, the setup priority and the holding priority are both 7 for an MPLS TE tunnel.
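The three constraints above can be combined on one tunnel interface, for example (all values are illustrative; the bandwidth is in kbps):

```
<Sysname> system-view
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te bandwidth ct0 2000
[Sysname-Tunnel1] mpls te affinity-attribute 0x00000101 mask 0x00000101
[Sysname-Tunnel1] mpls te priority 5 5
```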

 

Configuring an explicit path for an MPLS TE tunnel

An explicit path is a set of nodes. The relationship between any two neighboring nodes on an explicit path can be either strict or loose.

·          Strict—The two nodes must be directly connected.

·          Loose—The two nodes can have devices in between.

When establishing an MPLS TE tunnel between areas or ASs, you must do the following:

·          Use a loose explicit path.

·          Specify the ABR or ASBR as the next hop of the path.

·          Make sure the tunnel's ingress node and the ABR or ASBR can reach each other.

To configure an explicit path for an MPLS TE tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an explicit path and enter its view.

explicit-path path-name

By default, no explicit path exists on the device.

3.       Enable the explicit path.

undo disable

By default, an explicit path is enabled.

4.       Add or modify a node in the explicit path.

nexthop [ index index-number ] ip-address [ exclude | include [ loose | strict ] ]

By default, an explicit path does not include any node.

You can specify the include keyword to have the CRLSP traverse the specified node or the exclude keyword to have the CRLSP bypass the specified node.

5.       Return to system view.

quit

N/A

6.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

7.       Configure the MPLS TE tunnel interface to use the explicit path, and specify a preference value for the explicit path.

mpls te path preference value explicit-path path-name [ no-cspf ]

By default, MPLS TE uses the calculated path to establish a CRLSP.
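For example, to define an explicit path and apply it to a tunnel interface (the path name, node addresses, and preference value are illustrative):

```
<Sysname> system-view
[Sysname] explicit-path path1
[Sysname-explicit-path-path1] nexthop 10.1.1.2 include strict
[Sysname-explicit-path-path1] nexthop 10.1.2.2 include loose
[Sysname-explicit-path-path1] quit
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te path preference 5 explicit-path path1
```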

 

Establishing an MPLS TE tunnel by using RSVP-TE

Before you configure this task, you must use the rsvp command and the rsvp enable command to enable RSVP on all nodes and interfaces that the MPLS TE tunnel traverses.

Perform this task on the ingress node of the MPLS TE tunnel.

To configure RSVP-TE to establish an MPLS TE tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure MPLS TE to use RSVP-TE to establish the tunnel.

mpls te signaling rsvp-te

By default, MPLS TE uses RSVP-TE to establish a tunnel.

4.       Specify an explicit path for the MPLS TE tunnel, and specify the path preference value.

mpls te path preference value { dynamic | explicit-path path-name } [ no-cspf ]

By default, MPLS TE uses the calculated path to establish a CRLSP.
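For example, on the ingress node (the tunnel number and preference value are illustrative; the mpls te signaling rsvp-te command is shown for completeness even though RSVP-TE is the default):

```
<Sysname> system-view
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te signaling rsvp-te
[Sysname-Tunnel1] mpls te path preference 5 dynamic
```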

 

Controlling CRLSP path selection

Before performing the configuration tasks in this section, be aware of each configuration objective and its impact on your device.

MPLS TE uses CSPF to calculate a path according to the TEDB and constraints and sets up the CRLSP through RSVP-TE. MPLS TE provides measures that affect the CSPF calculation. You can use these measures to tune the path selection for CRLSP.

Configuring the metric type for path selection

Each MPLS TE link has two metrics: IGP metric and TE metric. By planning the two metrics, you can select different tunnels for different classes of traffic. For example, use the IGP metric to represent link delay (a smaller IGP metric value indicates a lower link delay), and use the TE metric to represent link bandwidth (a smaller TE metric value indicates a larger link bandwidth).

You can establish two MPLS TE tunnels: Tunnel 1 for voice traffic and Tunnel 2 for video traffic. Configure Tunnel 1 to use IGP metrics for path selection, and configure Tunnel 2 to use TE metrics for path selection. As a result, the video service (with larger traffic) travels through the path that has larger bandwidth, and the voice traffic travels through the path that has lower delay.

To configure the metric type for tunnel path selection:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE view.

mpls te

N/A

3.       Specify the metric type to use when no metric type is explicitly configured for a tunnel.

path-metric-type { igp | te }

By default, a tunnel uses the TE metric for path selection.

Execute this command on the ingress node of an MPLS TE tunnel.

4.       Return to system view.

quit

N/A

5.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

6.       Specify the metric type for path selection.

mpls te path-metric-type { igp | te }

By default, no link metric type is specified and the one specified in MPLS TE view is used.

Execute this command on the ingress node of an MPLS TE tunnel.

7.       Return to system view.

quit

N/A

8.       Enter interface view.

interface interface-type interface-number

N/A

9.       Assign a TE metric to the link.

mpls te metric value

By default, the link uses its IGP metric as the TE metric.

This command is available on every interface that the MPLS TE tunnel traverses.
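For example, to make tunnels use the IGP metric by default, override that choice on one tunnel interface, and assign a TE metric to a link (all numbers are illustrative):

```
<Sysname> system-view
[Sysname] mpls te
[Sysname-te] path-metric-type igp
[Sysname-te] quit
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te path-metric-type te
[Sysname-Tunnel1] quit
[Sysname] interface vlan-interface 10
[Sysname-Vlan-interface10] mpls te metric 20
```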

 

Configuring route pinning

When route pinning is enabled, MPLS TE tunnel reoptimization and automatic bandwidth adjustment are not available.

Perform this task on the ingress node of an MPLS TE tunnel.

To configure route pinning:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable route pinning.

mpls te route-pinning

By default, route pinning is disabled.

 

Configuring tunnel reoptimization

Tunnel reoptimization allows you to manually or dynamically trigger the ingress node to recalculate a path. If the ingress node recalculates a better path, it creates a new CRLSP, switches the traffic from the old CRLSP to the new CRLSP, and then deletes the old CRLSP.

Perform this task on the ingress node of an MPLS TE tunnel.

To configure tunnel reoptimization:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable tunnel reoptimization.

mpls te reoptimization [ frequency seconds ]

By default, tunnel reoptimization is disabled.

4.       Return to user view.

return

N/A

5.       (Optional.) Immediately reoptimize all MPLS TE tunnels that are enabled with the tunnel reoptimization function.

mpls te reoptimization

N/A

 

Configuring TE flooding thresholds and interval

When the bandwidth of an MPLS TE link changes, IGP floods the new bandwidth information, so the ingress node can use CSPF to recalculate the path.

To prevent such recalculations from consuming too many resources, you can configure IGP to flood only significant bandwidth changes by setting the following flooding thresholds:

·          Up threshold—When the increase in reservable bandwidth reaches this percentage of the maximum reservable bandwidth, IGP floods the TE information.

·          Down threshold—When the decrease in reservable bandwidth reaches this percentage of the maximum reservable bandwidth, IGP floods the TE information.

You can also configure the flooding interval at which bandwidth changes that cannot trigger immediate flooding are flooded.

This task can be performed on all nodes that the MPLS TE tunnel traverses.

To configure TE flooding thresholds and the flooding interval:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the up/down threshold.

mpls te bandwidth change thresholds { down | up } percent

By default, the up/down threshold is 10% of the link reservable bandwidth.

4.       Return to system view.

quit

N/A

5.       Enter MPLS TE view.

mpls te

N/A

6.       Configure the flooding interval.

link-management periodic-flooding timer interval

By default, the flooding interval is 180 seconds.

 

Controlling MPLS TE tunnel setup

Before performing the configuration tasks in this section, be aware of each configuration objective and its impact on your device.

Perform the tasks in this section on the ingress node of the MPLS TE tunnel.

Enabling route and label recording

Perform this task to record the nodes that an MPLS TE tunnel traverses and the label assigned by each node. The recorded information shows the path used by the MPLS TE tunnel and the label distribution along it. When the tunnel fails, this information helps you locate the fault.

To enable route and label recording:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Record routes or record both routes and labels.

·         To record routes:
mpls te record-route

·         To record both routes and labels:
mpls te record-route label

By default, both route recording and label recording are disabled.

 

Enabling loop detection

Enabling loop detection also enables the route recording function, regardless of whether you have configured the mpls te record-route command. Loop detection enables each node of the tunnel to detect whether a loop has occurred according to the recorded route information.

To enable loop detection:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable loop detection.

mpls te loop-detection

By default, loop detection is disabled.

 

Configuring tunnel setup retry

If the ingress node fails to establish an MPLS TE tunnel, it waits for the retry interval and then tries to set up the tunnel again. It repeats this process until the tunnel is established or the number of attempts reaches the maximum. If the tunnel still cannot be established, the ingress node waits for a longer period and then repeats the process.

To configure tunnel setup retry:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure maximum number of tunnel setup attempts.

mpls te retry times

By default, the maximum number of attempts is 3.

4.       Configure the retry interval.

mpls te timer retry seconds

By default, the retry interval is 2 seconds.

 

Configuring automatic bandwidth adjustment

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE view.

mpls te

N/A

3.       Enable automatic bandwidth adjustment globally, and configure the output rate sampling interval.

auto-bandwidth enable [ sample-interval seconds ]

By default, the global auto bandwidth adjustment is disabled.

The sampling interval configured in MPLS TE view applies to all MPLS TE tunnels. The output rates of all MPLS TE tunnels are recorded every sampling interval to calculate the actual average bandwidth of each MPLS TE tunnel in one sampling interval.

4.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

5.       Enable automatic bandwidth adjustment or output rate sampling for the MPLS TE tunnel.

·         To enable automatic bandwidth adjustment:
mpls te auto-bandwidth adjustment [ frequency seconds ] [ max-bw max-bandwidth | min-bw min-bandwidth ] *

·         To enable output rate sampling:
mpls te auto-bandwidth collect-bw [ frequency seconds ]

Use either command.

By default, automatic bandwidth adjustment and output rate sampling are disabled for an MPLS TE tunnel.

6.       Return to user view.

return

N/A

7.       (Optional.) Reset the automatic bandwidth adjustment.

reset mpls te auto-bandwidth-adjustment timers

After this command is executed, the system clears the output rate sampling information and the remaining time to the next bandwidth adjustment to start a new output rate sampling and bandwidth adjustment.
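For example, to sample tunnel output rates every 60 seconds and let one tunnel adjust its bandwidth hourly within a range (all values are illustrative; the bandwidths are in kbps):

```
<Sysname> system-view
[Sysname] mpls te
[Sysname-te] auto-bandwidth enable sample-interval 60
[Sysname-te] quit
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te auto-bandwidth adjustment frequency 3600 max-bw 50000 min-bw 1000
```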

 

Configuring RSVP resource reservation style

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure the resource reservation style for the tunnel.

mpls te resv-style { ff | se }

By default, the resource reservation style is SE.

In current MPLS TE applications, tunnels are established usually by using the make-before-break mechanism. As a best practice, use the SE style.

 

Configuring traffic forwarding

Perform the tasks in this section on the ingress node of the MPLS TE tunnel.

Configuring static routing to direct traffic to an MPLS TE tunnel

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure a static route to direct traffic to an MPLS TE tunnel.

For information about static routing commands, see Layer 3—IP Routing Command Reference.

By default, no static route exists on the device.

The interface specified in this command must be an MPLS TE tunnel interface in load sharing mode.

 

Configuring automatic route advertisement to direct traffic to an MPLS TE tunnel

You can use either IGP shortcut or forwarding adjacency to implement automatic route advertisement. When you use IGP shortcut, you can specify a metric for the TE tunnel. If you assign an absolute metric, the metric is directly used as the MPLS TE tunnel's metric. If you assign a relative metric, the MPLS TE tunnel's metric is the assigned metric plus the IGP link metric.

Before configuring automatic route advertisement, perform the following tasks:

·          Enable OSPF or IS-IS on the tunnel interface to advertise the tunnel interface address to OSPF or IS-IS.

·          Enable MPLS TE for an OSPF area or an IS-IS process by executing the mpls te enable command in OSPF area view or IS-IS view.

Follow these restrictions and guidelines when you configure automatic route advertisement:

·          The destination address of the MPLS TE tunnel can be the LSR ID of the egress node or the primary IP address of an interface on the egress node. As a best practice, configure the destination address of the MPLS TE tunnel as the LSR ID of the egress node.

·          If you configure the tunnel destination address as the primary IP address of an interface on the egress node, you must enable MPLS TE, and configure OSPF or IS-IS on that interface. This makes sure the primary IP address of the interface can be advertised to its peer.

·          The route to the tunnel interface address and the route to the tunnel destination must be in the same OSPF area or at the same IS-IS level.

Configuring IGP shortcut

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable IGP shortcut.

mpls te igp shortcut [ isis | ospf ]

By default, IGP shortcut is disabled.

If no IGP is specified, both OSPF and IS-IS will include the MPLS TE tunnel in route calculation.

4.       Assign a metric to the MPLS TE tunnel.

mpls te igp metric { absolute value | relative value }

By default, the metric of an MPLS TE tunnel equals its IGP metric.
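For example, to include the tunnel in OSPF route calculation on the ingress node and make it preferred over the native IGP path (the relative metric value is illustrative):

```
<Sysname> system-view
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te igp shortcut ospf
[Sysname-Tunnel1] mpls te igp metric relative -1
```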

 

Configuring forwarding adjacency

To use forwarding adjacency, you must establish two MPLS TE tunnels in opposite directions between two nodes, and configure forwarding adjacency on both nodes.

To configure forwarding adjacency:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable forwarding adjacency.

mpls te igp advertise [ hold-time value ]

By default, forwarding adjacency is disabled.

 

Configuring a bidirectional MPLS TE tunnel

Before you create a bidirectional MPLS TE tunnel, complete the following tasks:

·          Disable the PHP feature on both ends of the tunnel.

·          To set up a bidirectional MPLS TE tunnel in co-routed mode, you must specify the signaling protocol as RSVP-TE, and use the mpls te resv-style command to configure the resource reservation style as FF for the tunnel.

·          To set up a bidirectional MPLS TE tunnel in associated mode and use RSVP-TE to set up one CRLSP of the tunnel, you must use the mpls te resv-style command to configure the resource reservation style as FF for the CR-LSP.

To create a bidirectional MPLS TE tunnel, create an MPLS TE tunnel interface on both ends of the tunnel and enable the bidirectional tunnel function on the tunnel interfaces:

·          For a co-routed bidirectional tunnel, configure one end of the tunnel as the active end and the other end as the passive end, and specify the reverse CR-LSP at the passive end.

·          For an associated bidirectional tunnel, specify a reverse CR-LSP at both ends of the tunnel.

To configure the active end of a co-routed bidirectional MPLS TE tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure a co-routed bidirectional MPLS TE tunnel and specify the local end as the active end of the tunnel.

mpls te bidirectional co-routed active

By default, no bidirectional tunnel is configured, and tunnels established on the tunnel interface are unidirectional MPLS TE tunnels.

 

To configure the passive end of a co-routed bidirectional MPLS TE tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure a co-routed bidirectional MPLS TE tunnel and specify the local end as the passive end of the tunnel.

mpls te bidirectional co-routed passive reverse-lsp lsr-id ingress-lsr-id tunnel-id tunnel-id

By default, no bidirectional tunnel is configured, and tunnels established on the tunnel interface are unidirectional MPLS TE tunnels.

 

To configure an associated bidirectional MPLS TE tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Configure an associated bidirectional MPLS TE tunnel.

mpls te bidirectional associated reverse-lsp { lsp-name lsp-name | lsr-id ingress-lsr-id tunnel-id tunnel-id }

By default, no bidirectional tunnel is configured, and tunnels established on the tunnel interface are unidirectional MPLS TE tunnels.
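The following sketch sets up a co-routed bidirectional tunnel (the device names, tunnel numbers, and reverse-LSP identifiers are illustrative; the FF reservation style is required, as noted earlier):

```
# On the active end:
<SysnameA> system-view
[SysnameA] interface tunnel 1 mode mpls-te
[SysnameA-Tunnel1] mpls te resv-style ff
[SysnameA-Tunnel1] mpls te bidirectional co-routed active

# On the passive end, identify the reverse CRLSP by the active
# end's ingress LSR ID and tunnel ID:
<SysnameB> system-view
[SysnameB] interface tunnel 1 mode mpls-te
[SysnameB-Tunnel1] mpls te resv-style ff
[SysnameB-Tunnel1] mpls te bidirectional co-routed passive reverse-lsp lsr-id 1.1.1.1 tunnel-id 1
```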

 

Configuring CRLSP backup

CRLSP backup provides end-to-end CRLSP protection. Only MPLS TE tunnels established through RSVP-TE support CRLSP backup.

Perform this task on the ingress node of an MPLS TE tunnel.

To configure CRLSP backup:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE tunnel interface view.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable CRLSP backup and specify the backup mode.

mpls te backup { hot-standby | ordinary }

By default, tunnel backup is disabled.

4.       Specify a path for the primary CRLSP and set the preference of the path.

mpls te path preference value { dynamic | explicit-path path-name } [ no-cspf ]

By default, MPLS TE uses the dynamically calculated path to set up the primary CRLSP.

5.       Specify a path for the backup CRLSP and set the preference of the path.

mpls te backup-path preference value { dynamic | explicit-path path-name } [ no-cspf ]

By default, MPLS TE uses the dynamically calculated path to set up the backup CRLSP.
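For example, to enable hot-standby backup on the ingress node, assuming explicit paths named path-pri and path-bak have already been created (the path names and preference values are illustrative):

```
<Sysname> system-view
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te backup hot-standby
[Sysname-Tunnel1] mpls te path preference 10 explicit-path path-pri
[Sysname-Tunnel1] mpls te backup-path preference 10 explicit-path path-bak
```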

 

Configuring MPLS TE FRR

MPLS TE FRR provides temporary link or node protection on a CRLSP. When you configure FRR, note the following restrictions and guidelines:

·          Do not configure both FRR and RSVP authentication on the same interface.

·          Only MPLS TE tunnels established through RSVP-TE support FRR.

Enabling FRR

Perform this task on the ingress node of a primary CRLSP.

To enable FRR:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter tunnel interface view of the primary CRLSP.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Enable FRR.

mpls te fast-reroute [ bandwidth ]

By default, FRR is disabled.

If you specify the bandwidth keyword, the primary CRLSP must have bandwidth protection.
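For example, to enable FRR with bandwidth protection on the primary CRLSP's tunnel interface (the tunnel number is illustrative):

```
<Sysname> system-view
[Sysname] interface tunnel 1 mode mpls-te
[Sysname-Tunnel1] mpls te fast-reroute bandwidth
```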

 

Configuring a bypass tunnel on the PLR

Overview

To configure FRR, you must configure bypass tunnels for primary CRLSPs on the PLR.

To configure bypass tunnels on the PLR, you can use the following methods:

·          Manually configuring a bypass tunnel on the PLR—Create an MPLS TE tunnel on the PLR, and configure the tunnel as a bypass tunnel for a primary CRLSP. You must specify the bandwidth and CT that the bypass tunnel can protect, and bind the bypass tunnel to the egress interface of the primary CRLSP.

You can configure up to three bypass tunnels for a primary CRLSP.

·          Configuring the PLR to set up bypass tunnels automatically—Configure the automatic bypass tunnel setup function (also referred to as the auto FRR function) on the PLR. The PLR automatically sets up two bypass tunnels for each of its primary CRLSPs: one in link protection mode and the other in node protection mode. Automatically created bypass tunnels can be used to protect any type of CT, but they cannot provide bandwidth protection.

A primary tunnel can have both manually configured and automatically created bypass tunnels. The PLR will select one bypass tunnel to protect the primary CRLSP. The selected bypass tunnel is bound to the primary CRLSP.

Manually created bypass tunnels take precedence over automatically created bypass tunnels. An automatically created bypass tunnel in node protection mode takes precedence over an automatically created bypass tunnel in link protection mode. Among manually created bypass tunnels, the PLR selects the bypass tunnel for protecting the primary CRLSP by following these rules:

1.        Selects a bypass tunnel according to the principles shown in Table 2.

2.        Prefers the bypass tunnel in node protection mode over the one in link protection mode.

3.        Prefers the bypass tunnel with a smaller tunnel ID over the one with a bigger tunnel ID.

Table 2 FRR protection principles

Bandwidth required by primary CRLSP

Primary CRLSP requires bandwidth protection or not

Bypass tunnel providing bandwidth protection

Bypass tunnel providing no bandwidth protection

0

Yes

The primary CRLSP cannot be bound to the bypass tunnel.

The primary CRLSP can be bound to the bypass tunnel if CT 0 or no CT is specified for the bypass tunnel.

After binding, the RRO message does not carry the bandwidth protection flag. The bypass tunnel does not provide bandwidth protection for the primary CRLSP, and performs best-effort forwarding for traffic of the primary CRLSP.

No

Non-zero

Yes

The primary CRLSP can be bound to the bypass tunnel when all the following conditions are met:

·         The bandwidth that the bypass tunnel can protect is no less than the bandwidth required by the primary CRLSP.

·         No CT is specified for the bypass tunnel, or the specified CT is the same as that specified for the primary CRLSP.

After binding, the RRO message carries the bandwidth protection flag, and the bypass tunnel provides bandwidth protection for the primary CRLSP.

The primary CRLSP prefers bypass tunnels that provide bandwidth protection over those providing no bandwidth protection.

The primary CRLSP can be bound to the bypass tunnel when one of the following conditions is met:

·         No CT is specified for the bypass tunnel.

·         The specified CT is the same as that specified for the primary CRLSP.

After binding, the RRO message does not carry the bandwidth protection flag.

This bypass tunnel is selected only when no bypass tunnel that provides bandwidth protection can be bound to the primary CRLSP.

Non-zero

No

The primary CRLSP can be bound to the bypass tunnel when all the following conditions are met:

·         The bandwidth that the bypass tunnel can protect is no less than the bandwidth required by the primary CRLSP.

·         No CT that the bypass tunnel can protect is specified, or the specified CT is the same as that of the traffic on the primary CRLSP.

After binding, the RRO message carries the bandwidth protection flag.

This bypass tunnel is selected only when no bypass tunnel without bandwidth protection can be bound to the primary CRLSP.

The primary CRLSP can be bound to the bypass tunnel when one of the following conditions is met:

·         No CT is specified for the bypass tunnel.

·         The specified CT is the same as that of the traffic on the primary CRLSP.

After binding, the RRO message does not carry the bandwidth protection flag.

The primary CRLSP prefers bypass tunnels that do not provide bandwidth protection over those providing bandwidth protection.

 

Configuration restrictions and guidelines

When you configure a bypass tunnel on the PLR, follow these restrictions and guidelines:

·          Use bypass tunnels to protect only critical interfaces or links when bandwidth is insufficient. Bypass tunnels are pre-established and require extra bandwidth.

·          Make sure the bandwidth assigned to the bypass tunnel is no less than the total bandwidth needed by all primary CRLSPs to be protected by the bypass tunnel. Otherwise, some primary CRLSPs might not be protected by the bypass tunnel.

·          A bypass tunnel typically does not forward data when the primary CRLSP operates correctly. For a bypass tunnel to also forward data during tunnel protection, you must assign adequate bandwidth to the bypass tunnel.

·          A bypass tunnel cannot be used for services such as VPN.

·          You cannot configure FRR for a bypass tunnel. A bypass tunnel cannot act as a primary CRLSP.

·          Make sure the protected node or interface is not on the bypass tunnel.

·          After you associate a primary CRLSP that does not require bandwidth protection with a bypass tunnel that provides bandwidth protection, the primary CRLSP occupies the bandwidth that the bypass tunnel protects. Bandwidth is protected on a first-come-first-served basis: a primary CRLSP that needs bandwidth protection cannot preempt one that does not.

·          After an FRR, the primary CRLSP goes down if you modify the bandwidth that the bypass tunnel can protect and the modification results in one of the following:

-  The CT type changes.

-  The bypass tunnel can no longer protect adequate bandwidth.

-  The FRR protection type (whether bandwidth protection is provided for the primary CRLSP) changes.

Manually configuring a bypass tunnel

A bypass tunnel is set up in the same way as a normal MPLS TE tunnel. This section describes only FRR-related configurations.

To configure a bypass tunnel on the PLR:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter tunnel interface view of the bypass tunnel.

interface tunnel tunnel-number [ mode mpls-te ]

N/A

3.       Specify the destination address of the bypass tunnel.

destination ip-address

The bypass tunnel destination address is the LSR ID of the MP.

4.       Configure the bandwidth and the CT to be protected by the bypass tunnel.

mpls te backup bandwidth [ ct0 | ct1 | ct2 | ct3 ] { bandwidth | un-limited }

By default, the bandwidth and the CT to be protected by the bypass tunnel are not specified.

5.       Return to system view.

quit

N/A

6.       Enter interface view of the egress interface of a primary CRLSP.

interface interface-type interface-number

N/A

7.       Specify a bypass tunnel for the protected interface (the current interface).

mpls te fast-reroute bypass-tunnel tunnel tunnel-number

By default, no bypass tunnel is specified for an interface.

 
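The steps above can be sketched as follows. The tunnel number, the MP LSR ID, the protected bandwidth, and the protected interface are assumptions for illustration.

```
<Sysname> system-view
# Create the bypass tunnel and set its destination to the LSR ID of the MP.
[Sysname] interface tunnel 5 mode mpls-te
[Sysname-Tunnel5] destination 3.3.3.9
# Allow the bypass tunnel to protect up to 3000 kbps of CT 0 traffic.
[Sysname-Tunnel5] mpls te backup bandwidth ct0 3000
[Sysname-Tunnel5] quit
# Bind the bypass tunnel to the egress interface of the primary CRLSP.
[Sysname] interface vlan-interface 1
[Sysname-Vlan-interface1] mpls te fast-reroute bypass-tunnel tunnel 5
```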

Automatically setting up bypass tunnels

With auto FRR, if the PLR is the penultimate node of a primary CRLSP, the PLR does not create a node-protection bypass tunnel for the primary CRLSP.

To configure auto FRR on the PLR:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE view.

mpls te

N/A

3.       Enable the auto FRR function globally.

auto-tunnel backup

By default, the auto FRR function is disabled globally.

4.       Specify an interface number range for the automatically created bypass tunnels.

tunnel-number min min-number max max-number

By default, no interface number range is specified, and the PLR cannot set up a bypass tunnel automatically.

5.       (Optional.) Configure the PLR to create only link-protection bypass tunnels.

nhop-only

By default, the PLR automatically creates both a link-protection and a node-protection bypass tunnel for each of its primary CRLSPs.

Execution of this command deletes all existing node-protection bypass tunnels automatically created for MPLS TE auto FRR.

6.       (Optional.) Configure a removal timer for unused bypass tunnels.

timers removal unused seconds

By default, a bypass tunnel is removed after it is unused for 3600 seconds.

7.       (Optional.) Return to system view.

quit

N/A

8.       (Optional.) Enter interface view.

interface interface-type interface-number

N/A

9.       (Optional.) Disable the auto FRR function on the interface.

mpls te auto-tunnel backup disable

By default, the auto FRR function is enabled on all RSVP-enabled interfaces after it is enabled globally.

Execution of this command deletes all existing bypass tunnels automatically created on the interface for MPLS TE auto FRR.

 
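The steps above can be sketched as follows. The interface number range and the removal timer value are assumptions, and the auto backup view prompt string is illustrative.

```
<Sysname> system-view
[Sysname] mpls te
# Enable auto FRR globally, and reserve tunnel interface numbers 100 through 199
# for automatically created bypass tunnels.
[Sysname-te] auto-tunnel backup
[Sysname-te-auto-bk] tunnel-number min 100 max 199
# Remove a bypass tunnel after it has been unused for 1800 seconds.
[Sysname-te-auto-bk] timers removal unused 1800
```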

Configuring node fault detection

Perform this task on the PLR and the protected node to use the RSVP hello mechanism or BFD to detect node faults caused by signaling protocol faults. FRR does not need the RSVP hello mechanism or BFD to detect node faults caused by link faults between the PLR and the protected node.

You do not need to perform this task for FRR link protection.

To configure node fault detection:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view of the connecting interface between the PLR and the protected node.

interface interface-type interface-number

N/A

3.       Configure node fault detection.

·         (Method 1) Enable RSVP hello extension on the interface:
rsvp hello enable

·         (Method 2) Enable BFD on the interface:
rsvp bfd enable

By default, RSVP hello extension is disabled, and BFD is not configured.

For more information about the rsvp hello enable command and the rsvp bfd enable command, see "Configuring RSVP."

 
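For example, the following sketch uses BFD for node fault detection on the connecting interface (the interface number is an assumption). Configure it on both the PLR and the protected node.

```
<Sysname> system-view
[Sysname] interface vlan-interface 1
# Enable BFD to detect node faults caused by signaling protocol faults.
[Sysname-Vlan-interface1] rsvp bfd enable
```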

Configuring the optimal bypass tunnel selection interval

If you have specified multiple bypass tunnels for a primary CRLSP, MPLS TE selects an optimal bypass tunnel to protect the primary CRLSP. A bypass tunnel might later become better than the current optimal bypass tunnel when, for example, the reservable bandwidth changes. Therefore, MPLS TE polls the bypass tunnels periodically to update the optimal bypass tunnel.

Perform this task on the PLR to configure the interval for selecting an optimal bypass tunnel:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MPLS TE view.

mpls te

N/A

3.       Configure the interval for selecting an optimal bypass tunnel.

fast-reroute timer interval

By default, the interval is 300 seconds.

 
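For example, the following sketch has MPLS TE re-select the optimal bypass tunnel every 120 seconds (the interval value is an assumption):

```
<Sysname> system-view
[Sysname] mpls te
[Sysname-te] fast-reroute timer 120
```

A shorter interval makes the PLR react faster to bandwidth changes on bypass tunnels at the cost of more frequent polling.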

Displaying and maintaining MPLS TE

Execute display commands in any view and reset commands in user view.

 

Task

Command

Display information about explicit paths.

display explicit-path [ path-name ]

Display link and node information in an IS-IS TEDB.

display isis mpls te advertisement [ [ level-1 | level-2 ] | [ originate-system system-id | local ] | verbose ] * [ process-id ]

Display sub-TLV information for IS-IS TE.

display isis mpls te configured-sub-tlvs [ process-id ]

Display network information in an IS-IS TEDB.

display isis mpls te network [ [ level-1 | level-2 ] | local | lsp-id lsp-id ]* [ process-id ]

Display IS-IS tunnel interface information.

display isis mpls te tunnel [ level-1 | level-2 ] [ process-id ]

Display DS-TE information.

display mpls te ds-te

Display bandwidth information on MPLS TE-enabled interfaces.

display mpls te link-management bandwidth-allocation [ interface interface-type interface-number ]

Display MPLS TEDB information.

display mpls te tedb { { isis { level-1 | level-2 } | ospf area area-id } | link ip-address | network | node [ local | mpls-lsr-id ] | summary }

Display information about MPLS TE tunnel interfaces.

display mpls te tunnel-interface [ tunnel number ]

Display link and node information in an OSPF TEDB.

display ospf [ process-id ] [ area area-id ] mpls te advertisement [ originate-router advertising-router-id | self-originate ]

Display network information in an OSPF TEDB.

display ospf [ process-id ] [ area area-id ] mpls te network [ originate-router advertising-router-id | self-originate ]

Display OSPF tunnel interface information.

display ospf [ process-id ] [ area area-id ] mpls te tunnel

Reset the automatic bandwidth adjustment function.

reset mpls te auto-bandwidth-adjustment timers

 

MPLS TE configuration examples

Establishing an MPLS TE tunnel over a static CRLSP

Network requirements

Switch A, Switch B, and Switch C run IS-IS.

Establish an MPLS TE tunnel over a static CRLSP from Switch A to Switch C.

The MPLS TE tunnel requires a bandwidth of 2000 kbps. The maximum bandwidth of the link that the tunnel traverses is 10000 kbps. The maximum reservable bandwidth of the link is 5000 kbps.

Figure 27 Network diagram

 

Configuration procedure

1.        Configure IP addresses and masks for interfaces. (Details not shown.)

2.        Configure IS-IS to advertise interface addresses, including the loopback interface address:

# Configure Switch A.

<SwitchA> system-view

[SwitchA] isis 1

[SwitchA-isis-1] network-entity 00.0005.0000.0000.0001.00

[SwitchA-isis-1] quit

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] isis enable 1

[SwitchA-Vlan-interface1] quit

[SwitchA] interface loopback 0

[SwitchA-LoopBack0] isis enable 1

[SwitchA-LoopBack0] quit

# Configure Switch B.

<SwitchB> system-view

[SwitchB] isis 1

[SwitchB-isis-1] network-entity 00.0005.0000.0000.0002.00

[SwitchB-isis-1] quit

[SwitchB] interface vlan-interface 1

[SwitchB-Vlan-interface1] isis enable 1

[SwitchB-Vlan-interface1] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] isis enable 1

[SwitchB-Vlan-interface2] quit

[SwitchB] interface loopback 0

[SwitchB-LoopBack0] isis enable 1

[SwitchB-LoopBack0] quit

# Configure Switch C.

<SwitchC> system-view

[SwitchC] isis 1

[SwitchC-isis-1] network-entity 00.0005.0000.0000.0003.00

[SwitchC-isis-1] quit

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] isis enable 1

[SwitchC-Vlan-interface2] quit

[SwitchC] interface loopback 0

[SwitchC-LoopBack0] isis enable 1

[SwitchC-LoopBack0] quit

# Execute the display ip routing-table command on each switch to verify that the switches have learned the routes to one another, including the routes to the loopback interfaces. (Details not shown.)

3.        Configure an LSR ID, and enable MPLS and MPLS TE:

# Configure Switch A.

[SwitchA] mpls lsr-id 1.1.1.1

[SwitchA] mpls te

[SwitchA-te] quit

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] mpls enable

[SwitchA-Vlan-interface1] mpls te enable

[SwitchA-Vlan-interface1] quit

# Configure Switch B.

[SwitchB] mpls lsr-id 2.2.2.2

[SwitchB] mpls te

[SwitchB-te] quit

[SwitchB] interface vlan-interface 1

[SwitchB-Vlan-interface1] mpls enable

[SwitchB-Vlan-interface1] mpls te enable

[SwitchB-Vlan-interface1] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls enable

[SwitchB-Vlan-interface2] mpls te enable

[SwitchB-Vlan-interface2] quit

# Configure Switch C.

[SwitchC] mpls lsr-id 3.3.3.3

[SwitchC] mpls te

[SwitchC-te] quit

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] mpls enable

[SwitchC-Vlan-interface2] mpls te enable

[SwitchC-Vlan-interface2] quit

4.        Configure MPLS TE attributes of links:

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch A.

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] mpls te max-link-bandwidth 10000

[SwitchA-Vlan-interface1] mpls te max-reservable-bandwidth 5000

[SwitchA-Vlan-interface1] quit

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch B.

[SwitchB] interface vlan-interface 1

[SwitchB-Vlan-interface1] mpls te max-link-bandwidth 10000

[SwitchB-Vlan-interface1] mpls te max-reservable-bandwidth 5000

[SwitchB-Vlan-interface1] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls te max-link-bandwidth 10000

[SwitchB-Vlan-interface2] mpls te max-reservable-bandwidth 5000

[SwitchB-Vlan-interface2] quit

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch C.

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] mpls te max-link-bandwidth 10000

[SwitchC-Vlan-interface2] mpls te max-reservable-bandwidth 5000

[SwitchC-Vlan-interface2] quit

5.        Configure an MPLS TE tunnel on Switch A:

# Configure MPLS TE tunnel interface Tunnel 0.

[SwitchA] interface tunnel 0 mode mpls-te

[SwitchA-Tunnel0] ip address 6.1.1.1 255.255.255.0

# Specify the tunnel destination address as the LSR ID of Switch C.

[SwitchA-Tunnel0] destination 3.3.3.3

# Configure MPLS TE to use a static CRLSP to establish the tunnel.

[SwitchA-Tunnel0] mpls te signaling static

[SwitchA-Tunnel0] quit

6.        Create a static CRLSP:

# Configure Switch A as the ingress node of the static CRLSP, and specify the next hop address as 2.1.1.2, outgoing label as 20, and bandwidth for the tunnel as 2000 kbps.

[SwitchA] static-cr-lsp ingress static-cr-lsp-1 nexthop 2.1.1.2 out-label 20 bandwidth 2000

# On Switch A, configure tunnel 0 to reference the static CRLSP static-cr-lsp-1.

[SwitchA] interface Tunnel0

[SwitchA-Tunnel0] mpls te static-cr-lsp static-cr-lsp-1

[SwitchA-Tunnel0] quit

# Configure Switch B as the transit node of the static CRLSP, and specify the incoming label as 20, next hop address as 3.2.1.2, outgoing label as 30, and bandwidth for the tunnel as 2000 kbps.

[SwitchB] static-cr-lsp transit static-cr-lsp-1 in-label 20 nexthop 3.2.1.2 out-label 30 bandwidth 2000

# Configure Switch C as the egress node of the static CRLSP, and specify the incoming label as 30.

[SwitchC] static-cr-lsp egress static-cr-lsp-1 in-label 30

7.        Configure a static route on Switch A to direct traffic destined for subnet 3.2.1.0/24 to MPLS TE tunnel 0.

[SwitchA] ip route-static 3.2.1.2 24 tunnel 0 preference 1

Verifying the configuration

# Execute the display interface tunnel command on Switch A. The output shows that the tunnel interface is up.

[SwitchA] display interface tunnel

Tunnel0

Current state: UP

Line protocol state: UP

Description: Tunnel0 Interface

Bandwidth: 64kbps

Maximum Transmit Unit: 1496

Internet Address is 6.1.1.1/24 Primary

Tunnel source unknown, destination 3.3.3.3

Tunnel TTL 255

Tunnel protocol/transport CR_LSP

Output queue - Urgent queuing: Size/Length/Discards 0/100/0

Output queue - Protocol queuing: Size/Length/Discards 0/500/0

Output queue - FIFO queuing: Size/Length/Discards 0/75/0

Last clearing of counters: Never

Last 300 seconds input rate: 0 bytes/sec, 0 bits/sec, 0 packets/sec

Last 300 seconds output rate: 0 bytes/sec, 0 bits/sec, 0 packets/sec

Input: 0 packets, 0 bytes, 0 drops

Output: 0 packets, 0 bytes, 0 drops

# Execute the display mpls te tunnel-interface command on Switch A to display detailed information about the MPLS TE tunnel.

[SwitchA] display mpls te tunnel-interface

Tunnel Name            : Tunnel 0

Tunnel State           : Up (Main CRLSP up)

Tunnel Attributes      :

  LSP ID               : 1               Tunnel ID            : 0

  Admin State          : Normal

  Ingress LSR ID       : 1.1.1.1         Egress LSR ID        : 3.3.3.3

  Signaling            : Static          Static CRLSP Name    : static-cr-lsp-1

  Resv Style           : -

  Tunnel mode          : -

  Reverse-LSP name     : -

  Reverse-LSP LSR ID   : -               Reverse-LSP Tunnel ID: -

  Class Type           : -               Tunnel Bandwidth     : -

  Reserved Bandwidth   : -

  Setup Priority       : 0               Holding Priority     : 0

  Affinity Attr/Mask   : -/-

  Explicit Path        : -

  Backup Explicit Path : -

  Metric Type          : TE

  Record Route         : -               Record Label         : -

  FRR Flag             : -               Bandwidth Protection : -

  Backup Bandwidth Flag: -               Backup Bandwidth Type: -

  Backup Bandwidth     : -

  Bypass Tunnel        : -               Auto Created         : -

  Route Pinning        : -

  Retry Limit          : 3               Retry Interval       : 2 sec

  Reoptimization       : -               Reoptimization Freq  : -

  Backup Type          : -               Backup LSP ID        : -

  Auto Bandwidth       : -               Auto Bandwidth Freq  : -

  Min Bandwidth        : -               Max Bandwidth        : -

  Collected Bandwidth  : - 

# Execute the display mpls lsp command or the display mpls static-cr-lsp command on each switch to display static CRLSP information.

[SwitchA] display mpls lsp

FEC                         Proto    In/Out Label    Interface/Out NHLFE

1.1.1.1/0/1                 StaticCR -/20            Vlan1

2.1.1.2                     Local    -/-             Vlan1

[SwitchB] display mpls lsp

FEC                         Proto    In/Out Label    Interface/Out NHLFE

-                           StaticCR 20/30           Vlan2

3.2.1.2                     Local    -/-             Vlan2

[SwitchC] display mpls lsp

FEC                         Proto    In/Out Label    Interface/Out NHLFE

-                           StaticCR 30/-            -

[SwitchA] display mpls static-cr-lsp

Name            LSR Type    In/Out Label   Out Interface        State

static-cr-lsp-1 Ingress     Null/20        Vlan1                Up

[SwitchB] display mpls static-cr-lsp

Name            LSR Type    In/Out Label   Out Interface        State

static-cr-lsp-1 Transit     20/30          Vlan2                Up

[SwitchC] display mpls static-cr-lsp

Name            LSR Type    In/Out Label   Out Interface        State

static-cr-lsp-1 Egress      30/Null        -                    Up

# Execute the display ip routing-table command on Switch A. The output shows a static route entry with interface Tunnel 0 as the output interface. (Details not shown.)

Establishing an MPLS TE tunnel with RSVP-TE

Network requirements

Switch A, Switch B, Switch C, and Switch D run IS-IS and all of them are Level-2 switches.

Use RSVP-TE to create an MPLS TE tunnel from Switch A to Switch D. The MPLS TE tunnel requires a bandwidth of 2000 kbps.

The maximum bandwidth of the link that the tunnel traverses is 10000 kbps and the maximum reservable bandwidth of the link is 5000 kbps.

Figure 28 Network diagram

 

Table 3 Interface and IP address assignment

Device

Interface

IP address

Device

Interface

IP address

Switch A

Loop0

1.1.1.9/32

Switch D

Loop0

4.4.4.9/32

 

Vlan-int1

10.1.1.1/24

 

Vlan-int3

30.1.1.2/24

Switch B

Loop0

2.2.2.9/32

Switch C

Loop0

3.3.3.9/32

 

Vlan-int1

10.1.1.2/24

 

Vlan-int3

30.1.1.1/24

 

Vlan-int2

20.1.1.1/24

 

Vlan-int2

20.1.1.2/24

 

Configuration procedure

1.        Configure IP addresses and masks for interfaces. (Details not shown.)

2.        Configure IS-IS to advertise interface addresses, including the loopback interface address:

# Configure Switch A.

<SwitchA> system-view

[SwitchA] isis 1

[SwitchA-isis-1] network-entity 00.0005.0000.0000.0001.00

[SwitchA-isis-1] quit

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] isis enable 1

[SwitchA-Vlan-interface1] isis circuit-level level-2

[SwitchA-Vlan-interface1] quit

[SwitchA] interface loopback 0

[SwitchA-LoopBack0] isis enable 1

[SwitchA-LoopBack0] isis circuit-level level-2

[SwitchA-LoopBack0] quit

# Configure Switch B.

<SwitchB> system-view

[SwitchB] isis 1

[SwitchB-isis-1] network-entity 00.0005.0000.0000.0002.00

[SwitchB-isis-1] quit

[SwitchB] interface vlan-interface 1

[SwitchB-Vlan-interface1] isis enable 1

[SwitchB-Vlan-interface1] isis circuit-level level-2

[SwitchB-Vlan-interface1] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] isis enable 1

[SwitchB-Vlan-interface2] isis circuit-level level-2

[SwitchB-Vlan-interface2] quit

[SwitchB] interface loopback 0

[SwitchB-LoopBack0] isis enable 1

[SwitchB-LoopBack0] isis circuit-level level-2

[SwitchB-LoopBack0] quit

# Configure Switch C.

<SwitchC> system-view

[SwitchC] isis 1

[SwitchC-isis-1] network-entity 00.0005.0000.0000.0003.00

[SwitchC-isis-1] quit

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] isis enable 1

[SwitchC-Vlan-interface3] isis circuit-level level-2

[SwitchC-Vlan-interface3] quit

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] isis enable 1

[SwitchC-Vlan-interface2] isis circuit-level level-2

[SwitchC-Vlan-interface2] quit

[SwitchC] interface loopback 0

[SwitchC-LoopBack0] isis enable 1

[SwitchC-LoopBack0] isis circuit-level level-2

[SwitchC-LoopBack0] quit

# Configure Switch D.

<SwitchD> system-view

[SwitchD] isis 1

[SwitchD-isis-1] network-entity 00.0005.0000.0000.0004.00

[SwitchD-isis-1] quit

[SwitchD] interface vlan-interface 3

[SwitchD-Vlan-interface3] isis enable 1

[SwitchD-Vlan-interface3] isis circuit-level level-2

[SwitchD-Vlan-interface3] quit

[SwitchD] interface loopback 0

[SwitchD-LoopBack0] isis enable 1

[SwitchD-LoopBack0] isis circuit-level level-2

[SwitchD-LoopBack0] quit

# Execute the display ip routing-table command on each switch to verify that the switches have learned the routes to one another, including the routes to the loopback interfaces. (Details not shown.)

3.        Configure an LSR ID, and enable MPLS, MPLS TE, and RSVP-TE:

# Configure Switch A.

[SwitchA] mpls lsr-id 1.1.1.9

[SwitchA] mpls te

[SwitchA-te] quit

[SwitchA] rsvp

[SwitchA-rsvp] quit

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] mpls enable

[SwitchA-Vlan-interface1] mpls te enable

[SwitchA-Vlan-interface1] rsvp enable

[SwitchA-Vlan-interface1] quit

# Configure Switch B.

[SwitchB] mpls lsr-id 2.2.2.9

[SwitchB] mpls te

[SwitchB-te] quit

[SwitchB] rsvp

[SwitchB-rsvp] quit

[SwitchB] interface vlan-interface 1

[SwitchB-Vlan-interface1] mpls enable

[SwitchB-Vlan-interface1] mpls te enable

[SwitchB-Vlan-interface1] rsvp enable

[SwitchB-Vlan-interface1] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls enable

[SwitchB-Vlan-interface2] mpls te enable

[SwitchB-Vlan-interface2] rsvp enable

[SwitchB-Vlan-interface2] quit

# Configure Switch C.

[SwitchC] mpls lsr-id 3.3.3.9

[SwitchC] mpls te

[SwitchC-te] quit

[SwitchC] rsvp

[SwitchC-rsvp] quit

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] mpls enable

[SwitchC-Vlan-interface3] mpls te enable

[SwitchC-Vlan-interface3] rsvp enable

[SwitchC-Vlan-interface3] quit

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] mpls enable

[SwitchC-Vlan-interface2] mpls te enable

[SwitchC-Vlan-interface2] rsvp enable

[SwitchC-Vlan-interface2] quit

# Configure Switch D.

[SwitchD] mpls lsr-id 4.4.4.9

[SwitchD] mpls te

[SwitchD-te] quit

[SwitchD] rsvp

[SwitchD-rsvp] quit

[SwitchD] interface vlan-interface 3

[SwitchD-Vlan-interface3] mpls enable

[SwitchD-Vlan-interface3] mpls te enable

[SwitchD-Vlan-interface3] rsvp enable

[SwitchD-Vlan-interface3] quit

4.        Configure IS-IS TE:

# Configure Switch A.

[SwitchA] isis 1

[SwitchA-isis-1] cost-style wide

[SwitchA-isis-1] mpls te enable level-2

[SwitchA-isis-1] quit

# Configure Switch B.

[SwitchB] isis 1

[SwitchB-isis-1] cost-style wide

[SwitchB-isis-1] mpls te enable level-2

[SwitchB-isis-1] quit

# Configure Switch C.

[SwitchC] isis 1

[SwitchC-isis-1] cost-style wide

[SwitchC-isis-1] mpls te enable level-2

[SwitchC-isis-1] quit

# Configure Switch D.

[SwitchD] isis 1

[SwitchD-isis-1] cost-style wide

[SwitchD-isis-1] mpls te enable level-2

[SwitchD-isis-1] quit

5.        Configure MPLS TE attributes of links:

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch A.

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] mpls te max-link-bandwidth 10000

[SwitchA-Vlan-interface1] mpls te max-reservable-bandwidth 5000

[SwitchA-Vlan-interface1] quit

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch B.

[SwitchB] interface vlan-interface 1

[SwitchB-Vlan-interface1] mpls te max-link-bandwidth 10000

[SwitchB-Vlan-interface1] mpls te max-reservable-bandwidth 5000

[SwitchB-Vlan-interface1] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] mpls te max-link-bandwidth 10000

[SwitchB-Vlan-interface2] mpls te max-reservable-bandwidth 5000

[SwitchB-Vlan-interface2] quit

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch C.

[SwitchC] interface vlan-interface 3

[SwitchC-Vlan-interface3] mpls te max-link-bandwidth 10000

[SwitchC-Vlan-interface3] mpls te max-reservable-bandwidth 5000

[SwitchC-Vlan-interface3] quit

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] mpls te max-link-bandwidth 10000

[SwitchC-Vlan-interface2] mpls te max-reservable-bandwidth 5000

[SwitchC-Vlan-interface2] quit

# Configure the maximum link bandwidth and maximum reservable bandwidth on Switch D.

[SwitchD] interface vlan-interface 3

[SwitchD-Vlan-interface3] mpls te max-link-bandwidth 10000

[SwitchD-Vlan-interface3] mpls te max-reservable-bandwidth 5000

[SwitchD-Vlan-interface3] quit

6.        Configure an MPLS TE tunnel on Switch A:

# Configure MPLS TE tunnel interface Tunnel 1.

[SwitchA] interface tunnel 1 mode mpls-te

[SwitchA-Tunnel1] ip address 7.1.1.1 255.255.255.0

# Specify the tunnel destination address as the LSR ID of Switch D.

[SwitchA-Tunnel1] destination 4.4.4.9

# Configure MPLS TE to use RSVP-TE to establish the tunnel.

[SwitchA-Tunnel1] mpls te signaling rsvp-te

# Assign 2000 kbps bandwidth to the tunnel.

[SwitchA-Tunnel1] mpls te bandwidth 2000

[SwitchA-Tunnel1] quit

7.        Configure a static route on Switch A to direct the traffic destined for subnet 30.1.1.0/24 to MPLS TE tunnel 1.

[SwitchA] ip route-static 30.1.1.2 24 tunnel 1 preference 1

Verifying the configuration

# Execute the display interface tunnel command on Switch A. The output shows that the tunnel interface is up.

[SwitchA] display interface tunnel

Tunnel1 current state: UP

Line protocol current state: UP

Description: Tunnel1 Interface

The Maximum Transmit Unit is 64000

Internet Address is 7.1.1.1/24 Primary

Tunnel source unknown, destination 4.4.4.9

Tunnel bandwidth 64 (kbps)

Tunnel TTL 255

Tunnel protocol/transport CR_LSP

Last clearing of counters: Never

    Last 300 seconds input rate: 0 bytes/sec, 0 bits/sec, 0 packets/sec

    Last 300 seconds output rate: 6 bytes/sec, 48 bits/sec, 0 packets/sec

    0 packets input, 0 bytes, 0 drops

    177 packets output, 11428 bytes, 0 drops

# Execute the display mpls te tunnel-interface command on Switch A to display detailed information about the MPLS TE tunnel.

[SwitchA] display mpls te tunnel-interface

Tunnel Name            : Tunnel 1

Tunnel State           : Up (Main CRLSP up, Shared-resource CRLSP down)

Tunnel Attributes      :

  LSP ID               : 23331           Tunnel ID            : 1

  Admin State          : Normal

  Ingress LSR ID       : 1.1.1.9         Egress LSR ID        : 4.4.4.9

  Signaling            : RSVP-TE         Static CRLSP Name    : -

  Resv Style           : SE

  Tunnel mode          : -

  Reverse-LSP name     : -

  Reverse-LSP LSR ID   : -               Reverse-LSP Tunnel ID: -

  Class Type           : CT0             Tunnel Bandwidth     : 2000 kbps

  Reserved Bandwidth   : 2000 kbps

  Setup Priority       : 7               Holding Priority     : 7

  Affinity Attr/Mask   : 0/0

  Explicit Path        : -

  Backup Explicit Path : -

  Metric Type          : TE

  Record Route         : Disabled        Record Label         : Disabled

  FRR Flag             : Disabled        Bandwidth Protection : Disabled

  Backup Bandwidth Flag: Disabled        Backup Bandwidth Type: -

  Backup Bandwidth     : -

  Bypass Tunnel        : No              Auto Created         : No

  Route Pinning        : Disabled

  Retry Limit          : 10              Retry Interval       : 2 sec

  Reoptimization       : Disabled        Reoptimization Freq  : -

  Backup Type          : None            Backup LSP ID        : -

  Auto Bandwidth       : Disabled        Auto Bandwidth Freq  : -

  Min Bandwidth        : -               Max Bandwidth        : -

  Collected Bandwidth  : -

# Execute the display ip routing-table command on Switch A. The output shows a static route entry with interface Tunnel 1 as the output interface. (Details not shown.)

Establishing an inter-AS MPLS TE tunnel with RSVP-TE

Network requirements

Switch A and Switch B are in AS 100. Switch C and Switch D are in AS 200. AS 100 and AS 200 use OSPF as the IGP.

Establish an EBGP connection between ASBRs Switch B and Switch C. Redistribute BGP routes into OSPF and OSPF routes into BGP, so that routes are available between AS 100 and AS 200.

Establish an MPLS TE tunnel from Switch A to Switch D. The tunnel requires a bandwidth of 2000 kbps. Each link that the tunnel traverses has a maximum bandwidth of 10000 kbps and a maximum reservable bandwidth of 5000 kbps.

Figure 29 Network diagram

 

Table 4 Interface and IP address assignment

Device     Interface   IP address       Device     Interface   IP address
Switch A   Loop0       1.1.1.9/32       Switch D   Loop0       4.4.4.9/32
           Vlan-int1   10.1.1.1/24                 Vlan-int3   30.1.1.2/24
Switch B   Loop0       2.2.2.9/32       Switch C   Loop0       3.3.3.9/32
           Vlan-int1   10.1.1.2/24                 Vlan-int3   30.1.1.1/24
           Vlan-int2   20.1.1.1/24                 Vlan-int2   20.1.1.2/24
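The 10000-kbps maximum link bandwidth and 5000-kbps maximum reservable bandwidth stated in the requirements are configured in interface view on each link that the tunnel traverses. The following is a sketch only, assuming VLAN-interface 1 on Switch A; in the full procedure, the same commands are applied to every interface along the tunnel path:

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] mpls te max-link-bandwidth 10000

[SwitchA-Vlan-interface1] mpls te max-reservable-bandwidth 5000

[SwitchA-Vlan-interface1] quit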

Configuration procedure

1.        Configure IP addresses and masks for interfaces. (Details not shown.)

2.        Configure OSPF to advertise routes within the ASs, and redistribute the direct and BGP routes into OSPF on Switch B and Switch C:

# Configure Switch A.

<SwitchA> system-view

[SwitchA] ospf

[SwitchA-ospf-1] area 0

[SwitchA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255

[SwitchA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0

[SwitchA-ospf-1-area-0.0.0.0] quit

[SwitchA-ospf-1] quit

# Configure Switch B.

<SwitchB> system-view

[SwitchB] ospf

[SwitchB-ospf-1] import-route direct

[SwitchB-ospf-1] import-route bgp

[SwitchB-ospf-1] area 0

[SwitchB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255

[SwitchB-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0

[SwitchB-ospf-1-area-0.0.0.0] quit

[SwitchB-ospf-1] quit

# Configure Switch C.

<SwitchC> system-view

[SwitchC] ospf

[SwitchC-ospf-1] import-route direct

[SwitchC-ospf-1] import-route bgp

[SwitchC-ospf-1] area 0

[SwitchC-ospf-1-area-0.0.0.0] network 30.1.1.0 0.0.0.255

[SwitchC-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0

[SwitchC-ospf-1-area-0.0.0.0] quit

[SwitchC-ospf-1] quit

# Configure Switch D.

<SwitchD> system-view

[SwitchD] ospf

[SwitchD-ospf-1] area 0

[SwitchD-ospf-1-area-0.0.0.0] network 30.1.1.0 0.0.0.255

[SwitchD-ospf-1-area-0.0.0.0] network 4.4.4.9 0.0.0.0

[SwitchD-ospf-1-area-0.0.0.0] quit

[SwitchD-ospf-1] quit

# Execute the display ip routing-table command on each switch to verify that the switches have learned the routes to one another, including the routes to the loopback interfaces. Take Switch A as an example:

[SwitchA] display ip routing-table

 

Destinations : 6        Routes : 6

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

 

1.1.1.9/32          Direct 0    0            127.0.0.1       InLoop0

2.2.2.9/32          OSPF   10   1            10.1.1.2        Vlan1

10.1.1.0/24         Direct 0    0            10.1.1.1        Vlan1

10.1.1.1/32         Direct 0    0            127.0.0.1       InLoop0

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

3.        Configure BGP on Switch B and Switch C to make sure the ASs can communicate with each other:

# Configure Switch B.

[SwitchB] bgp 100

[SwitchB-bgp] peer 20.1.1.2 as-number 200

[SwitchB-bgp] address-family ipv4 unicast

[SwitchB-bgp-ipv4] peer 20.1.1.2 enable

[SwitchB-bgp-ipv4] import-route ospf

[SwitchB-bgp-ipv4] import-route direct

[SwitchB-bgp-ipv4] quit

[SwitchB-bgp] quit

# Configure Switch C.

[SwitchC] bgp 200

[SwitchC-bgp] peer 20.1.1.1 as-number 100

[SwitchC-bgp] address-family ipv4 unicast

[SwitchC-bgp-ipv4] peer 20.1.1.1 enable

[SwitchC-bgp-ipv4] import-route ospf

[SwitchC-bgp-ipv4] import-route direct

[SwitchC-bgp-ipv4] quit

[SwitchC-bgp] quit

# Execute the display ip routing-table command on each switch to verify that the switches have learned AS-external routes. Take Switch A as an example:

[SwitchA] display ip routing-table

 

Destinations : 10       Routes : 10

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

 

1.1.1.9/32          Direct 0    0            127.0.0.1       InLoop0

2.2.2.9/32          OSPF   10   1            10.1.1.2        Vlan1

3.3.3.9/32          O_ASE  150  1            10.1.1.2        Vlan1

4.4.4.9/32          O_ASE  150  1            10.1.1.2        Vlan1

10.1.1.0/24         Direct 0    0            10.1.1.1        Vlan1

10.1.1.1/32         Direct 0    0            127.0.0.1       InLoop0

20.1.1.0/24         O_ASE  150  1            10.1.1.2        Vlan1

30.1.1.0/24         O_ASE  150  1            10.1.1.2        Vlan1

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

4.        Configure an LSR ID, and enable MPLS, MPLS TE, and RSVP-TE:

# Configure Switch A.

[SwitchA] mpls lsr-id 1.1.1.9

[SwitchA] mpls te

[SwitchA-te] quit

[SwitchA] rsvp

[SwitchA-rsvp] quit

[SwitchA] interface vlan-interface 1

[SwitchA-Vlan-interface1] mpls enable

[SwitchA-Vlan-interface1] mpls te enable

[SwitchA-Vlan-interface1] rsvp enable

[SwitchA-Vlan-interface1] quit

# Configure Switch B.

[SwitchB] mpls lsr-id 2.2.2.9

[SwitchB] mpls te

[SwitchB-te] quit

[SwitchB] rsvp

[SwitchB-rsvp] quit

[SwitchB] interface vlan-interface 1