Multicast VPN configuration

Contents

Multicast VPN overview
Typical network diagram
MVPN scheme
Basic concepts in MDT-based MVPN
How MDT-based MVPN works
Default MDT establishment
Default MDT-based delivery
MDT switchover
Inter-AS MDT-based MVPN
Inter-AS option A MDT-based MVPN
Inter-AS option B MDT-based MVPN
Inter-AS option C MDT-based MVPN
MVPN extranet
M6VPE
Protocols and standards
Configuring multicast VPN
Multicast VPN tasks at a glance
MDT-based MVPN tasks at a glance
Configuring MDT-based MVPN
Prerequisites for configuring MDT-based MVPN
Enabling IP multicast routing for a VPN instance
Creating an MDT-based MVPN instance
Creating an MVPN address family
Specifying the default group
Specifying the MVPN source interface
Configuring MDT switchover parameters
Configuring the RPF vector feature
Enabling data group reuse logging
Configuring BGP MDT
Configuring BGP MDT peers or peer groups
Configuring a BGP MDT route reflector
Preferring routes learned from a peer or peer group during optimal route selection
Configuring an MVPN extranet RPF selection policy
About configuring MVPN extranet RPF selection policies
Restrictions and guidelines for configuring MVPN extranet RPF selection policies
Prerequisites for configuring MVPN extranet RPF selection policies
Configuring an IPv4 MVPN extranet RPF selection policy
Configuring an IPv6 MVPN extranet RPF selection policy
Display and maintenance commands for multicast VPN
Troubleshooting MVPN
A default MDT cannot be established
An MVRF cannot be created

 


Multicast VPN overview

Multicast VPN implements multicast delivery in VPNs.

Typical network diagram

As shown in Figure 1, VPN A contains Site 1 and Site 3, and VPN B contains Site 2 and Site 4.

Figure 1 Typical VPN networking diagram

 

VPN multicast traffic between the PEs and the CEs is transmitted on a per-VPN-instance basis. The public network multicast traffic between the PEs and the P device is transmitted through the public network. Multicast VPN provides independent multicast services for the public network, VPN A, and VPN B.

For more information about CEs, PEs and Ps, see MPLS Configuration Guide.

MVPN scheme

MVPN is the scheme used to implement multicast VPN. MVPN requires only that the PEs support multiple VPN instances and that the public network provided by the service provider support multicast. There is no need to upgrade CEs and Ps or change their original PIM configurations. The MVPN solution is transparent to CEs and Ps.

Basic concepts in MDT-based MVPN

This section introduces the following basic concepts in MDT-based MVPN:

·     MVPN—An MVPN logically defines the transmission boundary of the multicast traffic of a VPN over the public network. It also physically identifies all the PEs that support that VPN instance on the public network. Different VPN instances correspond to different MVPNs.

·     Multicast distribution tree (MDT)—An MDT is a multicast distribution tree constructed by all PEs in the same VPN. MDTs include the default MDT and data MDTs.

·     Multicast tunnel (MT)—An MT is a tunnel that interconnects all PEs in an MVPN. The local PE encapsulates a VPN multicast packet into a public network multicast packet and forwards it through the MT over the public network. The remote PE decapsulates the public network multicast packet to get the original VPN multicast packet.

·     Multicast tunnel interface (MTI)—An MTI is the entrance or exit of an MT, equivalent to an entrance or exit of an MVPN. MTIs are automatically created when the MVPN for the VPN instance is created. PEs use the MTI to access the MT. The local PE sends VPN data out of the MTI. The remote PEs receive the private data from their MTIs. An MTI runs the same PIM mode as the VPN instance to which the MTI belongs. PIM is enabled on MTIs when a minimum of one interface in the VPN instance is enabled with PIM. When PIM is disabled on all interfaces in the VPN instance, PIM is also disabled on MTIs.

·     Default group—A default group is a unique multicast address assigned to each MVPN on the public network. It is the unique identifier of an MVPN on the public network and helps build the default MDT for an MVPN on the public network. A PE encapsulates a VPN multicast packet (a multicast protocol packet or a multicast data packet) into a public network multicast packet. The default group address is used as the public network multicast group.

·     Default MDT—A default MDT uses a default group address as its group address. In a VPN, the default MDT is uniquely identified by the default group. A default MDT is automatically created after the default group is specified and will always exist on the public network, regardless of the presence of any multicast services on the public network or the VPN.

·     Data group—An MVPN is assigned a unique data group range for MDT switchover. The ingress PE selects a least-used address from the data group range to encapsulate the VPN multicast packets when the multicast traffic of the VPN reaches or exceeds a threshold. Other PEs are notified to use that address to forward the multicast traffic for that VPN. This initiates the switchover to the data MDT.

·     Data MDT—A data MDT is an MDT that uses a data group as its group address. At MDT switchover, PEs with downstream receivers join a data group to build a data MDT. The ingress PE forwards the encapsulated VPN multicast traffic along the data MDT over the public network.

How MDT-based MVPN works

For a VPN instance, multicast data transmission on the public network is transparent. The VPN data is exchanged between the MTIs of the local PE and the remote PE. This implements the seamless transmission of the VPN data over the public network. However, the multicast data transmission process (the MDT transmission process) over the public network is very complicated.

The following types of PIM neighboring relationships exist in MVPN:

·     PE-P PIM neighboring relationship—Established between the public network interface on a PE and the peer interface on the P device over the link.

·     PE-PE PIM neighboring relationship—Established between PEs that are in the same VPN instance after they receive the PIM hello packets.

·     PE-CE PIM neighboring relationship—Established between a PE interface that is bound with the VPN instance and the peer interface on the CE over the link.

Default MDT establishment

The multicast routing protocol running on the public network can be PIM-DM, PIM-SM, BIDIR-PIM, or PIM-SSM. The process of creating a default MDT is different in these PIM modes.

For each PIM mode running on the public network, the default MDT has the following characteristics in common:

·     All PEs that support the same VPN instance join the default MDT.

·     All multicast packets that belong to this VPN are forwarded along the default MDT to every PE on the public network, even if no active downstream receivers exist.

Default MDT establishment in a PIM-DM network

As shown in Figure 2, PIM-DM is enabled on the network, and all PEs support VPN instance A. All PEs establish PIM neighboring relationships with one another and build independent SPTs that together constitute the default MDT in the MVPN.

Figure 2 Default MDT establishment in a PIM-DM network

 

The process of establishing a default MDT is as follows:

1.     To establish PIM neighboring relationships with PE 2 and PE 3 through the MTI for VPN instance A, PE 1 does the following:

a.     Encapsulates the PIM protocol packet of the private network into a public network multicast data packet. PE 1 does this by specifying the source address as the IP address of the MVPN source interface and the multicast group address as the default group address.

b.     Sends the multicast data packet to the public network.

Because the other PEs that support VPN instance A are members of the default group, PE 1 initiates a flood-prune process throughout the public network. A (11.1.1.1, 239.1.1.1) state entry is created on each device along the path on the public network. This forms an SPT with PE 1 as the root, and PE 2 and PE 3 as leaves.

2.     At the same time, PE 2 and PE 3 separately initiate a similar flood-prune process.

Finally, three independent SPTs are established in the MVPN, constituting the default MDT in the PIM-DM network.

Default MDT establishment in a PIM-SM network

As shown in Figure 3, PIM-SM is enabled on the network, and all PEs support VPN instance A. All PEs establish PIM neighboring relationships with one another and build an RPT and independent SPTs that together constitute the default MDT in the MVPN.

Figure 3 Default MDT establishment in a PIM-SM network

 

The process of establishing a default MDT is as follows:

1.     PE 1 initiates a join to the public network RP by specifying the multicast group address as the default group address in the join message. A (*, 239.1.1.1) state entry is created on each device along the path on the public network.

2.     At the same time, PE 2 and PE 3 separately initiate a similar join process.

Finally, an RPT is established in the MVPN, with the public network RP as the root and PE 1, PE 2, and PE 3 as leaves.

3.     To establish PIM neighboring relationships with PE 2 and PE 3 through the MTI for VPN instance A, PE 1 does the following:

a.     Encapsulates the PIM protocol packet of the private network into a public network multicast data packet. PE 1 does this by specifying the source address as the IP address of the MVPN source interface and the multicast group address as the default group address.

b.     Sends the multicast data packet to the public network.

The public network interface of PE 1 registers the multicast source with the public network RP, and the public network RP initiates a join to PE 1. A (11.1.1.1, 239.1.1.1) state entry is created on each device along the path on the public network.

4.     At the same time, PE 2 and PE 3 separately initiate a similar register process.

Finally, three SPTs between the PEs and the RP are established in the MVPN.

In the PIM-SM network, the RPT, or the (*, 239.1.1.1) tree, and the three independent SPTs constitute the default MDT.

Default MDT establishment in a BIDIR-PIM network

As shown in Figure 4, BIDIR-PIM runs on the network, and all PEs support VPN instance A. All PEs establish PIM neighboring relationships with one another and build receiver-side and source-side RPTs that together constitute the default MDT in the MVPN.

Figure 4 Default MDT establishment in a BIDIR-PIM network

 

The process of establishing a default MDT is as follows:

1.     PE 1 initiates a join to the public network RP by specifying the multicast group address as the default group address in the join message. A (*, 239.1.1.1) state entry is created on each device along the path on the public network.

At the same time, PE 2 and PE 3 separately initiate a similar join process. Finally, a receiver-side RPT is established in the MVPN, with the public network RP as the root and PE 1, PE 2, and PE 3 as leaves.

2.     PE 1 sends a multicast packet with the default group address as the multicast group address. The DF of each network segment on the public network forwards the multicast packet to the RP. Each device on the path creates a (*, 239.1.1.1) state entry.

At the same time, PE 2 and PE 3 separately initiate a similar process. Finally, three source-side RPTs are established in the MVPN, with PE 1, PE 2, and PE 3 as the roots and the public network RP as the leaf.

3.     The receiver-side RPT and the three source-side RPTs constitute the default MDT in the BIDIR-PIM network.

Default MDT establishment in a PIM-SSM network

As shown in Figure 5, PIM-SSM runs on the network, and all the PEs support VPN instance A. All PEs establish PIM neighboring relationships with one another and build independent SPTs that together constitute the default MDT in the MVPN.

Figure 5 Default MDT establishment in a PIM-SSM network

 

The process of establishing a default MDT is as follows:

1.     PE 1, PE 2, and PE 3 exchange MDT route information (including BGP interface address and the default group address) through BGP.

2.     PE 1 sends a subscribe message to PE 2 and PE 3. Each device on the public network creates an (S, G) entry. An SPT is established in the MVPN with PE 1 as the root and PE 2 and PE 3 as the leaves.

At the same time, PE 2 and PE 3 separately initiate a similar process, and establish an SPT with itself as the root and the other PEs as the leaves.

3.     The three independent SPTs constitute the default MDT in the PIM-SSM network.

In PIM-SSM, the term "subscribe message" refers to a join message.

Default MDT-based delivery

After the default MDT is established, the multicast source forwards the VPN multicast data to the receivers in each site along the default MDT. The VPN multicast packets are encapsulated into public network multicast packets on the local PE, and transmitted along the default MDT. Then, they are decapsulated on the remote PE and transmitted in that VPN site.

VPN multicast data packets are forwarded across the public network differently in the following circumstances:

·     If PIM-DM or PIM-SSM is running in the VPN, the multicast source forwards multicast data packets to the receivers along the VPN SPT across the public network.

·     When PIM-SM is running in the VPN:

¡     Before the RPT-to-SPT switchover, if the multicast source and the VPN RP are in different sites, the VPN multicast data packets travel to the VPN RP along the VPN SPT across the public network. If the VPN RP and the receivers are in different sites, the VPN multicast data packets travel to the receivers along the VPN RPT over the public network.

¡     After the RPT-to-SPT switchover, if the multicast source and the receivers are in different sites, the VPN multicast data packets travel to the receivers along the VPN SPT across the public network.

·     When BIDIR-PIM is running in the VPN, if the multicast source and the VPN RP are in different sites, the multicast source sends multicast data to the VPN RP across the public network along the source-side RPT. If the VPN RP and the receivers are in different sites, the multicast data packets travel to the receivers across the public network along the receiver-side RPT.

For more information about RPT-to-SPT switchover, see "Configuring PIM."

The following example explains how multicast data packets are delivered based on the default MDT when PIM-DM is running in both the public network and the VPN network.

As shown in Figure 6:

·     PIM-DM is running in both the public network and the VPN sites.

·     Receiver of the VPN multicast group G (225.1.1.1) in Site 2 is attached to CE 2.

·     Source in Site 1 sends multicast data to multicast group (G).

·     The default group address used to forward public network multicast data is 239.1.1.1.

Figure 6 Multicast data packet delivery

 

A VPN multicast data packet is delivered across the public network as follows:

1.     Source sends a VPN multicast data packet (192.1.1.1, 225.1.1.1) to CE 1.

2.     CE 1 forwards the VPN multicast data packet along an SPT to PE 1, and the VPN instance on PE 1 examines the MVRF.

If the outgoing interface list of the forwarding entry contains an MTI, PE 1 processes the VPN multicast data packet as described in step 3. The VPN instance on PE 1 considers the VPN multicast data packet to have been sent out of the MTI, because step 3 is transparent to it.

3.     PE 1 encapsulates the VPN multicast data packet into a public network multicast packet (11.1.1.1, 239.1.1.1) by using the GRE method. The source IP address of the packet is the MVPN source interface 11.1.1.1, and the destination address is the default group address 239.1.1.1. PE 1 then forwards it to the public network.

4.     The default MDT forwards the multicast data packet (11.1.1.1, 239.1.1.1) to the public network instance on all the PEs. After receiving this packet, every PE decapsulates it to get the original VPN multicast data packet, and passes it to the corresponding VPN instance. If a PE has a downstream interface for an SPT, it forwards the VPN multicast packet down the SPT. Otherwise, it discards the packet.

5.     The VPN instance on PE 2 looks up the MVRF and finally delivers the VPN multicast data to Receiver.

By now, the process of transmitting a VPN multicast data packet across the public network is completed.

MDT switchover

Switching from default MDT to data MDT

When a multicast packet of a VPN is transmitted through the default MDT on the public network, the packet is forwarded to all PEs that support that VPN instance. This occurs whether or not any active receivers exist in the attached sites. When the rate of the multicast traffic of that VPN is high, multicast data might get flooded on the public network. This increases the bandwidth use and brings extra burden on the PEs.

To optimize multicast transmission of large VPN multicast traffic that enters the public network, the MVPN solution introduces a dedicated data MDT. The data MDT is built between the PEs that connect VPN multicast receivers and multicast sources. When specific network criteria are met, a switchover from the default MDT to the data MDT occurs to forward VPN multicast traffic to receivers.

The process of default MDT to data MDT switchover is as follows:

1.     The source-side PE (PE 1, for example) periodically examines the forwarding rate of the VPN multicast traffic. The default MDT switches to the data MDT only when the following criteria are both met:

¡     The VPN multicast data has passed the ACL rule filtering for default MDT to data MDT switchover.

¡     The traffic rate of the VPN multicast stream has exceeded the switchover threshold and stayed higher than the threshold for a certain length of time.

2.     PE 1 selects a least-used address from the data group range. Then, it sends an MDT switchover message to all the other PEs down the default MDT. This message contains the VPN multicast source address, the VPN multicast group address, and the data group address.

3.     Each PE that receives this message examines whether it interfaces with a VPN that has receivers of that VPN multicast stream.

If so, it joins the data MDT rooted at PE 1. Otherwise, it caches the message and will join the data MDT when it has attached receivers.

4.     After sending the MDT switchover message, PE 1 starts the data delay timer. When the timer expires, PE 1 uses the data group address to encapsulate the VPN multicast data. The multicast data is then forwarded down the data MDT.

5.     After the multicast traffic is switched from the default MDT to the data MDT, PE 1 continues sending MDT switchover messages periodically. Subsequent PEs with attached receivers can then join the data MDT. When a downstream PE no longer has active receivers attached to it, it leaves the data MDT.

For a given VPN instance, the default MDT and data MDTs are both forwarding tunnels in the same MVPN. A default MDT is uniquely identified by a default group address, and a data MDT is uniquely identified by a data group address. Each default group is uniquely associated with a data group range.

Backward switching from data MDT to default MDT

After the VPN multicast traffic is switched to a data MDT, the multicast traffic conditions might change and no longer meet the switchover criterion. In this case, PE 1, as in the preceding example, initiates a backward MDT switchover process when any of the following criteria are met:

·     The traffic rate of the VPN multicast data has dropped below the switchover threshold and has stayed lower than the threshold for a certain length of time (known as the data hold-down period).

·     The associated data group range is changed, and the data group address for encapsulating the VPN multicast data is out of the new address range.

·     The ACL rule for controlling the switchover from the default MDT to the data MDT has changed, and the VPN multicast data fails to pass the new ACL rule.

Inter-AS MDT-based MVPN

In an inter-AS VPN networking scenario, VPN sites are located in multiple ASs. These sites must be interconnected. Inter-AS VPN provides the following solutions:

·     VRF-to-VRF connections between ASBRs—This solution is also called inter-AS option A.

·     EBGP redistribution of labeled VPN-IPv4 routes between ASBRs—ASBRs advertise VPN-IPv4 routes to each other through MP-EBGP. This solution is also called inter-AS option B.

·     Multihop EBGP redistribution of labeled VPN-IPv4 routes between PE devices—PEs advertise VPN-IPv4 routes to each other through MP-EBGP. This solution is also called inter-AS option C.

For more information about the three inter-AS VPN solutions, see "Configuring MPLS L3VPN."

Based on these solutions, there are three ways to implement inter-AS MDT-based MVPN:

·     Inter-AS option A MDT-based MVPN.

·     Inter-AS option B MDT-based MVPN.

·     Inter-AS option C MDT-based MVPN.

Inter-AS option A MDT-based MVPN

As shown in Figure 7:

·     Two VPN instances are in AS 1 and AS 2.

·     PE 3 and PE 4 are ASBRs for AS 1 and AS 2, respectively.

·     PE 3 and PE 4 are interconnected through their respective VPN instance and treat each other as a CE.

Figure 7 Inter-AS option A MDT-based MVPN

 

To implement inter-AS option A MDT-based MVPN, a separate MVPN must be created in each AS. Multicast data is transmitted between the VPNs in different ASs through the MVPNs.

Multicast packets of VPN instance 1 are delivered as follows:

1.     CE 1 forwards the multicast packet of VPN instance 1 to PE 1.

2.     PE 1 encapsulates the multicast packet into a public network packet and forwards it to PE 3 through the MTI interface in MVPN 1.

3.     PE 3 considers PE 4 as a CE of VPN instance 1, so PE 3 forwards the multicast packet to PE 4.

4.     PE 4 considers PE 3 as a CE of VPN instance 2, so it forwards the multicast packet to PE 2 through the MTI interface in MVPN 2 on the public network.

5.     PE 2 forwards the multicast packet to CE 2.

Because only VPN multicast data is forwarded between ASBRs, different PIM modes can run within different ASs. However, the same PIM mode must run on all interfaces that belong to the same VPN (including interfaces with VPN bindings on ASBRs).

Inter-AS option B MDT-based MVPN

In inter-AS option B MDT-based MVPN, RPF vector and BGP connector are introduced:

·     RPF vector—Attribute encapsulated in a PIM join message. It is the next hop of BGP MDT route from the local PE to the remote PE. Typically, it is the ASBR in the local AS.

When a device receives the join message with the RPF vector, it first checks whether the RPF vector is its own IP address. If so, the device removes the RPF vector, and sends the message to its upstream neighbor according to the route to the remote PE. Otherwise, it keeps the RPF vector, looks up the route to the RPF vector, and sends the message to the next hop of the route. In this way, the PIM message can be forwarded across the ASs and an MDT is established.

·     BGP connector—Attribute shared by BGP peers when they exchange VPNv4 routes. It is the IP address of the remote PE.

The local PE fills the upstream neighbor address field with the BGP connector in a join message. This ensures that the message can pass the RPF check on the remote PE after it travels along the MT.

To implement inter-AS option B MDT-based MVPN, only one MVPN needs to be established for the two ASs. VPN multicast data is transmitted between different ASs on the public network within this MVPN.

As shown in Figure 8:

·     A VPN network involves AS 1 and AS 2.

·     PE 3 and PE 4 are the ASBRs for AS 1 and AS 2, respectively.

·     PE 3 and PE 4 are interconnected through MP-EBGP and treat each other as a P device.

·     PE 3 and PE 4 advertise VPN-IPv4 routes to each other through MP-EBGP.

·     An MT is established between PE 1 and PE 2 for delivering VPN multicast traffic across the ASs.

Figure 8 Inter-AS option B MDT-based MVPN

 

The establishment of the MVPN on the public network is as follows:

1.     PE 1 originates a PIM join message to join the SPT rooted at PE 2. In the join message, the upstream neighbor address is the IP address of PE 2 (the BGP connector). The RPF vector attribute is the IP address of PE 3. PE 1 encapsulates the join message as a public network packet and forwards it through the MTI.

2.     P 1 determines that the RPF vector is not an IP address of its own. It looks up the routing table for a route to PE 3, and forwards the packet to PE 3.

3.     PE 3 removes the RPF vector because the RPF vector is its own IP address. It fails to find a BGP MDT route to PE 2, so it encapsulates a new RPF vector (IP address of PE 4) in the packet and forwards it to PE 4.

4.     PE 4 removes the RPF vector because the RPF vector is its own IP address. It has a local route to PE 2, so it forwards the packet to P 2, which is the next hop of the route to PE 2.

5.     P 2 sends the packet to PE 2.

6.     PE 2 receives the packet on the MTI and decapsulates the packet. The receiving interface is the RPF interface of the RPF route back to PE 1 for the join message, and the join message passes the RPF check. The SPT from PE 1 to PE 2 is established.

While PE 1 joins the SPT rooted at PE 2, PE 2 also initiates a similar join process to join the SPT rooted at PE 1. The MDT is established when both SPTs are complete.

The public network supports only PIM-SSM on an inter-AS option B MDT-based MVPN network.

Inter-AS option C MDT-based MVPN

As shown in Figure 9:

·     A VPN network involves AS 1 and AS 2.

·     PE 3 and PE 4 are the ASBRs for AS 1 and AS 2, respectively.

·     PE 3 and PE 4 are interconnected through MP-EBGP and treat each other as a P device.

·     PEs in different ASs establish a multihop MP-EBGP session to advertise VPN-IPv4 routes to each other.

Figure 9 Inter-AS option C MDT-based MVPN

 

To implement inter-AS option C MDT-based MVPN, only one MVPN needs to be created for the two ASs. Multicast data is transmitted between the two ASs through the MVPN.

Multicast packets are delivered as follows:

1.     CE 1 forwards the VPN instance multicast packet to PE 1.

2.     PE 1 encapsulates the multicast packet into a public network multicast packet and forwards it to PE 3 through the MTI interface on the public network.

3.     PE 3 and PE 4 are interconnected through MP-EBGP, so PE 3 forwards the public network multicast packet to PE 4 along the VPN IPv4 route.

4.     The public network multicast packet arrives at the MTI interface of PE 2 in AS 2. PE 2 decapsulates the public network multicast packet and forwards the VPN multicast packet to CE 2.

MVPN extranet

MVPN extranet implements inter-VPN multicast traffic transmission. In MVPN extranet, the multicast source and receivers are in different VPNs. In practice, the MVPN extranet solution enables a multicast service deployed in one VPN to be provided to users in other VPNs.

The following terms are used in the MVPN extranet solution:

·     Source VPN instance—VPN instance to which the multicast source belongs.

·     Receiver VPN instance—VPN instance to which the multicast receiver belongs.

·     Source PE—PE directly connected to the multicast source.

·     Receiver PE—PE directly connected to multicast receivers.

As shown in Figure 10, multicast source Source 1 is in Site 1 of VPN A. Receiver 1 and Receiver 2 are located in Site 2 of VPN A and Site 1 of VPN B, respectively. MVPN enables Receiver 1 to receive multicast data from Source 1, and MVPN extranet enables Receiver 2 to receive multicast data from Source 1.

The MVPN extranet solution can be implemented through the source-PE-based MVPN extranet option or the receiver-PE-based MVPN extranet option.

Figure 10 MVPN extranet

 

Source-PE-based MVPN extranet option

To use this option, create a receiver VPN instance on the source PE and specify a default group for this VPN instance. The default group for the receiver VPN instance must be the same as the default group specified on other PEs in this VPN instance.

This option is available only in MDT-based MVPN.

As shown in Figure 11, perform the following tasks:

1.     For multicast traffic transmission from Site 1 to Site 2, configure MVPN for VPN A on PE 1 and PE 2.

2.     For multicast traffic transmission from Site 1 to Site 3, create VPN B and configure MVPN for VPN B on PE 1.

3.     Configure an MVPN extranet RPF selection policy for VPN B on PE 1.

Figure 11 Source-PE-based MVPN extranet option

 

When Receiver 2 in VPN B joins the multicast group that matches the policy, multicast packets from Source 1 are transmitted as follows:

1.     PE 1 replicates and encapsulates the packets for VPN A and VPN B and forwards them.

2.     The packets travel along default MDTs of VPN A and VPN B and arrive at PE 2 and PE 3.

3.     PE 2 and PE 3 decapsulate and forward the packets to receivers.

Receiver-PE-based MVPN extranet option

To use this option, create a source VPN instance on the receiver PE and specify a default group for this VPN instance. The default group for the source VPN instance must be the same as the default group specified on other PEs in this VPN instance.

This option is available in MDT-based MVPN, RSVP-TE-based MVPN, and mLDP-based MVPN. This example uses MDT-based MVPN.

As shown in Figure 12, configure PEs as follows:

1.     For multicast traffic transmission from Site 1 to Site 2, configure MVPN for VPN A on PE 1 and PE 2.

2.     For multicast traffic transmission from Site 1 to Site 3, create VPN A and configure MVPN for VPN A on PE 3.

3.     Configure an MVPN extranet RPF selection policy for VPN B on PE 3.

Figure 12 Receiver-PE-based MVPN extranet option

 

When Receiver 2 in VPN B joins the multicast group that matches the RPF selection policy, multicast packets from Source 1 are encapsulated on PE 1 and transmitted along the default MDT.

·     When the packets arrive at PE 2, PE 2 decapsulates the packets and forwards them to Receiver 1.

·     When the packets arrive at PE 3, PE 3 decapsulates the packets and forwards them to Receiver 2 based on the MVPN extranet RPF selection policy.

M6VPE

The multicast IPv6 VPN provider edge (M6VPE) feature enables PEs to transmit IPv6 multicast traffic of a VPN instance over a public network whose backbone supports only IPv4.

As shown in Figure 13, the public network runs IPv4 protocols, and sites of VPN instance VPN A run IPv6 multicast protocols. To transmit IPv6 multicast traffic between CE 1 and CE 2, configure M6VPE on the PEs.

Figure 13 M6VPE network

 

IPv6 multicast traffic forwarding over the IPv4 public network is as follows:

1.     CE 1 forwards an IPv6 multicast packet for VPN instance VPN A to PE 1.

2.     PE 1 encapsulates the IPv6 multicast packet with an IPv4 packet header and transmits the IPv4 packet in the IPv4 backbone network.

3.     PE 2 decapsulates the IPv4 packet and forwards the IPv6 multicast packet to CE 2.

Protocols and standards

·     RFC 6037, Cisco Systems' Solution for Multicast in BGP/MPLS IP VPNs

·     RFC 6513, Multicast in MPLS/BGP IP VPNs

·     RFC 6514, BGP Encodings and Procedures for Multicast in MPLS/BGP IP VPNs


Configuring multicast VPN

Multicast VPN tasks at a glance

MDT-based MVPN tasks at a glance

To configure multicast VPN, perform the following tasks:

1.     Configuring MDT-based MVPN

a.     Enabling IP multicast routing for a VPN instance

b.     Creating an MDT-based MVPN instance

c.     Creating an MVPN address family

d.     Specifying the default group

e.     Specifying the MVPN source interface

f.     Configuring MDT switchover parameters

g.     (Optional.) Configuring the RPF vector feature

h.     (Optional.) Enabling data group reuse logging

2.     Configuring BGP MDT

If PIM-SSM is running on the public network, you must configure BGP MDT.

a.     Configuring BGP MDT peers or peer groups

b.     (Optional.) Configuring a BGP MDT route reflector

c.     (Optional.) Preferring routes learned from a peer or peer group during optimal route selection

Configuring MDT-based MVPN

Prerequisites for configuring MDT-based MVPN

Before you configure MDT-based MVPN, complete the following tasks:

·     Configure a unicast routing protocol on the public network.

·     Configure MPLS L3VPN on the public network.

·     Configure PIM-DM, PIM-SM, BIDIR-PIM, or PIM-SSM on the public network.

Enabling IP multicast routing for a VPN instance

1.     Enter system view.

system-view

2.     Create a VPN instance and enter its view.

ip vpn-instance vpn-instance-name

For more information about this command, see MPLS Command Reference.

3.     Configure an RD for the VPN instance.

route-distinguisher route-distinguisher

For more information about this command, see MPLS Command Reference.

4.     Return to system view.

quit

5.     Enter interface view.

interface interface-type interface-number

6.     Associate the interface with the VPN instance.

ip binding vpn-instance vpn-instance-name

By default, an interface is associated with no VPN instance and belongs to the public network.

For more information about this command, see MPLS Command Reference.

7.     Return to system view.

quit

8.     Enable IP multicast routing for the VPN instance and enter MRIB view of the VPN instance.

IPv4:

multicast routing vpn-instance vpn-instance-name

By default, IPv4 multicast routing is disabled for a VPN instance.

For more information about this command, see IP Multicast Command Reference.

IPv6:

ipv6 multicast routing vpn-instance vpn-instance-name

By default, IPv6 multicast routing is disabled for a VPN instance.

For more information about this command, see IP Multicast Command Reference.
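
For example, the following commands are a minimal sketch of this procedure on a PE, assuming a VPN instance named vpna with RD 100:1 and a CE-facing interface GigabitEthernet 1/0/1 (the device name, VPN instance name, RD, interface, and prompts are illustrative, not values mandated by this guide):

# Create VPN instance vpna and configure its RD.
<PE1> system-view
[PE1] ip vpn-instance vpna
[PE1-vpn-instance-vpna] route-distinguisher 100:1
[PE1-vpn-instance-vpna] quit
# Associate the CE-facing interface with the VPN instance.
[PE1] interface gigabitethernet 1/0/1
[PE1-GigabitEthernet1/0/1] ip binding vpn-instance vpna
[PE1-GigabitEthernet1/0/1] quit
# Enable IPv4 multicast routing for the VPN instance.
[PE1] multicast routing vpn-instance vpna
[PE1-mrib-vpna] quit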

Creating an MDT-based MVPN instance

About this task

To provide multicast services for a VPN instance, you must create an MDT-based MVPN instance on PEs that belong to the VPN instance. After the MVPN instance is created, the system automatically creates MTIs and binds them with the VPN instance.

You can create one or more MDT-based MVPN instances on a PE.

Procedure

1.     Enter system view.

system-view

2.     Create an MDT-based MVPN instance and enter MVPN view.

multicast-vpn vpn-instance vpn-instance-name mode mdt
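
For example, assuming the hypothetical VPN instance vpna from the previous sketch, the following commands create an MDT-based MVPN instance for it and enter MVPN view (the instance name and prompts are illustrative):

[PE1] multicast-vpn vpn-instance vpna mode mdt
[PE1-mvpn-vpna] quit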

Creating an MVPN address family

About this task

You must create an MVPN IPv4 or IPv6 address family for a VPN instance before you can perform other MVPN configuration tasks for the VPN instance. For a VPN instance, configurations in MVPN IPv4 and IPv6 address family views apply to IPv4 and IPv6 multicast packets of the instance, respectively.

Procedure

1.     Enter system view.

system-view

2.     Enter MVPN view of a VPN instance.

multicast-vpn vpn-instance vpn-instance-name mode mdt

3.     Create an MVPN address family and enter MVPN address family view.

IPv4:

address-family ipv4

IPv6:

address-family ipv6

Specifying the default group

Restrictions and guidelines

You must specify the same default group on all PEs that belong to the same MVPN.

The default group for an MVPN must be different from the default group and the data group used by any other MVPN.

For an MVPN that transmits both IPv4 and IPv6 multicast packets, you must specify the same default group in MVPN IPv4 address family view and MVPN IPv6 address family view.

Procedure

1.     Enter system view.

system-view

2.     Enter MVPN view.

multicast-vpn vpn-instance vpn-instance-name mode mdt

3.     Enter MVPN address family view.

IPv4:

address-family ipv4

IPv6:

address-family ipv6

4.     Specify the default group.

default-group group-address
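
For example, the following sketch creates the MVPN IPv4 address family for the hypothetical VPN instance vpna and specifies 239.1.1.1 as the default group (the instance name and prompts are illustrative; 239.1.1.1 is the default group address used in the figures earlier in this document). Remember that the same default group must be specified on all PEs in this MVPN:

# Specify 239.1.1.1 as the default group for IPv4 multicast in VPN instance vpna.
[PE1] multicast-vpn vpn-instance vpna mode mdt
[PE1-mvpn-vpna] address-family ipv4
[PE1-mvpn-vpna-ipv4] default-group 239.1.1.1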

Specifying the MVPN source interface

About this task

An MTI of a VPN instance uses the IP address of the MVPN source interface as the source address to encapsulate multicast packets for the VPN instance.

Restrictions and guidelines

For the PE to obtain correct routing information, you must specify the interface used for establishing BGP peer relationship as the MVPN source interface.

For an MVPN that transmits both IPv4 and IPv6 multicast packets, you must specify the same MVPN source interface in MVPN IPv4 address family view and MVPN IPv6 address family view.

The MTI takes effect only after the default group and MVPN source interface are specified and the MTI obtains the public IP address of the MVPN source interface.

Procedure

1.     Enter system view.

system-view

2.     Enter MVPN view.

multicast-vpn vpn-instance vpn-instance-name mode mdt

3.     Enter MVPN address family view.

IPv4:

address-family ipv4

IPv6:

address-family ipv6

4.     Specify the MVPN source interface.

source interface-type interface-number

By default, no MVPN source interface is specified.
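
For example, assuming the BGP peer relationship is established on interface LoopBack 0, the following sketch specifies that interface as the MVPN source interface for the hypothetical VPN instance vpna (the interface number, instance name, and prompts are illustrative):

# Specify LoopBack 0 as the MVPN source interface.
[PE1] multicast-vpn vpn-instance vpna mode mdt
[PE1-mvpn-vpna] address-family ipv4
[PE1-mvpn-vpna-ipv4] source loopback 0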

Configuring MDT switchover parameters

About this task

In some cases, the traffic rate of the private network multicast data might fluctuate around the MDT switchover threshold. To avoid frequent switching of multicast traffic between the default MDT and the data MDT, you can specify a data delay period and a data hold-down period.

·     MDT switchover does not take place immediately after the multicast traffic rate exceeds the switchover threshold. It takes place after a data delay period, during which the traffic rate must stay higher than the switchover threshold.

·     Likewise, a backward switchover does not take place immediately after the multicast traffic rate drops below the MDT switchover threshold. It takes place after a data hold-down period, during which the traffic rate must stay lower than the switchover threshold.

Restrictions and guidelines

On a PE, the data group range for an MVPN cannot include the default group or data groups of any other MVPN. The data group ranges for different MVPNs on different PE devices cannot overlap with one another if the PIM mode is not PIM-SSM on the public network.

For an MVPN that transmits both IPv4 and IPv6 multicast packets, the data group range in MVPN IPv4 address family view and MVPN IPv6 address family view cannot overlap.

All VPN instances share the data group resources. As a best practice to avoid data group resource exhaustion, specify a reasonable data group range for a VPN instance.

If BIDIR-PIM runs in a VPN instance, the switchover from the default MDT to a data MDT is not supported.

Procedure

1.     Enter system view.

system-view

2.     Enter MVPN view.

multicast-vpn vpn-instance vpn-instance-name mode mdt

3.     Enter MVPN address family view.

IPv4:

address-family ipv4

IPv6:

address-family ipv6

4.     Configure the data group range and the switchover criteria.

data-group group-address { mask-length | mask } [ threshold threshold-value | acl acl-number ] *

By default, no data group range is configured, and the default MDT to data MDT switchover never occurs.

5.     Set the data delay period.

data-delay delay

By default, the data delay period is 3 seconds.

6.     Set the data hold-down period.

data-holddown delay

By default, the data hold-down period is 60 seconds.
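
For example, the following sketch configures a data group range of 239.2.2.0/28 with a switchover threshold of 100, a data delay period of 5 seconds, and a data hold-down period of 80 seconds for the hypothetical VPN instance vpna. All values and prompts are illustrative; see the command reference for the threshold unit and value ranges, and make sure the data group range does not include the default group or data groups of any other MVPN:

# Configure the data group range and switchover timers for IPv4 multicast in VPN instance vpna.
[PE1] multicast-vpn vpn-instance vpna mode mdt
[PE1-mvpn-vpna] address-family ipv4
[PE1-mvpn-vpna-ipv4] data-group 239.2.2.0 28 threshold 100
[PE1-mvpn-vpna-ipv4] data-delay 5
[PE1-mvpn-vpna-ipv4] data-holddown 80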

Configuring the RPF vector feature

About this task

In inter-AS MDT-based MVPN, this feature enables the device to insert the RPF vector (IP address of the ASBR in the local AS) in PIM join messages for other devices to perform RPF check.

Restrictions and guidelines

Perform this task on PEs that have attached receivers.

For the device to work with other manufacturers' products on the RPF vector, you must enable RPF vector compatibility for all H3C P devices and H3C PE devices on the public network.

Procedure

1.     Enter system view.

system-view

2.     Enter MRIB view of a VPN instance.

multicast routing vpn-instance vpn-instance-name

3.     Enable the RPF vector feature.

rpf proxy vector

By default, the RPF vector feature is disabled.

4.     Enable RPF vector compatibility.

multicast rpf-proxy-vector compatible

By default, RPF vector compatibility is disabled.
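
For example, the following sketch enables the RPF vector feature in MRIB view of the hypothetical VPN instance vpna on a receiver-side PE (the instance name and prompts are illustrative). Enable RPF vector compatibility in addition only when the public network contains devices from other vendors:

# Enable the RPF vector feature for VPN instance vpna.
[PE1] multicast routing vpn-instance vpna
[PE1-mrib-vpna] rpf proxy vector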

Enabling data group reuse logging

About this task

For a given VPN, the number of VPN multicast streams to be switched to data MDTs might exceed the number of addresses in the data group range. In this case, the VPN instance on the source-side PE can reuse the addresses in the address range. With data group reuse logging enabled, the address reuse information will be logged.

The data group reuse log messages are attributed to the MVPN module and have a severity level of informational. For more information about managing log messages, see information center configuration in Network Management and Monitoring Configuration Guide.

Procedure

1.     Enter system view.

system-view

2.     Enter MVPN view.

multicast-vpn vpn-instance vpn-instance-name mode mdt

3.     Enter MVPN address family view.

IPv4:

address-family ipv4

IPv6:

address-family ipv6

4.     Enable data group reuse logging.

log data-group-reuse

By default, data group reuse logging is disabled.
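
For example, the following sketch enables data group reuse logging for IPv4 multicast in the hypothetical VPN instance vpna (the instance name and prompts are illustrative):

# Enable data group reuse logging.
[PE1] multicast-vpn vpn-instance vpna mode mdt
[PE1-mvpn-vpna] address-family ipv4
[PE1-mvpn-vpna-ipv4] log data-group-reuse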

Configuring BGP MDT

Configuring BGP MDT peers or peer groups

About this task

Perform this task so that the PE can exchange MDT information with the BGP peer or peer group. MDT information includes the IP address of the PE and default group to which the PE belongs. On a public network running PIM-SSM, the multicast VPN establishes a default MDT rooted at the PE (multicast source) based on the MDT information.

Prerequisites

Before you configure a BGP MDT peer or peer group, you must create a BGP peer or peer group in BGP instance view. For more information about creating a BGP peer or peer group, see BGP configuration in Layer 3—IP Routing Configuration Guide.

Procedure

1.     Enter system view.

system-view

2.     Enter BGP instance view.

bgp as-number [ instance instance-name ]

3.     Create a BGP IPv4 MDT address family and enter its view.

address-family ipv4 mdt

By default, no BGP IPv4 MDT address family exists.

4.     Enable the device to exchange MDT routing information with the BGP peer or the peer group.

peer { group-name | ip-address [ mask-length ] } enable

By default, the device cannot exchange BGP MDT routing information with a BGP peer or peer group.

For more information about this command, see BGP commands in Layer 3—IP Routing Command Reference.
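
For example, the following sketch enables MDT routing information exchange with an IBGP peer at 1.1.1.2 in AS 100 (the AS number, peer address, and prompts are illustrative; per the prerequisites, the peer must already exist in BGP instance view, as shown in the first two commands):

# Create BGP peer 1.1.1.2 and enable MDT routing information exchange with it.
[PE1] bgp 100
[PE1-bgp-default] peer 1.1.1.2 as-number 100
[PE1-bgp-default] address-family ipv4 mdt
[PE1-bgp-default-mdt] peer 1.1.1.2 enable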

Configuring a BGP MDT route reflector

About this task

·     Configuring a BGP MDT route reflector—BGP MDT peers in the same AS must be fully meshed to maintain connectivity. However, when multiple BGP MDT peers exist in an AS, connection establishment among them might result in increased costs. To reduce connections between BGP MDT peers, you can configure one of them as a route reflector and specify other devices as clients.

·     Disabling routing reflection between clients—When clients establish BGP MDT connections with the route reflector, the route reflector forwards (or reflects) BGP MDT routing information between clients. The clients are not required to be fully meshed. To save bandwidth if the clients have been fully meshed, you can disable the routing reflection between clients by using the undo reflect between-clients command.

·     Configuring the cluster ID of the route reflector—The route reflector and its clients form a cluster. Typically, a cluster has only one route reflector whose router ID identifies the cluster. However, you can configure several route reflectors in a cluster to improve network reliability. To avoid routing loops, make sure the route reflectors in a cluster have the same cluster ID.

Procedure

1.     Enter system view.

system-view

2.     Enter BGP instance view.

bgp as-number [ instance instance-name ]

3.     Enter BGP IPv4 MDT address family view.

address-family ipv4 mdt

4.     Configure the device as a route reflector and specify its peers or peer groups as clients.

peer { group-name | ip-address [ mask-length ] } reflect-client

5.     (Optional.) Disable route reflection between clients.

undo reflect between-clients

By default, route reflection between clients is enabled.

For more information about this command, see BGP commands in Layer 3—IP Routing Command Reference.

6.     (Optional.) Configure the cluster ID of the route reflector.

reflector cluster-id { cluster-id | ip-address }

By default, a route reflector uses its router ID as the cluster ID.

For more information about this command, see BGP commands in Layer 3—IP Routing Command Reference.
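
For example, the following sketch configures the device as a BGP MDT route reflector with client 1.1.1.2 and sets the cluster ID to 1.1.1.1 (the AS number, addresses, and prompts are illustrative):

# Specify peer 1.1.1.2 as a client and configure the cluster ID.
[PE1] bgp 100
[PE1-bgp-default] address-family ipv4 mdt
[PE1-bgp-default-mdt] peer 1.1.1.2 reflect-client
[PE1-bgp-default-mdt] reflector cluster-id 1.1.1.1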

Preferring routes learned from a peer or peer group during optimal route selection

About this task

By default, BGP selects an optimal route based on route selection rules. It does not prefer routes learned from any peer or peer groups during optimal route selection.

After you perform this task, routes learned from the specified peer or peer group take precedence over other routes if these routes have the same prefix. BGP uses this rule to continue route selection if it fails to select an optimal route by using the peer type selection rule. If BGP still fails route selection, it uses the IGP metric selection rule to select an optimal route. For more information about BGP route selection rules, see BGP configuration in Layer 3—IP Routing Configuration Guide.

Procedure

1.     Enter system view.

system-view

2.     Enter BGP instance view.

bgp as-number [ instance instance-name ]

3.     Enter BGP IPv4 MDT address family view:

address-family ipv4 mdt

4.     Prefer routes learned from the specified peer or peer group during optimal route selection.

peer { group-name | ipv4-address [ mask-length ] } high-priority

By default, BGP does not prefer routes learned from any peer or peer groups during optimal route selection.
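
For example, the following sketch prefers BGP MDT routes learned from peer 1.1.1.2 during optimal route selection (the AS number, peer address, and prompts are illustrative):

# Prefer MDT routes learned from peer 1.1.1.2.
[PE1] bgp 100
[PE1-bgp-default] address-family ipv4 mdt
[PE1-bgp-default-mdt] peer 1.1.1.2 high-priority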

Configuring an MVPN extranet RPF selection policy

About configuring MVPN extranet RPF selection policies

MVPN extranet RPF routing policies are used for multicast transmission when multicast sources and receivers are located in different VPNs.

Restrictions and guidelines for configuring MVPN extranet RPF selection policies

The PIM modes in the source VPN instance and the receiver VPN instance must be the same. Only PIM-SM and PIM-SSM are supported.

Multicast packets can only be forwarded between two VPNs. The receiver VPN instance cannot also be the source VPN instance.

In PIM-SM mode, you can configure only one RPF selection policy for a multicast group in a VPN instance.

If an IPv4 MVPN extranet RPF selection policy with only the multicast group address specified is configured in the receiver VPN instance, the multicast traffic for the intra-VPN transmission will be interrupted.

To implement source-specific RPF selection in MVPN extranet, you must configure two MVPN extranet RPF routing policies as follows:

·     In one policy, specify the address of the RP designated to the multicast group that requires inter-VPN multicast communication as the source address.

·     In the other policy, specify the multicast source in the source VPN instance as the source address.

To implement source-and-group-specific RPF selection in MVPN extranet, you must configure two MVPN extranet RPF routing policies as follows:

·     In one policy, specify the address of the RP designated to the multicast group as the source address, and specify the multicast group.

·     In the other policy, specify the multicast source in the source VPN instance as the source address, and specify the multicast group.

·     Make sure the multicast groups in the two policies are the same to avoid inter-VPN multicast transmission failure.

Common Layer 3 multicast and MDT-based MVPN support the source-PE-based MVPN extranet option and receiver-PE-based MVPN extranet option.

For the source-PE-based MVPN extranet option, if PIM-SM mode is used, the RP of the receiver VPN instance must be configured on the multicast source-side device.

Prerequisites for configuring MVPN extranet RPF selection policies

For the source-PE-based MVPN extranet option, configure MDT-based MVPN first. For more information, see "Configuring MDT-based MVPN."

For the receiver-PE-based MVPN extranet option, configure MDT-based MVPN first. For more information, see "Configuring MDT-based MVPN."

Configuring an IPv4 MVPN extranet RPF selection policy

1.     Enter system view.

system-view

2.     Enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

3.     Configure an IPv4 MVPN extranet RPF selection policy.

multicast extranet select-rpf [ vpn-instance vpn-instance-name ] { source source-address { mask | mask-length } | group group-address { mask | mask-length } } *

By default, no IPv4 MVPN extranet RPF selection policies are configured.
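
For example, in the receiver-PE-based scenario in Figure 12, a sketch like the following could be used on the receiver PE (PE 3). In MRIB view of the hypothetical receiver VPN instance vpnb, it selects the hypothetical source VPN instance vpna for RPF lookups of traffic sent by source 192.1.1.1 to groups in 225.1.1.0/24 (the instance names, addresses, mask lengths, and prompts are illustrative):

# Configure an IPv4 MVPN extranet RPF selection policy in receiver VPN instance vpnb.
[PE3] multicast routing vpn-instance vpnb
[PE3-mrib-vpnb] multicast extranet select-rpf vpn-instance vpna source 192.1.1.1 32 group 225.1.1.0 24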

Configuring an IPv6 MVPN extranet RPF selection policy

1.     Enter system view.

system-view

2.     Enter IPv6 MRIB view.

ipv6 multicast routing [ vpn-instance vpn-instance-name ]

3.     Configure an IPv6 MVPN extranet RPF selection policy.

ipv6 multicast extranet select-rpf { vpn-instance vpn-instance-name } { source ipv6-source-address prefix-length | group ipv6-group-address prefix-length } *

By default, no IPv6 MVPN extranet RPF selection policies are configured.
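
For example, the following sketch configures a comparable IPv6 policy in IPv6 MRIB view of the hypothetical receiver VPN instance vpnb, selecting the hypothetical source VPN instance vpna for traffic from source 2001:db8::1 to group ff3e::1 (the instance names, IPv6 addresses, prefix lengths, and prompts are illustrative):

# Configure an IPv6 MVPN extranet RPF selection policy in receiver VPN instance vpnb.
[PE3] ipv6 multicast routing vpn-instance vpnb
[PE3-mrib6-vpnb] ipv6 multicast extranet select-rpf vpn-instance vpna source 2001:db8::1 128 group ff3e::1 128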

Display and maintenance commands for multicast VPN

Execute display commands in any view and reset commands in user view.

Display and maintenance commands for MDT-based MVPN:

 

·     Display BGP MDT peer group information:
display bgp [ instance instance-name ] group ipv4 mdt [ group-name group-name ]

·     Display information about BGP MDT peers or peer groups:
display bgp [ instance instance-name ] peer ipv4 mdt [ ip-address mask-length | { ip-address | group-name group-name } log-info | [ ip-address ] verbose ]

·     Display information about BGP MVPN peers or peer groups:
display bgp [ instance instance-name ] peer ipv4 mvpn [ ip-address mask-length | { ip-address | group-name group-name } log-info | [ ip-address ] verbose ]

·     Display BGP MDT routing information:
display bgp [ instance instance-name ] routing-table ipv4 mdt [ route-distinguisher route-distinguisher ] [ ip-address [ advertise-info ] ]

·     Display information about BGP update groups for the BGP IPv4 MDT address family:
display bgp [ instance instance-name ] update-group ipv4 mdt [ ip-address ]

·     Display information about data groups for IPv4 multicast transmission that are received in a VPN instance:
display multicast-vpn vpn-instance vpn-instance-name data-group receive [ brief | [ active | group group-address | sender source-address | vpn-source-address [ mask { mask-length | mask } ] | vpn-group-address [ mask { mask-length | mask } ] ] * ]

·     Display information about data groups for IPv6 multicast transmission that are received in a VPN instance:
display multicast-vpn vpn-instance vpn-instance-name ipv6 data-group receive [ brief | [ active | group group-address | sender source-address | vpn-source-address [ mask-length ] | vpn-group-address [ mask-length ] ] * ]

·     Display information about data groups for IPv4 multicast transmission that are sent in a VPN instance:
display multicast-vpn vpn-instance vpn-instance-name data-group send [ group group-address | reuse interval | vpn-source-address [ mask { mask-length | mask } ] | vpn-group-address [ mask { mask-length | mask } ] ] *

·     Display information about data groups for IPv6 multicast transmission that are sent in a VPN instance:
display multicast-vpn vpn-instance vpn-instance-name ipv6 data-group send [ group group-address | reuse interval | vpn-source-address [ mask-length ] | vpn-group-address [ mask-length ] ] *

·     Display information about default groups for IPv4 multicast transmission:
display multicast-vpn [ vpn-instance vpn-instance-name ] default-group { local | remote }

·     Display information about default groups for IPv6 multicast transmission:
display multicast-vpn [ vpn-instance vpn-instance-name ] ipv6 default-group { local | remote }

·     Reset BGP sessions for the BGP IPv4 MDT address family:
reset bgp [ instance instance-name ] { as-number | ip-address [ mask-length ] | all | external | group group-name | internal } ipv4 mdt

 

For more information about the display bgp group, display bgp peer, display bgp update-group, and reset bgp commands, see BGP commands in Layer 3—IP Routing Command Reference.

 

 

Troubleshooting MVPN

A default MDT cannot be established

Symptom

The default MDT cannot be established, and PIM neighboring relationships cannot be established between the interfaces of PE devices that are in the same VPN instance.

Solution

To resolve the problem:

1.     Use the display interface command to examine the MTI interface state and address encapsulation on the MTI.

2.     Use the display multicast-vpn default-group command to verify that the same default group address has been configured for the same VPN instance on different PE devices.

3.     Use the display pim interface command to verify the following:

¡     PIM is enabled on a minimum of one interface of the same VPN on different PE devices.

¡     The same PIM mode is running on all the interfaces of the same VPN instance on different PE devices and on all the interfaces of the P router.

4.     Use the display ip routing-table command to verify that a unicast route exists from the VPN instance on the local PE device to the same VPN instance on each remote PE device.

5.     Use the display bgp peer command to verify that the BGP peer connections have been correctly configured.

6.     If the problem persists, contact H3C Support.

An MVRF cannot be created

Symptom

A VPN instance cannot create an MVRF correctly.

Solution

To resolve the problem:

1.     Use the display pim bsr-info command to verify that the BSR information exists on the public network and VPN instance. If it does not, verify that a unicast route exists to the BSR.

2.     Use the display pim rp-info command to examine the RP information. If no RP information is available, verify that a unicast route exists to the RP. Use the display pim neighbor command to verify that the PIM adjacencies have been correctly established on the public network and the VPN.

3.     Use the ping command to examine the connectivity between the VPN DR and the VPN RP.

4.     If the problem persists, contact H3C Support.
