
Multicast VPN Technology White Paper

Copyright © 2025 New H3C Technologies Co., Ltd. All rights reserved.

No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd.

Except for the trademarks of New H3C Technologies Co., Ltd., any trademarks that may be mentioned in this document are the property of their respective owners.

This document provides generic technical information, some of which might not be applicable to your products.

 



Overview

Background

The application of IP multicast is becoming widespread, and the Virtual Private Network (VPN) technology is increasingly common in enterprise networks. Almost all current e-government networks, power data networks, and other enterprise networks are based on the BGP/MPLS VPN architecture, which isolates data by dividing different departments into different VPNs. Similarly, multicast services within departments, such as video conferencing and data sharing, also require VPN isolation, making multicast VPN demand increasingly urgent. However, RFC 4364 only proposes solutions for unicast VPN services and does not provide specific planning or recommendations for multicast VPN services. Therefore, applying the multicast technology in VPN environments has become a critical issue.

Internet service providers (ISPs) aim to offer multicast VPN services to users over the existing BGP/MPLS VPN infrastructure, in a way that is scalable and leverages the backbone's multicast capabilities. VPN users want the CEs at each site to establish PIM neighbor relationships only with the corresponding PE, without involving the CEs at remote sites. Additionally, users expect their network configurations and existing multicast application plans (for example, PIM modes, RP location, and RP discovery mechanisms) to remain unchanged. The following issues must also be addressed when multicast services are delivered over BGP/MPLS VPN networks:

·     Overlapping private address spaces—BGP/MPLS VPN networks allow overlapping private address spaces across different VPNs, so multicast source and group addresses might overlap. The PE must correctly forward private network multicast data to users within the same VPN.

·     Public network multicast support—Private network multicast data should be forwarded as multicast on the public network where possible, to significantly reduce the data load on the public network and save bandwidth.

·     RPF checks for private multicast data in the public network—Different from unicast packets, a multicast packet can be forwarded only if the source address and input interface of that packet pass the RPF check. In BGP/MPLS VPN networks, a provider (P) device cannot directly forward private network multicast data because it has no private network routes.

·     On-demand transmission of private multicast data—A VPN consists of multiple sites connected to different PEs, but not every site requires multicast data. Private multicast data should only flow to PEs that need it, reducing the load on PEs.

To address these issues, you can deploy the multicast VPN technology in BGP/MPLS VPN networks, enabling the transmission of private multicast data via the public network to remote private sites.

Technical advantages

Comware uses the Multicast Virtual Private Network (MVPN) solution to implement multicast services in BGP/MPLS VPN networks. MVPN supports the following modes: MDT, RSVP-TE, and mLDP. The RSVP-TE mode and mLDP mode are next-generation MVPNs (NG MVPNs).

MVPN has the following benefits:

·     Network upgrades are simple. Only PEs need to be upgraded. CE and P devices do not require upgrades or configuration changes, making MVPN transparent to them. The routes on the public network remain stable, because the public network is not aware of changes in private network multicast services.

·     No changes are required for transmitting unicast routes in the private network. The MVPN scheme uses existing BGP/MPLS VPN technology to transmit routes of multicast sources within the VPN using VPN-IPv4 routes, allowing receivers and PE devices to obtain unicast routes to the multicast sources.

·     The MDT-based MVPN addresses RPF check issues by using the multicast forwarding capability of the public network. A PE encapsulates private network multicast packets as public network multicast packets and transmits them over the public network.

·     In an NG MVPN, the public network uses BGP to transmit private network multicast protocol packets and routes, eliminating the need for other multicast protocols and simplifying network deployment and maintenance.

·     In an NG MVPN, the public network uses mature MPLS label forwarding and tunnel protection technologies, enhancing the service quality and reliability of multicast traffic.

The MVPN solution is based on the existing BGP/MPLS VPN architecture, making it simple to upgrade and highly compatible. It is the inevitable trend for enabling multicast support in VPNs.

MDT-based MVPN

Basic concepts

·     MVPN—An MVPN logically defines the transmission boundary of the multicast traffic of a VPN over the public network. It also physically identifies all the PEs that support that VPN instance on the public network. Each MVPN is dedicated to serving a specific VPN. All private network multicast data that belongs to this VPN is transmitted within this MVPN. Different VPN instances correspond to different MVPNs.

·     MVPN instance—A virtual instance that provides multicast services for an MVPN on a PE. Different MVPN instances isolate services of different MVPNs.

·     Multicast distribution tree (MDT)—An MDT is a multicast distribution tree constructed by all PEs in the same VPN. MDTs include the default MDT and the data MDT.

·     Multicast tunnel (MT)—An MT is a tunnel that interconnects all PEs in an MVPN. The local PE encapsulates a VPN multicast packet into a public network multicast packet and forwards it through the MT over the public network. The remote PE decapsulates the public network multicast packet to get the original VPN multicast packet.

·     Multicast tunnel interface (MTI)—An MTI is the entrance or exit of an MT, equivalent to an entrance or exit of an MVPN. MTIs are automatically created when the MVPN for the VPN instance is created. PEs use the MTI to access the MT. The local PE sends VPN data out of the MTI. The remote PEs receive the private data from their MTIs. An MTI runs the same PIM mode as the VPN instance to which the MTI belongs. PIM is enabled on MTIs when a minimum of one interface in the VPN instance is enabled with PIM. When PIM is disabled on all interfaces in the VPN instance, PIM is also disabled on MTIs.

·     Default group—A default group is a unique multicast address assigned to each MVPN on the public network. It is the unique identifier of an MVPN on the public network and helps build the default MDT for an MVPN on the public network. A PE encapsulates a VPN multicast packet (a multicast protocol packet or a multicast data packet) into a public network multicast packet. The default group address is used as the public network multicast group.

·     Default MDT—A default MDT uses a default group address as its group address. In a VPN, the default MDT is uniquely identified by the default group. A default MDT is automatically created after the default group is specified and will always exist on the public network, regardless of the presence of any multicast services on the public network or the VPN.

·     Data group—An MVPN is assigned a unique data group for MDT switchover. If you use an ACL to match the multicast traffic of an MVPN, the ingress PE selects a least used address from the data group range to encapsulate the matching multicast packets of the MVPN. Other PEs are notified to use the address to forward the matching traffic of the MVPN. This initiates the switchover to the data MDT.

·     Data MDT—A data MDT is an MDT that uses a data group as its group address. At MDT switchover, PEs with downstream receivers join a data group to build a data MDT. The ingress PE forwards the encapsulated MVPN multicast traffic along the data MDT over the public network. (A sketch after this list illustrates how the default group and data groups are used.)
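
The relationship between the default group and the data group range can be illustrated with a minimal Python sketch. This is an assumption-level illustration only (the class and method names, such as MvpnGroupAllocator and allocate_data_group, are hypothetical and not Comware code); it shows one way the ingress PE could pick the least used address from the data group range at MDT switchover.

# Illustrative sketch only: default group vs. data group range for one MVPN.
from collections import Counter

class MvpnGroupAllocator:
    def __init__(self, default_group, data_group_range):
        self.default_group = default_group              # identifies the MVPN; used by the default MDT
        self.data_group_range = list(data_group_range)  # address pool reserved for data MDTs
        self.usage = Counter({g: 0 for g in self.data_group_range})

    def allocate_data_group(self):
        """Select the least used address from the data group range for an MDT switchover."""
        group = min(self.data_group_range, key=lambda g: self.usage[g])
        self.usage[group] += 1
        return group

alloc = MvpnGroupAllocator("239.1.1.1", ["239.2.2.%d" % i for i in range(16)])
print(alloc.allocate_data_group())    # the first switchover picks 239.2.2.0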

Fundamentals

The basic idea behind MDT-based MVPN is that a default MDT is maintained for each VPN on the public network. Multicast protocol packets and data packets from any site in a VPN are forwarded down the default MDT to all PEs in the MVPN. A PE forwards multicast packets to CEs if it has attached receivers or drops them if it does not have attached receivers.

For a VPN instance, multicast data transmission on the public network is transparent. The VPN data is exchanged between the MTIs of the local PE and the remote PE. This implements the seamless transmission of the VPN data over the public network. However, the multicast data transmission process (the MDT transmission process) over the public network is very complicated.

 

 

NOTE:

For simplicity, this document uses P- to indicate the public network and C- to indicate a VPN instance. For example, this document refers to a multicast protocol packet and a multicast data packet in a VPN as a C-Control-Packet and a C-Data-Packet, respectively. C-Control-Packets and C-Data-Packets are collectively referred to as C-Packets. This document refers to a multicast protocol packet and a multicast data packet on the public network as a P-Control-Packet and a P-Data-Packet, respectively. P-Control-Packets and P-Data-Packets are collectively referred to as P-Packets.

 

Figure 1 MDT-based MVPN

 

As shown in Figure 1, the three PEs belong to the same MVPN, and MTs have been established among the PEs. The source-side PE (PE 1) encapsulates a C-Packet in a P-Packet and forwards it to the PEs at the other sites. The remote PEs handle the P-Packet as follows (see also the sketch after this list):

·     If a PE (for example, PE 2) has downstream receivers, it decapsulates the C-Packet from the P-Packet and forwards the packet to the CE.

·     If a PE (for example, PE 3) does not have downstream receivers, it drops the P-Packet.
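
The encapsulation and the per-PE decision can be sketched as follows in Python. This is purely illustrative (the dictionary fields are assumptions, not a wire format): the outer P-Packet simply carries the MVPN source interface address and the default group, and a remote PE either decapsulates or drops it.

# Illustrative sketch: source-side PE wraps a C-Packet into a P-Packet addressed to the
# default group; a remote PE decapsulates or drops it depending on receiver presence.
def encapsulate(c_packet, mvpn_source_addr, default_group):
    # Outer header uses public network addresses: MVPN source interface -> default group.
    return {"src": mvpn_source_addr, "dst": default_group, "payload": c_packet}

def on_receive(p_packet, has_downstream_receivers):
    if not has_downstream_receivers:
        return None                      # PE 3 in Figure 1: drop the P-Packet
    return p_packet["payload"]           # PE 2 in Figure 1: decapsulate and forward to the CE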

The preceding scheme has a serious drawback: Multicast data in the public network is forwarded down the default MDT to all PEs within the MVPN, regardless of whether a PE has downstream multicast receivers. This results in bandwidth waste and increased PE processing load.

To overcome the drawback, MDT-based MVPN offers a compromise between multicast route optimization and scalability: multicast services with low traffic are transmitted down the default MDT, and multicast services with high traffic are transmitted down a data MDT (with the data group as the multicast group address) to PEs with downstream receivers.

Interface types on a PE

A PE has the following interface types:

·     Provider Network Interface (PNI)—Interface connecting to a P device. Packets received or sent on a PNI are forwarded according to the public network routing table.

·     Customer Network Interface (CNI)—Interface connecting to a CE device. Packets received or sent on a CNI are forwarded according to the routing table of the VPN instance to which the CNI belongs.

·     MTI—A virtual interface used for configuring the default MDT and PIM. After an IBGP connection is established, an MTI is created and comes up automatically. An MTI is the entrance or exit of an MT, equivalent to an entrance or exit of an MVPN.

PIM neighboring relationships

As shown in Figure 2, the following types of PIM neighboring relationships are established before MDT transmission:

·     PE-P PIM neighboring relationship—Established between the public network interface on a PE and the peer interface on the P device over the link.

·     PE-CE PIM neighboring relationship—Established between a PE interface that is bound with the VPN instance and the peer interface on the CE over the link.

·     PE-PE PIM neighboring relationship—Established between PEs that are in the same VPN instance after they receive the PIM hello packets.

Figure 2 PIM neighboring relationships

 

In the MDT-based MVPN scheme, PIM in the public network establishes MTs between PEs and provides non-VPN multicast services. PIM in a VPN instance establishes PIM neighbor relationships between PEs and CEs and establishes PIM neighbor relationships among PEs through MTs. A PIM neighbor relationship established between a PE and a CE creates a multicast routing table for the VPN instance and discovers RP information within the VPN. A PIM neighbor relationship established between PEs through an MT identifies RPF neighbors and detects the PIM capability of peer PEs.

RPF check

The RPF check is a crucial part of the PIM protocol. PIM uses the unicast routing table to determine RPF information, including RPF interface information for packet checks and RPF neighbor information for PIM join/prune messages. The RPF check methods differ for the public and private networks.

RPF check for the public network side

When the PE performs an RPF check for the public network side, the check process is the same as a scenario without a multicast VPN. As shown in Figure 3, the default MDT has not been established. PE 2 performs an RPF check on multicast packets from P. The RPF interface is PE 2's PNI (Interface A), and the RPF neighbor is P.

Figure 3 RPF check for the public network side

 

RPF check for the private network side

When a PE performs an RPF check on the private network side, there are two scenarios:

·     For packets from a local multicast source, the check process is the same as a scenario without a multicast VPN. As shown in Figure 4, PE 1 performs an RPF check on multicast packets from CE 1. The RPF interface is PE 1's CNI (Interface A), and the RPF neighbor is CE 1.

Figure 4 RPF check on packets from a local multicast source

 

·     For packets from a remote multicast source, the RPF interface is the MTI because each MVPN instance has only one MTI. If a remote PE is both the next hop in the local PE's BGP route to the multicast source and a PIM neighbor, it is the local PE's RPF neighbor. As shown in Figure 5, the default MDT has been established. PE 1 sends multicast packets to PE 2 through the MT. PE 2 performs an RPF check on multicast packets from PE 1. The RPF interface is PE 2's MTI (MTunnel0), and the RPF neighbor is PE 1. (A sketch after Figure 5 condenses these RPF rules.)

Figure 5 RPF check on packets from a remote multicast source
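
The RPF selection rules for the private network side can be condensed into the following Python sketch. It is an assumption-level illustration (the route dictionary fields are hypothetical), not Comware logic.

# Illustrative sketch: RPF interface/neighbor selection for a C-packet in a VPN instance.
def select_rpf(vpn_route_to_source, pim_neighbors):
    """vpn_route_to_source: the VPN unicast route to the source; 'remote' means it was
    learned from a remote PE through BGP."""
    if not vpn_route_to_source["remote"]:
        # Local source: same as a scenario without multicast VPN.
        return vpn_route_to_source["interface"], vpn_route_to_source["next_hop"]
    # Remote source: the RPF interface is the MTI; the RPF neighbor is the remote PE
    # that is both the BGP next hop toward the source and a PIM neighbor.
    next_hop = vpn_route_to_source["next_hop"]
    if next_hop in pim_neighbors:
        return "MTunnel0", next_hop
    return None, None                    # RPF check fails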

 

Default MDT

Default MDT establishment

The default MDT is a multicast distribution tree established among all PEs within the same MVPN. All multicast protocol packets and data packets exchanged between PEs are forwarded through the MT formed by this tree. The default MDT always exists on the public network, regardless of the presence of any multicast services on the public network or the VPN.

 

 

NOTE:

·     An RPF check can succeed only if the multicast source address of the default MDT is the interface IP address used by a PE to establish IBGP connections with other PEs.

·     The multicast group address (default group) of the default MDT is configured by the administrator on each PE. All PEs within the same MVPN must be configured with the same group address, and PEs in different MVPNs must be configured with different group addresses.

 

The multicast routing protocol running on the public network can be PIM-DM, PIM-SM, BIDIR-PIM, or PIM-SSM. The process of creating a default MDT is different in these PIM modes.

Default MDT establishment in a PIM-DM network

Figure 6 Default MDT establishment in a PIM-DM network

 

As shown in Figure 6, PIM-DM is enabled on the network, and all PEs support VPN instance A. The process of establishing a default MDT is as follows:

1.     To establish PIM neighboring relationships with PE 2 and PE 3 through the MTI for VPN instance A, PE 1 does the following:

a.     Encapsulates the PIM protocol packet of the private network into a public network multicast data packet. PE 1 does this by specifying the source address as the IP address of the MVPN source interface and the multicast group address as the default group address.

b.     Sends the multicast data packet to the public network.

Because the other PEs that support VPN instance A are members of the default group, PE 1 initiates a flood-prune process on the entire public network for VPN instance A. A (11.1.1.1, 239.1.1.1) entry is created on each device along the path on the public network. This forms an SPT with PE 1 as the root, and PE 2 and PE 3 as leaves.

2.     At the same time, PE 2 and PE 3 separately initiate a similar flood-prune process.

Finally, three independent SPTs are established in the MVPN, constituting the default MDT in the PIM-DM network.

Default MDT establishment in a PIM-SM network

Figure 7 Default MDT establishment

 

As shown in Figure 7, the process of establishing a default MDT is as follows:

1.     PE 1 initiates a join to the public network RP by specifying the multicast group address as the default group address in the join message. A (*, default group) entry is created on each device along the path on the public network.

2.     At the same time, PE 2 and PE 3 separately initiate a similar join process.

Finally, an RPT is established in the MVPN, with the public network RP as the root and PE 1, PE 2, and PE 3 as leaves.

3.     After PIM is configured for the VPN instance on PE 1, PIM periodically multicasts hello messages through the MTI. PE 1 encapsulates the PIM hello messages into P-Data-Packets, with the MVPN source interface's address as the source address and the default group as the destination address. It then registers with the RP on the public network. An (S, default group) entry is created on each device on the public network. At the same time, PE 2 and PE 3 separately initiate a similar register process.

Finally, three SPTs between the PEs and the RP are established in the MVPN.

In the PIM-SM network, the RPT (with the RP as the root and PE 1, PE 2, and PE 3 as leaves) and the three independent SPTs constitute the default MDT.

Default MDT establishment in a BIDIR-PIM network

Figure 8 Default MDT establishment in a BIDIR-PIM network

 

As shown in Figure 8, BIDIR-PIM is enabled on the public network, and PE 1, PE 2, and PE 3 support VPN instance A. The process of establishing a default MDT is as follows:

1.     PE 1 initiates a join to the public network RP by specifying the multicast group address as the default group address in the join message. A (*, 239.1.1.1) entry is created on each device along the path on the public network.

At the same time, PE 2 and PE 3 separately initiate a similar join process. Finally, a receiver-side RPT is established in the MVPN, with the public network RP as the root and PE 1, PE 2, and PE 3 as leaves.

2.     PE 1 sends a multicast packet with the default group address as the multicast group address. The DF of each network segment on the public network forwards the multicast packet to the RP. Each device on the path creates a (*, 239.1.1.1) entry.

At the same time, PE 2 and PE 3 separately initiate a similar process. Finally, three source-side RPTs are established in the MVPN, with PE 1, PE 2, and PE 3 as roots and the public network RP as a leaf.

The receiver-side RPT and the three source-side RPTs constitute the default MDT in the BIDIR-PIM network.

Default MDT establishment in a PIM-SSM network

Figure 9 Default MDT establishment in a PIM-SSM network

 

As shown in Figure 9, PIM-SSM runs on the public network, and PE 1, PE 2, and PE 3 support VPN instance A. All PEs establish PIM neighboring relationships with each other and build independent SPTs to constitute a default MDT in the MVPN.

The process of establishing a default MDT is as follows:

1.     PE 1, PE 2, and PE 3 exchange MDT route information (including BGP interface address and the default group address) through BGP.

2.     Based on the received MDT route information, each PE sends a subscribe message toward each of the other PEs. Each device along the path on the public network creates the corresponding (S, G) entry.

As a result, an SPT rooted at PE 1 with PE 2 and PE 3 as leaves is established in the MVPN, and similar SPTs rooted at PE 2 and PE 3 are established.

3.     The three independent SPTs constitute the default MDT in the PIM-SSM network.

In PIM-SSM, the term "subscribe message" refers to a join message.

Default MDT-based data forwarding

After the default MDT is established, multicast data can be transmitted within the VPN in two scenarios:

·     Multicast source and receiver on the same PE side

Multicast protocol interaction and data forwarding are performed only within the VPN. The process is the same as the scenario where there is no multicast VPN.

Figure 10 Multicast source and receiver on the same PE side

 

As shown in Figure 10, CE 1 is connected to multicast source S, and CE 2 is connected to the receiver of multicast group G. After receiving a PIM join message from CE 2, PE 1 adds Interface A to the output interface list of the (*, G) entry and sets Interface B as the input interface by looking up the unicast route to the multicast source. Multicast data is sent from CE 1 to PE 1 and then forwarded to CE 2.

·     Multicast source and receiver on different PE sides

Multicast protocol interaction and data forwarding are performed across the public network.

Figure 11 Multicast source and receiver on different PE sides

 

As shown in Figure 11, PIM-SM is used in the VPN. CE 1 is the RP in the VPN and is connected to multicast source S, and CE 2 is connected to the receiver of multicast group G. CE 2 sends a PIM join message to the RP (CE 1). PE 2 encapsulates the message as a public network packet and sends it down the default MDT to all PEs within the MVPN. PE 3 decapsulates the message and discards it because neither PE 3 nor CE 3 is the RP. PE 1 decapsulates the message, and because CE 1 is the RP, it forwards the PIM join message to CE 1 and adds MTunnel0 to the output interface list of the (*, G) entry. After receiving the PIM join message, CE 1 forwards multicast data (S, G) to PE 1. PE 1 encapsulates the data as a P-Packet and sends it down the default MDT to all PEs within the MVPN. PE 3 decapsulates the P-Packet and discards it because it has no local receivers for G. PE 2 decapsulates the P-Packet and forwards the decapsulated packet to CE 2 because it has local receivers for G.

Data MDT

Overview

The biggest advantage of the default MDT is the stable multicast state on the public network, and its major drawback is low bandwidth efficiency. When multicast traffic is high, unnecessary multicast flows consume valuable bandwidth on branches of the default MDT without receivers.

MDT-based MVPN uses data MDTs to address this issue. When the multicast source-side PE detects that the multicast data forwarding rate reaches a threshold, a new data MDT is established on the public network. Only interested PEs join this new tree. After the data MDT is successfully established, the PE uses the data group instead of the default group as the destination address and forwards multicast data down the data MDT to PEs with multicast receivers.

 

 

NOTE:

The group address (data group) used by a data MDT is preconfigured. A default group uniquely determines a data group range for use in switchover to the data MDT. During the data MDT switchover process, the least used address from the data group range is selected as the data group.

 

MDT switchover messages

An MDT switchover message is a type of UDP packet that includes the private network multicast source address, the private network multicast group address, and the data group address. The PE that initiates an MDT switchover encapsulates an MDT switchover message in a P-Packet and sends the packet down the default MDT to all PEs in the same MVPN. The PE periodically sends MDT switchover messages until the multicast data forwarding rate falls below the threshold. PEs wishing to receive the multicast data send a PIM join message upon receiving the switchover message to join the data MDT. Subsequently, if these PEs do not receive new switchover messages within the data delay period, they delete the multicast forwarding entries created for the data MDT.

PEs without connected receivers do not join the data MDT upon receiving the switchover message but cache it to reduce the delay in joining the data MDT if receivers appear in the future.

Data delay timer and data hold-down timer

After sending the MDT switchover message, the PE starts the data delay timer. When the timer expires, the PE uses the data group address to encapsulate the VPN multicast data. The multicast data is then forwarded down the data MDT. This delay provides downstream PEs with time to join the data MDT, helping prevent data loss during the switchover.

Subsequently, if multicast traffic falls below the threshold, the PE does not immediately switch back to the default MDT. Instead, it only switches back if the multicast traffic remains below the threshold for the duration of the data hold-down timer. This mechanism helps prevent flapping during the switchover process.
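
The interplay of the threshold, the data delay timer, and the data hold-down timer can be summarized in a small state-machine sketch. Timer names, units, and the sampling approach below are assumptions for illustration; the sketch is not the Comware implementation.

# Illustrative sketch: default MDT <-> data MDT switchover decision on the source-side PE.
import time

class MdtSwitchover:
    def __init__(self, threshold_kbps, data_delay_s, hold_down_s):
        self.threshold = threshold_kbps
        self.data_delay = data_delay_s
        self.hold_down = hold_down_s
        self.state = "DEFAULT_MDT"
        self.delay_start = None          # when MDT switchover messages started
        self.below_since = None          # when traffic first fell below the threshold

    def on_rate_sample(self, rate_kbps, now=None):
        now = time.monotonic() if now is None else now
        if self.state == "DEFAULT_MDT":
            if rate_kbps >= self.threshold:
                self.state, self.delay_start = "DELAY", now    # start sending switchover messages
        elif self.state == "DELAY":
            if now - self.delay_start >= self.data_delay:
                self.state = "DATA_MDT"                        # encapsulate with the data group now
        elif self.state == "DATA_MDT":
            if rate_kbps < self.threshold:
                self.below_since = self.below_since or now
                if now - self.below_since >= self.hold_down:
                    self.state, self.below_since = "DEFAULT_MDT", None   # switch back
            else:
                self.below_since = None                        # traffic recovered; reset hold-down
        return self.state

sw = MdtSwitchover(threshold_kbps=100, data_delay_s=3, hold_down_s=60)   # example values only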

Data MDT switchover process

Figure 12 Data MDT switchover

 

As shown in Figure 12, the specific data MDT switchover process is as follows:

1.     The source-side PE (PE 1, for example) periodically examines the forwarding rate of the VPN multicast traffic. PE 1 selects a least-used address from the data group range and sends an MDT switchover message to all the other PEs down the default MDT if the switchover criteria are met.

2.     Each PE that receives this message examines whether it has receivers of that VPN multicast stream. If yes (for example, PE 2), it sends a join message to join the data MDT rooted at PE 1. If no (for example, PE 3), it caches the message and will join the data MDT when it has attached receivers.

3.     When the data delay timer expires, the multicast traffic is switched from the default MDT to the data MDT.

Inter-AS MDT-based MVPN

In an inter-AS VPN networking scenario, VPN sites are located in multiple ASs. These sites must be interconnected. Inter-AS VPN provides the following solutions:

·     VRF-to-VRF connections between ASBRs—This solution is also called inter-AS option A.

·     EBGP redistribution of labeled VPN-IPv4 routes between ASBRs—ASBRs advertise VPN-IPv4 routes to each other through MP-EBGP. This solution is also called inter-AS option B.

·     Multihop EBGP redistribution of labeled VPN-IPv4 routes between PE devices—PEs advertise VPN-IPv4 routes to each other through MP-EBGP. This solution is also called inter-AS option C.

Inter-AS option A MDT-based MVPN

As shown in Figure 13, two VPN instances are in AS 1 and AS 2. PE 3 and PE 4 are ASBRs for AS 1 and AS 2, respectively. PE 3 and PE 4 are interconnected through their respective VPN instance and treat each other as a CE.

Figure 13 Inter-AS option A MDT-based MVPN

 

To implement inter-AS option A MDT-based MVPN, you must create a separate MVPN in each AS. Multicast data is transmitted between the VPNs in different ASs through the MVPNs. Multicast packets of VPN instance 1 are delivered as follows:

1.     CE 1 forwards the multicast packet of VPN instance 1 to PE 1.

2.     PE 1 encapsulates the multicast packet into a public network packet and forwards it to PE 3 through the MTI interface in MVPN 1.

3.     PE 3 considers PE 4 as a CE of VPN instance 1, so PE 3 forwards the multicast packet to PE 4.

4.     PE 4 considers PE 3 as a CE of VPN instance 2, so it forwards the multicast packet to PE 2 through the MTI interface in MVPN 2 on the public network.

5.     PE 2 forwards the multicast packet to CE 2.

Inter-AS option B MDT-based MVPN

As shown in Figure 14, two VPN instances are in AS 1 and AS 2. PE 3 and PE 4 are ASBRs for AS 1 and AS 2, respectively. PE 3 and PE 4 are interconnected through MP-EBGP and advertise VPN-IPv4 routes to each other.

Figure 14 Inter-AS option B MDT-based MVPN

 

To implement inter-AS option B MDT-based MVPN, you need to establish only one MVPN for the two ASs. VPN multicast data is transmitted between different ASs on the public network within this MVPN. The implementation is as follows:

1.     RPF vector used to create the default MDT on the public network

 

 

NOTE:

The public network supports only PIM-SSM on an inter-AS option B MDT-based MVPN network.

 

In this network setup, the public network routes of AS 1 and AS 2 are isolated from each other. PEs in different ASs cannot find routes to each other, which can lead to RPF check failures. To address this issue, you must use an RPF vector to complete the RPF checks on the public network.

The process of establishing a default MDT is as follows:

a.     PE 1 originates a PIM join message to join the SPT rooted at PE 2. In the join message, the upstream neighbor address is the IP address of PE 2 (the BGP connector). The RPF vector attribute is the IP address of PE 3. PE 1 encapsulates the join message as a public network packet and forwards it through the MTI.

b.     P 1 determines that the RPF vector is not an IP address of its own. It looks up the routing table for a route to PE 3, and forwards the packet to PE 3.

c.     PE 3 removes the RPF vector because the RPF vector is its own IP address. It fails to find a BGP MDT route to PE 2, so it encapsulates a new RPF vector (IP address of PE 4) in the packet and forwards it to PE 4.

d.     PE 4 removes the RPF vector because the RPF vector is its own IP address. It has a local route to PE 2, so it forwards the packet to P 2, which is the next hop of the route to PE 2. P 2 sends the packet to PE 2.

e.     PE 2 receives the packet on the MTI and decapsulates the packet. The receiving interface is the RPF interface of the RPF route back to PE 1 for the join message, so the join message passes the RPF check. The SPT from PE 1 to PE 2 is established. While PE 1 joins the SPT rooted at PE 2, PE 2 also initiates a similar join process toward the SPT rooted at PE 1. The default MDT is established after the two SPTs are set up.

2.     BGP connector used to perform an RPF check

When the receiver-side PE sends a PIM join message through the MTI to the multicast source-side PE on the private network, it needs to find the private network route to the multicast source. The next hop of this route is used as the upstream neighbor address in the PIM join message, allowing the multicast source-side PE to perform RPF checks upon receiving the message. In non-inter-AS scenarios, the next hop corresponds to the source address of the MTI on the multicast source-side PE. However, in inter-AS option B scenarios, the BGP protocol changes the next hop to the ASBR address of the local AS instead of the address of the multicast source-side PE. This causes RPF checks to fail.

To prevent RPF check failures, BGP peers must carry the address of the multicast source-side PE (known as the BGP connector) when exchanging VPN-IPv4 routes. When the receiver-side PE sends a PIM join message through the MTI, it uses the BGP connector of the multicast source-side PE as the upstream address in the message.
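
A one-line sketch captures the upstream neighbor address selection described above (the route representation is an assumption for illustration).

# Illustrative sketch: choose the upstream neighbor address for a PIM join sent over the MTI.
def upstream_neighbor_address(vpn_ipv4_route):
    """vpn_ipv4_route: the VPN-IPv4 route to the multicast source received through MP-BGP."""
    # In inter-AS option B, the BGP next hop is rewritten to the local ASBR, so the join
    # must use the BGP connector (the source-side PE address) when it is present.
    return vpn_ipv4_route.get("bgp_connector") or vpn_ipv4_route["next_hop"]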

Inter-AS option C MDT-based MVPN

As shown in Figure 15, two VPN instances are in AS 1 and AS 2. PE 3 and PE 4 are ASBRs for AS 1 and AS 2, respectively. PE 3 and PE 4 are interconnected through MP-EBGP and treat each other as a P device. PEs in different ASs establish a multihop MP-EBGP session to advertise VPN-IPv4 routes to each other.

Figure 15 Inter-AS option C MDT-based MVPN

 

To implement inter-AS option C MDT-based MVPN, you need to establish only one MVPN for the two ASs. VPN multicast data is transmitted between different ASs on the public network within this MVPN. Multicast packets of VPN instance 1 are delivered as follows:

1.     CE 1 forwards the VPN instance multicast packet to PE 1.

2.     PE 1 encapsulates the multicast packet into a public network multicast packet and forwards it to PE 3 through the MTI interface on the public network.

3.     PE 3 and PE 4 are interconnected through MP-EBGP, so PE 3 forwards the public network multicast packet to PE 4 along the VPN IPv4 route.

4.     The public network multicast packet arrives at the MTI interface of PE 2 in AS 2. PE 2 decapsulates the public network multicast packet and forwards the VPN multicast packet to CE 2.

NG MVPN

NG MVPN is a next-generation solution for transmitting IP multicast data across BGP/MPLS L3VPN networks. NG MVPN uses multiprotocol BGP (MP-BGP) to transfer private network multicast routing information. It uses point-to-multipoint (P2MP) tunnels to transport private network multicast protocol traffic and multicast data traffic. This setup allows multicast data traffic from the private network to be transmitted over the public network to remote private sites.

NG MVPN supports the following modes: RSVP-TE and mLDP.

Basic concepts

·     MVPN—An MVPN logically defines the transmission boundary of the multicast traffic of a VPN over the public network. It also physically identifies all the PEs that support that VPN instance on the public network. Each MVPN is dedicated to serving a specific VPN. All private network multicast data that belongs to this VPN is transmitted within this MVPN. Different VPN instances correspond to different MVPNs.

·     MVPN instance—A virtual instance that provides multicast services for an MVPN on a PE. Different MVPN instances isolate services of different MVPNs.

·     Inclusive tunnel—Transmits all multicast packets (including multicast protocol packets and multicast data packets of all multicast groups) for an MVPN. Only one inclusive tunnel can be established between two PEs in the MVPN. A PE encapsulates multicast data packets and PIM bootstrap messages (BSMs) of an MVPN into public network multicast data packets and sends them over the public network through the inclusive tunnel.

·     Selective tunnel—Transmits multicast packets of one or more multicast groups for an MVPN. Multiple selective tunnels can be established between two PEs in the MVPN.

MP-BGP route extensions

BGP MVPN neighbor establishment

To support NG MVPN, MP-BGP introduces the BGP MVPN address family. This address family is used to negotiate and establish BGP MVPN neighbors and to transfer private network multicast routing information. For BGP IPv4 MVPN and BGP IPv6 MVPN, the Address Family Identifier (AFI) values are 1 and 2, respectively, and the Subsequent Address Family Identifier (SAFI) is 5 for both.

In an NG MVPN network, PEs can establish either IBGP or EBGP neighbor relationships:

·     IBGP neighbor relationship—To avoid fully meshed IBGP connections among PEs, deploy a route reflector (RR). Each PE then needs to establish an IBGP neighbor relationship with only the RR. The RR creates a client list after receiving BGP connections initiated by PEs and reflects routes received from one PE to all other PEs.

·     EBGP neighbor relationship—No RR is needed. BGP automatically sends MVPN routes received from an EBGP neighbor to other EBGP and IBGP neighbors.

BGP MVPN routes

In NG MVPN, MVPN routing information is transmitted in the Network Layer Reachability Information (NLRI) field of BGP update messages. The NLRI carrying MVPN routing information is referred to as MVPN NLRI.

Figure 16 shows the format of the MVPN NLRI.

Figure 16 MVPN NLRI format

 

The meaning of each field of the MVPN NLRI is as follows:

·     Route Type—MVPN route types. Seven MVPN route types are available. For more information, see Table 1.

·     Length—Length of the Route Type specific field.

·     Route Type specific—MVPN route information. The length of this field is variable, because different MVPN route types contain different information.

Table 1 MVPN route types

Type 1: Intra-AS I-PMSI A-D route—Used for autodiscovery of MVPN members within an AS. PEs use this type of route to establish inclusive tunnels.

Type 2: Inter-AS I-PMSI A-D route—Used for autodiscovery of MVPN members across ASs. ASBRs configured with MVPN initiate the autodiscovery.

Type 3: S-PMSI A-D route—The multicast source-side PE sends this type of route to receiver-side PEs for tunnel switchover when selective tunnel creation is enabled and the tunnel creation criterion is met.

Type 4: Leaf A-D route—A receiver-side PE that has attached receivers replies with a Leaf A-D route when it receives an S-PMSI A-D route from the multicast source-side PE. RSVP-TE creates selective tunnels between receiver-side PEs and the multicast source-side PE based on the neighbor information contained in Leaf A-D routes.

Type 5: Source Active A-D route—The multicast source-side PE sends this type of route to receiver-side PEs to advertise the location of a newly discovered multicast source.

Type 6: Shared Tree Join route—Used to transfer join messages of private network multicast members. When a receiver-side PE receives a (*, G) join request from the user side, it converts the (*, G) join message into a Shared Tree Join route. This route is then sent across the public network to the multicast source-side PE.

Type 7: Source Tree Join route—Used to transfer join messages of private network multicast members. When a receiver-side PE receives an (S, G) join request from the user side, it converts the (S, G) join message into a Source Tree Join route. This route is then sent across the public network to the multicast source-side PE.

 

 

NOTE:

Type-1 to type-5 routes are called MVPN A-D routes. They primarily enable MVPN member autodiscovery and assist MPLS in establishing P2MP tunnels. Type-6 and type-7 routes are called C-multicast routes (C indicating Customer, meaning multicast routes from a private network). They mainly initiate private network user joins and guide private network multicast data transmission. Currently, type-6 routes are not supported.
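
The MVPN NLRI layout and the route types in Table 1 can be illustrated with a short parsing sketch. It follows only the Route Type, Length, and Route Type specific fields described above and is not a complete BGP implementation.

# Illustrative sketch: split a byte string of MVPN NLRI entries into typed routes.
MVPN_ROUTE_TYPES = {
    1: "Intra-AS I-PMSI A-D route", 2: "Inter-AS I-PMSI A-D route", 3: "S-PMSI A-D route",
    4: "Leaf A-D route", 5: "Source Active A-D route",
    6: "Shared Tree Join route", 7: "Source Tree Join route",
}

def parse_mvpn_nlri(data):
    entries, offset = [], 0
    while offset < len(data):
        route_type = data[offset]                           # Route Type (1 octet)
        length = data[offset + 1]                           # Length of the type-specific field (1 octet)
        specific = data[offset + 2: offset + 2 + length]    # Route Type specific (variable)
        entries.append((MVPN_ROUTE_TYPES.get(route_type, "Unknown"), specific))
        offset += 2 + length
    return entries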

 

BGP MVPN route attributes

PMSI Tunnel attribute

The Provider Multicast Service Interface (PMSI) is a logical channel that carries private network multicast data traffic over a public network. At the multicast source-side PE, PMSI distributes specific multicast data traffic to other PEs. The receiver-side PE receives multicast data traffic belonging to the same MVPN based on the PMSI. In NG MVPN, public network tunnels implement PMSIs and are categorized into inclusive and selective tunnels.

The PMSI Tunnel attribute is primarily used for creating public network tunnels. It is currently included in the Intra-AS I-PMSI A-D route and S-PMSI A-D route, as shown in Figure 17.

Figure 17 PMSI Tunnel attribute format

 

The meaning of each field of the PMSI Tunnel attribute is as follows:

·     Flags—Contains flags. This field is meaningful only when type-3 routes (S-PMSI A-D route) carry it.

¡     If this field is set to 0, the receiver-side PE does not need to respond to an S-PMSI A-D route.

¡     If this field is set to 1, the receiver-side PE needs to respond with a Leaf A-D route (type-4 route).

·     Tunnel Type—Only RSVP-TE P2MP and mLDP P2MP tunnels are supported in the current software version.

·     MPLS Label—Used for VPN tunnel reuse. This field is not supported in the current software version.

·     Tunnel Identifier

¡     For an RSVP-TE P2MP tunnel, the tunnel identifier is in the form of <P2MP ID, Tunnel ID, Extended Tunnel ID>.

¡     For an mLDP P2MP tunnel, the tunnel identifier is in the form of <Root node address, Opaque value>.
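
For illustration, the following sketch packs the value part of a PMSI Tunnel attribute from the fields listed above. The tunnel type code points and the identifier layout used here are simplified assumptions, not an authoritative encoder.

# Illustrative sketch: build the value part of a PMSI Tunnel attribute.
import struct

TUNNEL_TYPE = {"RSVP-TE P2MP": 1, "mLDP P2MP": 2}           # code points assumed for this sketch

def build_pmsi_tunnel(flags, tunnel_type, mpls_label, tunnel_identifier):
    label_field = struct.pack("!I", mpls_label << 4)[1:]    # 3-octet MPLS Label field (label in the high 20 bits)
    return bytes([flags, TUNNEL_TYPE[tunnel_type]]) + label_field + tunnel_identifier

# Flags = 1 in an S-PMSI A-D route asks the receiver-side PE to answer with a Leaf A-D route.
attr = build_pmsi_tunnel(flags=1, tunnel_type="RSVP-TE P2MP", mpls_label=0,
                         tunnel_identifier=b"\x00" * 12)    # placeholder <P2MP ID, Tunnel ID, Extended Tunnel ID>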

Route target extended community attributes

Route target extended community attributes are used to control the advertisement and acceptance of BGP MVPN routing information.

The following types of route target attributes are available:

·     Export target attribute—A PE sets the export target attribute for MVPN A-D routes of the MVPN instance corresponding to a VPN instance before advertising them to other PEs.

·     Import target attribute—A PE checks the export target attribute of MVPN A-D routes received from other PEs. If the export target attribute matches the import target attribute of a VPN instance, the PE accepts the routes and records MVPN members in the VPN instance. If the export target attribute does not match the import target attribute of any VPN instance, the PE discards the routes.

MVPN-related extended community attributes carried in BGP VPN-IPv4 routes

In an NG MVPN network, BGP VPN-IPv4 routes must carry MVPN-related extended community attributes to control the advertisement and acceptance of C-multicast routes, enabling accurate multicast user joins or leaves. MVPN-related extended community attributes include:

·     Source AS Extended Community—Carries the local BGP AS number with the value as the AS number of the MVPN multicast source. The format is 32-bit AS number:0. This attribute is mainly used for inter-AS scenarios.

·     VRF Route Import Extended Community—Carries the local router ID of the local BGP instance and the VPN instance associated with the BGP VPN-IPv4 route. The format is 32-bit router ID:VPN instance index. This attribute is included in VPN-IPv4 routes advertised by the source-side PE to the receiver-side PE and in C-multicast routes sent by the receiver-side PE to the source-side PE. Upon receiving a Shared Tree Join route or Source Tree Join route, if the router ID matches its own, the source-side PE adds the route to the multicast forwarding table of the VPN instance. If the router ID is not that of the source-side PE, the source-side PE ignores the route.
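
A brief sketch shows the acceptance check described for the VRF Route Import Extended Community (the attribute representation is an assumption for illustration).

# Illustrative sketch: source-side PE handling of a received C-multicast route.
def accept_c_multicast_route(route, local_router_id):
    router_id, vpn_index = route["vrf_route_import"].split(":")   # "router ID:VPN instance index"
    if router_id != local_router_id:
        return None                       # not addressed to this PE; ignore the route
    return int(vpn_index)                 # add the route to this VPN instance's multicast forwarding table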

RSVP-TE-based MVPN

The basic idea behind RSVP-TE-based MVPN involves establishing IBGP neighbors between each pair of PEs and using MP-BGP to advertise MVPN route information. It sets up RSVP-TE P2MP tunnels on the public network. Private network multicast data is transmitted to remote PEs through either inclusive or selective RSVP-TE P2MP tunnels. Upon receiving the packets, the remote PE strips the label information to restore them as private network multicast packets.

RSVP-TE P2MP tunnel establishment

RSVP-TE P2MP tunnels, established by the RSVP-TE P2MP protocol, are point-to-multipoint tunnels for carrying private network multicast traffic. The RSVP-TE P2MP protocol is used to establish point-to-multipoint CR-LSPs (called P2MP LSPs). One or more RSVP-TE P2MP LSPs form a P2MP RSVP-TE tunnel.

A P2MP LSP has multiple egress nodes. The point-to-point LSP from the ingress node to each egress node is called a sub-LSP. A P2MP LSP contains multiple sub-LSPs.

RSVP-TE P2MP protocol extensions for RSVP

The RSVP P2MP protocol adds the following objects to Path and Resv messages:

·     P2MP SESSION object

Both Path and Resv messages can carry this object.

It uniquely identifies a P2MP TE tunnel. Figure 18 shows the P2MP SESSION object format.

Figure 18 P2MP SESSION object format

 

The meaning of each field in the P2MP SESSION object is as follows:

¡     P2MP ID—The ingress node assigns different P2MP IDs to different P2MP TE tunnels.

¡     MUST be zero—Reserved field that must be set to zero.

¡     Tunnel ID—ID of the tunnel.

¡     Extended Tunnel ID—Source address of the P2MP TE tunnel.

A P2MP LSP is uniquely identified by the P2MP SESSION object and the LSP ID field in the P2MP SENDER_TEMPLATE object.

·     S2L_SUB_LSP object

Both Path and Resv messages can carry this object.

It carries the destination address of a P2MP LSP. An S2L_SUB_LSP can carry only one destination address, and a Path message can carry only one S2L_SUB_LSP. A P2MP LSP supports multiple destination addresses and needs to use multiple Path messages.

·     P2MP_SENDER_TEMPLATE object

Only Path messages can carry this object.

The RSVP P2MP protocol adds the Sub-Group Originator ID and Sub-Group ID fields to identify different Path messages of the same P2MP LSP. A node uses its own LSR ID as the Sub-Group Originator ID and allocates different Sub-Group IDs for different Path messages.

·     P2MP_FILTER_SPEC object

Only Resv messages can carry this object.

The RSVP P2MP protocol adds the Sub-Group Originator ID and Sub-Group ID fields to identify different Resv messages of the same P2MP LSP. A node uses its own LSR ID as the Sub-Group Originator ID and allocates different Sub-Group IDs for different Resv messages.

RSVP-TE P2MP establishment process

Figure 19 P2MP LSP establishment process

 

Figure 19 shows the process of establishing a P2MP LSP by using RSVP-TE.

1.     Egress LSR 1 is selected as the destination node.

2.     The ingress LSR carries the address of egress LSR 1 in the S2L_SUB_LSP object to establish a CR-LSP.

¡     The ingress LSR generates a P2MP Path message that carries the LABEL_REQUEST object, and then forwards the message along the path calculated by CSPF towards the egress LSR. Each LSR that receives the Path message generates a path state based on the message.

¡     After the egress LSR receives the Path message, it generates a Resv message carrying the reservation information and the LABEL object. It forwards the Resv message to the ingress LSR along the reverse direction of the path that the Path message traveled. The Resv message advertises labels, reserves resources, and creates a reservation state on each LSR it traverses.

3.     Egress LSR 2 is selected as the destination node, and step 2 is repeated.

4.     After the ingress LSR creates sub-LSPs for all egress LSRs, an RSVP-TE P2MP LSP is created successfully.

 

 

NOTE:

Before you create a P2MP LSP, you must obtain the destination addresses through MVPN routes. For more information, see "Inclusive tunnel establishment" and "Selective tunnel switchover."

 

RSVP-TE P2MP packet forwarding

As shown in Figure 20, CE 1, CE 2, and CE 3 belong to the same multicast group. The multicast source sends multicast data to only root node PE 1. The network devices replicate and forward the multicast data based on the group member distribution, accurately sending it to CE 1, CE 2, and CE 3. When PE 1 receives a multicast packet, it first looks up the multicast routing table and determines to forward the packet through an RSVP P2MP LSP. Then, PE 1 adds labels to the packet, and performs MPLS forwarding according to the multicast label forwarding table.

Figure 20 RSVP P2MP forwarding

 

Inclusive tunnel establishment

As shown in Figure 21, after BGP and MVPN are deployed on both the multicast source-side PE and the receiver-side PE, the process for establishing an inclusive tunnel is as follows:

1.     The multicast source-side PE sends a type-1 route (Intra-AS I-PMSI A-D route) to the receiver-side PE. This route carries the following attributes:

¡     Route Target—Used to control the advertisement and acceptance of routes.

¡     PMSI Tunnel attribute—Used to transmit tunnel information. The value of the Tunnel Type field is RSVP-TE P2MP. The value of the Tunnel Identifier field is the tunnel identifier assigned by the multicast source-side PE for the RSVP-TE P2MP tunnel.

2.     The receiver-side PE sends a type-1 route (Intra-AS I-PMSI A-D route) to the multicast source-side PE. This route does not carry the PMSI Tunnel attribute and carries only the Route Target attribute to control the advertisement and acceptance of routes.

3.     Upon receiving the type-1 route from the source-side PE, the receiver-side PE identifies whether the Route Target attribute in the route matches the Import Target configured for the local VPN instance. If yes, the receiver-side PE accepts the route and records the source-side PE as the remote PE of the tunnel.

4.     Upon receiving the type-1 route from the receiver-side PE, the source-side PE identifies whether the Route Target attribute in the route matches the Import Target configured for the local VPN instance. If yes, the source-side PE accepts the route, sends a Path message to the receiver-side PE, and records the receiver-side PE as the destination of the tunnel.

5.     Upon receiving the Resv message from the receiver-side PE, the source-side PE creates a sub-LSP with itself as the source and the receiver-side PE as the destination.

6.     When multiple receiver-side PEs exist, the source-side PE can create multiple sub-LSPs, forming an RSVP-TE P2MP LSP with the multicast source-side PE as the source and all receiver-side PEs as destinations. For more information about establishing an RSVP-TE P2MP LSP, see "RSVP-TE P2MP tunnel establishment."

Figure 21 Inclusive tunnel establishment process

 

 

At this point, the establishment of RSVP-TE P2MP inclusive tunnels between the multicast source-side PE and each receiver-side PE in the MVPN is complete, as shown in Figure 22.

Figure 22 Inclusive tunnels established on the public network


 

Selective tunnel switchover

Selective tunnel switchover process

Since an inclusive tunnel carries all multicast traffic for a single MVPN and includes all PEs belonging to that MVPN, each receiver-side PE receives the multicast data flow regardless of whether it has downstream receivers. This situation leads to bandwidth waste and increases the processing load on PE devices.

If a multicast packet meets the tunnel switchover criterion, it is switched over from the inclusive tunnel to a selective tunnel. Different multicast traffic flows can be transmitted through different tunnels.

The process of switching from an inclusive tunnel to a selective tunnel is as follows:

1.     When the multicast source-side PE receives a private network multicast packet that meets the tunnel switchover criterion, it sends an S-PMSI A-D route with the PMSI Tunnel attribute to the receiver-side PE and requests the receiver-side PE to respond with join information.

2.     Upon receiving the S-PMSI A-D route from the multicast source-side PE, the receiver-side PE records the route. If the receiver-side PE has downstream receivers, it responds with a Leaf A-D route to the multicast source-side PE. Additionally, it joins the corresponding tunnel based on the PMSI Tunnel attribute in the S-PMSI A-D route. At this stage, because the multicast source-side PE does not know the destination PE information for the tunnel, the tunnel has not been established. If the receiver-side PE does not have downstream receivers, it does not respond with a Leaf A-D route.

3.     Upon receiving the Leaf A-D route, the source-side PE creates a sub-LSP with itself as the source and the receiver-side PE as the destination.

4.     When multiple receiver-side PEs exist, the source-side PE can create multiple sub-LSPs, forming an RSVP-TE P2MP LSP with the multicast source-side PE as the source and all receiver-side PEs as destinations. For more information about establishing an RSVP-TE P2MP LSP, see "RSVP-TE P2MP tunnel establishment."

Figure 23 Selective tunnel switchover process

 

At this point, the establishment of RSVP-TE P2MP selective tunnels between the multicast source-side PE and each receiver-side PE in the MVPN is complete, as shown in Figure 24.

Figure 24 Selective tunnel establishment and tunnel switchover

 

Delayed switching to a selective tunnel

When multicast traffic meets the tunnel switchover criterion, the PE connected to the multicast source delays the traffic switchover to a selective tunnel. This behavior allows time for downstream PEs to respond with Leaf A-D routes and for tunnel establishment, avoiding data loss.

mLDP-based MVPN

Overview

mLDP-based MVPN is also a solution for NG MVPN. Its main difference from the RSVP-TE-based MVPN is how the public network tunnel is established.

·     RSVP-TE-based MVPN—Starting from the ingress PE, the upstream devices use RSVP to establish RSVP-TE P2MP tunnels with downstream devices. In this mode, the ingress PE needs to know the IP address of the egress PE.

·     mLDP-based MVPN—Starting from the egress PE, the downstream devices use LDP to establish mLDP P2MP tunnels with their upstream devices. In this mode, the egress PE needs to know the IP address of the ingress PE.

mLDP P2MP tunnel establishment

mLDP P2MP node roles

As shown in Figure 25, mLDP P2MP establishes a tree-shaped tunnel from the ingress node (PE 1) to multiple egress nodes (PE 3, PE 4, and PE 5). The multicast traffic enters this tunnel at the ingress node. When receivers in the network need to receive multicast packets, the multicast source needs to send only one packet to the ingress node. The packet is then duplicated at branch nodes (PE 2 and P 3), increasing bandwidth efficiency.

Figure 25 mLDP P2MP network

 

Node roles in an mLDP P2MP tunnel include:

·     Root node—The root node is the ingress node of the mLDP P2MP network. Multicast packets are encapsulated with MPLS labels on this node. It transmits multicast source and root node information to leaf nodes via BGP-advertised MVPN routes (type-1 and type-3 routes).

·     Transit node—A transit node performs label switching.

·     Branch node—A branch node is a type of transit node. It replicates MPLS packets based on the number of leaf nodes and then performs label switching.

·     Leaf node—A leaf node is a node connected to a device with multicast receivers. It is the destination node of an mLDP P2MP tunnel.

·     Bud node—It acts as both a leaf node and a branch node in the mLDP P2MP network.

LDP protocol extensions

·     LDP capability negotiation message

When establishing an LDP session, the local and remote peers must negotiate the LDP capabilities. An mLDP P2MP tunnel can be established only if both the local and remote peers support the mLDP P2MP feature. Figure 26 shows the format of an LDP capability negotiation message.

Figure 26 Format of an LDP capability negotiation message

 

The meaning of each field in the LDP capability negotiation is as follows:

¡     U—Set to 1, which indicates that the peer can ignore this TLV if it does not support negotiation.

¡     F—Set to 0, which indicates that negotiation packets do not need to be forwarded.

¡     TLV Code Point—LDP capability type. The value 0x0508 indicates the P2MP capability.

¡     S—Set to 1, indicating P2MP capabilities.

¡     Reserved—Reserved field.

·     P2MP FEC Element

mLDP extends the FEC TLV in label mapping messages for establishing mLDP P2MP tunnels. The extended FEC TLV, called the P2MP FEC Element, has the format shown in Figure 27 and contains the following fields (an encoding sketch follows Figure 27):

¡     Type—Type of the tree-shaped LSP established by mLDP. Only P2MP is supported in the current software version.

¡     Address family—Address family of the root node. IPv4 and IPv6 are supported.

¡     Address length—Address length of the root node.

¡     Root node address—Address of the root node.

¡     Opaque length—Length of the Opaque value.

¡     Opaque value—Used to distinguish different P2MP LSPs at the root node and carry information about the root and leaf nodes.

Figure 27 P2MP FEC Element format
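
As an illustration of the field order above, the following sketch encodes a P2MP FEC Element for an IPv4 root node. The type value and packing are assumptions made for this sketch, not a reference encoder.

# Illustrative sketch: pack a P2MP FEC Element with an IPv4 root node address.
import socket, struct

P2MP_FEC_TYPE = 0x06                      # type value assumed for a P2MP tree in this sketch

def build_p2mp_fec_element(root_ipv4, opaque_value):
    root = socket.inet_aton(root_ipv4)                         # address family 1 = IPv4
    header = struct.pack("!BHB", P2MP_FEC_TYPE, 1, len(root))  # Type, Address family, Address length
    return header + root + struct.pack("!H", len(opaque_value)) + opaque_value

fec = build_p2mp_fec_element("1.1.1.1", b"\x00\x01")   # the opaque value distinguishes P2MP LSPs at the root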


 

mLDP P2MP tunnel establishment

As shown in Figure 28, after successful negotiation of mLDP capabilities between local and remote peers, the process of establishing an mLDP P2MP tunnel is as follows:

1.     The root node transmits multicast source and root node information to leaf nodes via BGP-advertised MVPN routes (type-1 and type-3 routes). Leaf nodes and transit nodes select the optimal route to the root node and use the next hop of that route as their upstream node.

2.     A leaf node sends label mapping messages upstream and generates corresponding forwarding entries.

3.     Upon receiving a label mapping message from downstream, a transit node identifies whether it has already sent a label mapping message upstream. If no, it determines the upstream node from its routing table, sends a label mapping message upstream, and generates corresponding forwarding entries.

4.     Upon receiving a label mapping message from downstream, the root node generates corresponding forwarding entries.

Figure 28 mLDP P2MP tunnel establishment process
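
The establishment steps above can be sketched as a leaf-initiated upstream propagation of label mappings, using the node names from Figure 25. The topology representation and label allocation below are assumptions for illustration only.

# Illustrative sketch: propagate P2MP label mappings from the leaves toward the root.
def join_p2mp(leaf, root, upstream_of, state):
    """upstream_of maps each node to the next hop of its best route toward the root.
    state accumulates which nodes have sent mappings and how each node replicates traffic."""
    node = leaf
    while node != root and node not in state["sent"]:
        state["sent"].add(node)                                # a node sends a mapping upstream only once
        upstream = upstream_of[node]
        label = state["labels"].setdefault(node, 16 + len(state["labels"]))
        # The upstream node will replicate traffic to this downstream node with this label.
        state["replication"].setdefault(upstream, []).append((node, label))
        node = upstream                                        # continue toward the root
    return state

state = {"sent": set(), "labels": {}, "replication": {}}
topology = {"PE 3": "PE 2", "PE 4": "P 3", "PE 5": "P 3", "PE 2": "PE 1", "P 3": "PE 2"}
for leaf in ("PE 3", "PE 4", "PE 5"):
    join_p2mp(leaf, "PE 1", topology, state)
# state["replication"]["PE 2"] now has two downstream branches (PE 3 and P 3): PE 2 is a branch node.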


 

mLDP P2MP packet forwarding

As shown in Figure 29, CE 1, CE 2, and CE 3 belong to the same multicast group. The multicast source sends multicast data to only root node PE 1. The network devices replicate and forward the multicast data based on the group member distribution, accurately sending it to CE 1, CE 2, and CE 3. When PE 1 receives a multicast packet, it first looks up the multicast routing table and determines to forward the packet through an mLDP P2MP tunnel. Then, PE 1 adds labels to the packet, and performs MPLS forwarding according to the multicast label forwarding table.

Figure 29 mLDP P2MP packet forwarding

 
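A minimal sketch of the replication behavior described above, assuming a per-node multicast label forwarding table keyed by the incoming label; the interface names and label values are hypothetical.

# Per-node multicast label forwarding table: incoming label -> replication branches.
MLFT = {
    1000: [("to_P2", 2001), ("to_PE3", 2002)],   # a branch node replicates here
}

def forward_mpls_multicast(in_label, payload):
    for out_interface, out_label in MLFT.get(in_label, []):
        # One copy per downstream branch, sent with that branch's outgoing label.
        print(f"send on {out_interface} with label {out_label}: {payload!r}")

forward_mpls_multicast(1000, b"VPN multicast data")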

Inclusive tunnel establishment

As shown in Figure 30, after BGP and MVPN are deployed on both the multicast source-side PE and the receiver-side PE, the process for establishing an inclusive tunnel is as follows:

1.     The multicast source-side PE sends a type-1 route (Intra-AS I-PMSI A-D route) to the receiver-side PE. This route carries the following attributes:

¡     Route Target—Used to control the advertisement and acceptance of routes.

¡     PMSI Tunnel attribute—Used to transmit tunnel information. The value of the Tunnel Type field is mLDP P2MP. The value of the Tunnel Identifier field is the tunnel identifier assigned by the multicast source-side PE for the mLDP P2MP tunnel.

2.     Upon receiving the type-1 route from the multicast source-side PE, the receiver-side PE identifies whether the Route Target attribute matches the Import Target configured for the local VPN instance. If yes, the receiver-side PE accepts the route and sends a label mapping message to the multicast source-side PE.

3.     The receiver-side PE creates an LSP with the multicast source-side PE as the root and itself as a leaf.

4.     When multiple receiver-side PEs exist, they can create multiple LSPs, forming an mLDP P2MP LSP with the multicast source-side PE as the root and all receiver-side PEs as leaves. For more information about establishing an mLDP P2MP LSP, see "mLDP P2MP tunnel establishment."

Figure 30 Inclusive tunnel establishment process

 

At this point, the establishment of mLDP P2MP inclusive tunnels between the multicast source-side PE and each receiver-side PE in the MVPN is complete, as shown in Figure 31.

Figure 31 Inclusive tunnels established on the public network


 
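The following Python sketch is an illustration, not the PE implementation. It shows how a receiver-side PE might process the type-1 route in step 2: the route is accepted only if its Route Target matches a local Import Target, after which the PE joins the mLDP P2MP tunnel identified by the PMSI Tunnel attribute. All names and values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class IntraAsIPmsiAdRoute:                 # simplified BGP MVPN type-1 route
    originator: str                        # multicast source-side PE
    route_targets: set                     # Route Target attribute
    tunnel_type: str                       # PMSI Tunnel attribute: tunnel type
    tunnel_id: str                         # PMSI Tunnel attribute: tunnel identifier

@dataclass
class VpnInstance:
    name: str
    import_targets: set
    joined_tunnels: list = field(default_factory=list)

def on_intra_as_i_pmsi_ad_route(vpn, route):
    # Accept the route only if a Route Target matches a local Import Target.
    if not (route.route_targets & vpn.import_targets):
        return
    # Join the mLDP P2MP LSP identified by the PMSI Tunnel attribute by
    # sending a label mapping message toward the root (not modeled here).
    vpn.joined_tunnels.append((route.tunnel_type, route.tunnel_id))
    print(f"{vpn.name}: joined {route.tunnel_type} tunnel {route.tunnel_id} "
          f"rooted at {route.originator}")

vpn_a = VpnInstance("vpn-a", import_targets={"100:1"})
on_intra_as_i_pmsi_ad_route(
    vpn_a, IntraAsIPmsiAdRoute("PE 1", {"100:1"}, "mLDP P2MP", "p2mp-lsp-1"))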

Selective tunnel switchover

The process of switching from an inclusive tunnel to a selective tunnel is as follows:

1.     When the multicast source-side PE receives a private network multicast packet that meets the tunnel switchover criterion, it sends an S-PMSI A-D route with the PMSI Tunnel attribute to the receiver-side PE and does not request the receiver-side PE to respond with join information.

2.     Upon receiving the S-PMSI A-D route from the multicast source-side PE, the receiver-side PE records the route. If the receiver-side PE has downstream receivers, it creates an LSP with the multicast source-side PE as the root and itself as a leaf based on the PMSI Tunnel attribute in the route. If the receiver-side PE does not have downstream receivers, it does not create an LSP.

3.     When multiple receiver-side PEs have downstream receivers, they can create multiple LSPs, forming an mLDP P2MP LSP with the multicast source-side PE as the root and all receiver-side PEs as leaves. For more information about establishing an mLDP P2MP LSP, see "mLDP P2MP tunnel establishment."

Figure 32 Selective tunnel switchover process

 

At this point, the establishment of mLDP P2MP selective tunnels between the multicast source-side PE and each receiver-side PE in the MVPN is complete, as shown in Figure 33.

Figure 33 Selective tunnel establishment and tunnel switchover

 
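The following sketch illustrates the switchover logic above, assuming the switchover criterion is a configurable traffic-rate threshold (the actual criterion depends on the deployment). The receiver-side logic mirrors step 2: the route is always recorded, but an LSP is created only when downstream receivers exist. All names and values are hypothetical.

SWITCHOVER_RATE_KBPS = 100     # hypothetical threshold for the switchover criterion

def source_pe_check(stream_rate_kbps, receiver_pes):
    if stream_rate_kbps < SWITCHOVER_RATE_KBPS:
        return                                  # keep using the inclusive tunnel
    s_pmsi_ad_route = {"tunnel_type": "mLDP P2MP", "tunnel_id": "p2mp-lsp-sel-1"}
    for pe in receiver_pes:                     # step 1: advertise the S-PMSI A-D route
        receiver_pe_on_s_pmsi(pe, s_pmsi_ad_route)

def receiver_pe_on_s_pmsi(pe, route):
    pe.setdefault("s_pmsi_routes", []).append(route)   # step 2: always record the route
    if pe["has_downstream_receivers"]:
        # Graft onto the selective LSP by sending an mLDP label mapping message.
        print(f"{pe['name']}: join selective tunnel {route['tunnel_id']}")
    else:
        print(f"{pe['name']}: route recorded, no LSP created")

receiver_pes = [{"name": "PE 2", "has_downstream_receivers": True},
                {"name": "PE 3", "has_downstream_receivers": False}]
source_pe_check(stream_rate_kbps=500, receiver_pes=receiver_pes)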

Inter-AS mLDP-based MVPN

Inter-AS option A mLDP-based MVPN

As shown in Figure 34, two VPN instances are in AS 100 and AS 200. PE 3 and PE 4 are ASBRs for AS 100 and AS 200, respectively. PE 3 and PE 4 are interconnected through their respective VPN instances and treat each other as a CE.

Figure 34 Inter-AS option A mLDP-based MVPN

 

To implement inter-AS option A mLDP-based MVPN, you must create a separate MVPN in each AS. Multicast data is transmitted between the VPNs in different ASs through the MVPNs.

Multicast packets of VPN instance 1 are delivered as follows:

1.     CE 1 forwards a multicast packet of VPN instance 1 to PE 1.

2.     PE 1 encapsulates the multicast packet into an MPLS packet and forwards it to PE 3 through mLDP tunnel 1.

3.     PE 3 considers PE 4 as a CE of MVPN 1, so PE 3 decapsulates the MPLS packet and forwards the multicast packet to PE 4.

4.     PE 4 considers PE 2 as a CE of MVPN 2, so PE 4 encapsulates the multicast packet into an MPLS packet and then forwards the packet to PE 2 through mLDP tunnel 2.

5.     PE 2 decapsulates the MPLS packet and then forwards the multicast packet to CE 2.

In inter-AS option A, PEs in different ASs cannot advertise Source Active A-D routes to each other. Therefore, you must configure MSDP or Anycast-RP between the RPs so that multicast source information can be advertised across ASs.
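The following toy trace (illustration only) walks a packet through the option A data path described in the steps above: each AS runs its own MVPN tunnel, and the ASBRs hand the traffic over as plain VPN multicast, as they would to a CE. The tunnel names are hypothetical.

def mpls_encap(packet, tunnel):
    return f"MPLS[{tunnel}]({packet})"

def mpls_decap(packet):
    return packet[packet.index("(") + 1:-1]

pkt = "VPN 1 multicast data from CE 1"
pkt = mpls_encap(pkt, "mLDP-tunnel-1")   # PE 1: enter MVPN 1
pkt = mpls_decap(pkt)                    # PE 3: leave MVPN 1, hand over to PE 4 as to a CE
pkt = mpls_encap(pkt, "mLDP-tunnel-2")   # PE 4: enter MVPN 2
pkt = mpls_decap(pkt)                    # PE 2: leave MVPN 2, forward to CE 2
print(pkt)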

Inter-AS option B mLDP-based MVPN

As shown in Figure 35, two VPN instances are in AS 100 and AS 200. PE 3 and PE 4 are ASBRs for AS 100 and AS 200, respectively. PE 3 and PE 4 are interconnected through MP-EBGP. PE 3 and PE 4 advertise VPN-IPv4 routes to each other through MP-EBGP.

Figure 35 Inter-AS option B mLDP-based MVPN

 

To implement inter-AS option B mLDP-based MVPN, you need to establish only one MVPN for the two ASs. VPN multicast data is transmitted across the ASs through this MVPN.

Multicast packets of the VPN instance are transmitted as follows:

1.     Upon receiving a multicast packet of a multicast group in the VPN instance, CE 1 advertises the multicast source information for this group to PE 1 as follows:

¡     If the RP is PE 1, CE 1 directly sends a register message to PE 1 to advertise the source information.

¡     If the RP is CE 1 or CE 2, CE 1 advertises the source information to PE 1 through MSDP or Anycast-RP, depending on the actual configuration.

2.     PE 1 sends a Source Active A-D route to PE 3, PE 4, and PE 2 through BGP.

3.     If PE 2 has an attached receiver, it sends a C-multicast route to join the multicast group. The C-multicast route is advertised to PE 4, PE 3, and then PE 1 through BGP.

4.     Upon receiving the C-multicast route, PE 1 encapsulates the multicast packet into an MPLS packet and then sends the packet to PE 2 through the mLDP tunnel.

5.     PE 2 decapsulates the MPLS packet and then sends the multicast packet to CE 2.
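The following sketch (illustration only) replays the option B control-plane sequence above. The hop-by-hop BGP propagation path follows the figure; everything else is a placeholder.

BGP_PATH = ["PE 1", "PE 3", "PE 4", "PE 2"]     # one MVPN spanning both ASs

def advertise(route, path):
    for sender, receiver in zip(path, path[1:]):
        print(f"{sender} -> {receiver}: {route}")

# Step 2: the source-side PE originates a Source Active A-D route.
advertise("Source Active A-D route (S, G)", BGP_PATH)

# Step 3: a receiver-side PE with an attached receiver answers with a
# C-multicast route, advertised hop by hop in the reverse direction.
advertise("C-multicast route join (S, G)", list(reversed(BGP_PATH)))

# Step 4: PE 1 then forwards (S, G) traffic to PE 2 over the mLDP tunnel
# (data plane, not modeled here).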

Inter-AS option C mLDP-based MVPN

As shown in Figure 36, two VPN instances are in AS 100 and AS 200. PE 3 and PE 4 are ASBRs for AS 100 and AS 200, respectively. PE 3 and PE 4 are interconnected through MP-EBGP and treat each other as a P device. PEs in different ASs establish a multihop MP-EBGP session to advertise VPN-IPv4 routes to each other.

Figure 36 Inter-AS option C mLDP-based MVPN


 

To implement inter-AS option C mLDP-based MVPN, you need to establish only one MVPN for the two ASs. VPN multicast data is transmitted between different ASs through this MVPN.

Multicast packets of VPN instance 1 are delivered as follows:

1.     Upon receiving a multicast packet of a multicast group in the VPN instance, CE 1 advertises the multicast source information for this group to PE 1 as follows:

¡     If the RP is PE 1, CE 1 directly sends a register message to PE 1 to advertise the source information.

¡     If the RP is CE 1 or CE 2, CE 1 advertises the source information to PE 1 through MSDP or Anycast-RP, depending on the actual configuration.

2.     PE 1 sends a Source Active A-D route to PE 2 through BGP.

3.     If PE 2 has an attached receiver, it sends a C-multicast route to join the multicast group. The C-multicast route is advertised to PE 1 through BGP.

4.     PE 1 encapsulates the multicast packet into an MPLS packet and then sends the packet to PE 2 through the mLDP tunnel.

5.     PE 2 decapsulates the MPLS packet and then sends the multicast packet to CE 2.

Application scenarios

Intra-AS MVPN

Deploying intra-AS MVPN services in a BGP/MPLS VPN network isolates the multicast services of different VPNs within the same AS. As shown in Figure 37, multicast source S1 and receivers R1, R2, and R3 belong to VPN a, and multicast source S2 and receiver R4 belong to VPN b. The public network belongs to one AS. The intra-AS MVPN solution ensures that the multicast traffic of VPN a and VPN b is isolated. R1, R2, and R3 can receive multicast traffic only from S1, and R4 can receive multicast traffic only from S2.

Figure 37 Network diagram

 

Inter-AS MVPN

Deploying inter-AS MVPN services in a BGP/MPLS VPN network isolates the multicast services of different VPNs across ASs. As shown in Figure 38, multicast source S1 and receiver R2 belong to VPN a, and multicast source S2 and receiver R1 belong to VPN b. The public network spans AS 100 and AS 200. The inter-AS MVPN solution ensures that the multicast traffic of VPN a and VPN b is isolated. R2 can receive multicast traffic only from S1, and R1 can receive multicast traffic only from S2.

Figure 38 Network diagram

 

References

·     RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs)

·     RFC 6513, Multicast in MPLS/BGP IP VPNs

·     RFC 6514, BGP Encodings and Procedures for Multicast in MPLS/BGP IP VPNs
