06-IP Multicast Configuration Guide

H3C S12500-X & S12500X-AF Switch Series Configuration Guides, Release 113x-6W101

Contents

Multicast overview
Introduction to multicast
Information transmission techniques
Multicast features
Common notations in multicast
Multicast benefits and applications
Multicast models
Multicast architecture
Multicast addresses
Multicast protocols
Multicast packet forwarding mechanism
Configuring IGMP snooping
Overview
Basic IGMP snooping concepts
How IGMP snooping works
Protocols and standards
IGMP snooping configuration task list
Configuring basic IGMP snooping features
Enabling IGMP snooping
Specifying an IGMP snooping version
Setting the maximum number of IGMP snooping forwarding entries
Setting the IGMP last member query interval
Configuring IGMP snooping port features
Setting aging timers for dynamic ports
Configuring static ports
Configuring a port as a simulated member host
Enabling fast-leave processing
Configuring the IGMP snooping querier
Configuration prerequisites
Enabling the IGMP snooping querier
Configuring parameters for IGMP general queries and responses
Configuring parameters for IGMP messages
Configuration prerequisites
Configuring source IP addresses for IGMP messages
Configuring IGMP snooping policies
Configuring a multicast group policy
Configuring multicast source port filtering
Enabling dropping unknown multicast data
Enabling IGMP report suppression
Setting the maximum number of multicast groups on a port
Enabling multicast group
Displaying and maintaining IGMP snooping
IGMP snooping configuration examples
Group policy configuration example
Static port configuration example
IGMP snooping querier configuration example
Troubleshooting IGMP snooping
Layer 2 multicast forwarding cannot function
Multicast group policy does not work
Configuring multicast routing and forwarding
Overview
RPF check mechanism
Static multicast routes
Configuration task list
Enabling IP multicast routing
Configuring multicast routing and forwarding
Configuring static multicast routes
Specifying the longest prefix match principle
Configuring multicast load splitting
Configuring a multicast forwarding boundary
Configuring static multicast MAC address entries
Displaying and maintaining multicast routing and forwarding
Configuration examples
Changing an RPF route
Creating an RPF route
Troubleshooting multicast routing and forwarding
Static multicast route failure
Configuring IGMP
Overview
IGMPv1 overview
IGMPv2 enhancements
IGMPv3 enhancements
IGMP support for VPNs
Protocols and standards
IGMP configuration task list
Configuring basic IGMP features
Enabling IGMP
Specifying an IGMP version
Configuring an interface as a static member interface
Configuring a multicast group policy
Adjusting IGMP performance
Enabling fast-leave processing
Displaying and maintaining IGMP
IGMP configuration examples
Network requirements
Configuration procedure
Verifying the configuration
Troubleshooting IGMP
No membership information on the receiver-side router
Inconsistent membership information on the routers on the same subnet
Configuring PIM
Overview
PIM-DM overview
PIM-SM overview
Administrative scoping overview
PIM-SSM overview
PIM support for VPNs
Protocols and standards
Configuring PIM-DM
PIM-DM configuration task list
Configuration prerequisites
Enabling PIM-DM
Enabling the state refresh feature
Configuring state refresh parameters
Configuring PIM-DM graft retry timer
Configuring PIM-SM
PIM-SM configuration task list
Configuration prerequisites
Enabling PIM-SM
Configuring an RP
Configuring a BSR
Configuring multicast source registration
Configuring the switchover to SPT
Configuring PIM-SSM
PIM-SSM configuration task list
Configuration prerequisites
Enabling PIM-SM
Configuring the SSM group range
Configuring common PIM features
Configuration task list
Configuration prerequisites
Configuring a multicast source policy
Configuring a PIM hello policy
Configuring PIM hello message options
Configuring common PIM timers
Setting the maximum size of each join or prune message
Enabling BFD for PIM
Enabling PIM passive mode
Displaying and maintaining PIM
PIM configuration examples
PIM-DM configuration example
PIM-SM non-scoped zone configuration example
PIM-SM admin-scoped zone configuration example
PIM-SSM configuration example
Troubleshooting PIM
A multicast distribution tree cannot be built correctly
Multicast data is abnormally terminated on an intermediate router
An RP cannot join an SPT in PIM-SM
An RPT cannot be built or multicast source registration fails in PIM-SM
Configuring MSDP
Overview
How MSDP works
MSDP support for VPNs
Protocols and standards
MSDP configuration task list
Configuring basic MSDP functions
Configuration prerequisites
Enabling MSDP
Creating an MSDP peering connection
Configuring a static RPF peer
Configuring an MSDP peering connection
Configuration prerequisites
Configuring the description for an MSDP peer
Configuring an MSDP mesh group
Controlling MSDP peering connections
Configuring SA message-related parameters
Configuration prerequisites
Configuring SA message contents
Configuring SA request messages
Configuring SA message policies
Configuring the SA cache mechanism
Displaying and maintaining MSDP
MSDP configuration examples
PIM-SM inter-domain multicast configuration
Anycast RP configuration
SA message filtering configuration
Troubleshooting MSDP
MSDP peers stay in disabled state
No SA entries exist in the router's SA message cache
No exchange of locally registered (S, G) entries between RPs
Index

 


Multicast overview

Introduction to multicast

As a technique that coexists with unicast and broadcast, the multicast technique effectively addresses the issue of point-to-multipoint data transmission. By enabling high-efficiency point-to-multipoint data transmission over a network, multicast greatly saves network bandwidth and reduces network load.

By using multicast technology, a network operator can easily provide bandwidth-critical and time-critical information services. These services include live webcasting, Web TV, distance learning, telemedicine, Web radio, and real-time video conferencing.

Information transmission techniques

The information transmission techniques include unicast, broadcast, and multicast.

Unicast

In unicast transmission, the information source must send a separate copy of information to each host that needs the information.

Figure 1 Unicast transmission

 

In Figure 1, Host B, Host D, and Host E need the information. A separate transmission channel must be established from the information source to each of these hosts.

In unicast transmission, the traffic transmitted over the network is proportional to the number of hosts that need the information. If a large number of hosts need the information, the information source must send a separate copy of the same information to each of these hosts. Sending many copies can place a tremendous pressure on the information source and the network bandwidth.

Unicast is not suitable for batch transmission of information.
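The scaling claim above can be made concrete with a back-of-envelope calculation. The following sketch compares source-side bandwidth for unicast and multicast delivery; the 4 Mbit/s stream rate is an arbitrary assumption for illustration:

```python
def source_bandwidth_mbps(model: str, receivers: int, stream_mbps: int = 4) -> int:
    """Source-side bandwidth for one stream delivered to the given number of
    receivers: unicast sends one copy per receiver, multicast one copy total."""
    return stream_mbps * receivers if model == "unicast" else stream_mbps

for n in (1, 10, 100):
    print(n, source_bandwidth_mbps("unicast", n), source_bandwidth_mbps("multicast", n))
```

With 100 receivers, the unicast source must emit 400 Mbit/s, while the multicast source still emits only 4 Mbit/s.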

Broadcast

In broadcast transmission, the information source sends information to all hosts on the subnet, even if some hosts do not need the information.

Figure 2 Broadcast transmission

 

In Figure 2, only Host B, Host D, and Host E need the information. If the information is broadcast to the subnet, Host A and Host C also receive it. In addition to information security issues, broadcasting to hosts that do not need the information also causes traffic flooding on the same subnet.

Broadcast is not as efficient as multicast for sending data to groups of hosts.

Multicast

Multicast provides point-to-multipoint data transmissions with the minimum network consumption. When some hosts on the network need multicast information, the information sender, or multicast source, sends only one copy of the information. Multicast distribution trees are built through multicast routing protocols, and the packets are replicated only on nodes where the trees branch.

Figure 3 Multicast transmission

 

The multicast source sends only one copy of the information to a multicast group. Host B, Host D, and Host E, which are information receivers, must join the multicast group. The routers on the network duplicate and forward the information based on the distribution of the group members. Finally, the information is correctly delivered to Host B, Host D, and Host E.

To summarize, multicast has the following advantages:

·          Advantages over unicast—A single copy of multicast data flows as far as possible from the source before it is replicated and distributed. Therefore, an increase in the number of receiver hosts does not noticeably increase the load on the source or the usage of network resources.

·          Advantages over broadcast—Multicast data is sent only to the receivers that need it. This saves network bandwidth and enhances network security. In addition, multicast data is not confined to the same subnet.

Multicast features

·          A multicast group is a multicast receiver set identified by an IP multicast address. Hosts must join a multicast group to become members of the multicast group before they receive the multicast data addressed to that multicast group. Typically, a multicast source does not need to join a multicast group.

·          A multicast source is an information sender. It can send data to multiple multicast groups at the same time. Multiple multicast sources can send data to the same multicast group at the same time.

·          The group memberships are dynamic. Hosts can join or leave multicast groups at any time. Multicast groups are not subject to geographic restrictions.

·          Multicast routers or Layer 3 multicast devices are routers or Layer 3 switches that support Layer 3 multicast. They provide multicast routing and manage multicast group memberships on stub subnets with attached group members. A multicast router itself can be a multicast group member.

For a better understanding of the multicast concept, you can compare multicast transmission to the transmission of TV programs.

Table 1 Comparing TV program transmission and multicast transmission

TV program transmission

Multicast transmission

A TV station transmits a TV program through a channel.

A multicast source sends multicast data to a multicast group.

A user tunes the TV set to the channel.

A receiver joins the multicast group.

The user starts to watch the TV program transmitted by the TV station on the channel.

The receiver starts to receive the multicast data addressed to the multicast group from the multicast source.

The user turns off the TV set or tunes to another channel.

The receiver leaves the multicast group or joins another group.

 

Common notations in multicast

The following notations are commonly used in multicast transmission:

·          (*, G)—Rendezvous point tree (RPT), or a multicast packet that any multicast source sends to multicast group G. The asterisk (*) represents any multicast source, and "G" represents a specific multicast group.

·          (S, G)—Shortest path tree (SPT), or a multicast packet that multicast source "S" sends to multicast group "G." "S" represents a specific multicast source, and "G" represents a specific multicast group.

For more information about the concepts RPT and SPT, see "Configuring PIM."

Multicast benefits and applications

Multicast benefits

·          Enhanced efficiency—Reduces the processor load of information source servers and network devices.

·          Optimal performance—Reduces redundant traffic.

·          Distributed application—Enables point-to-multipoint applications with minimal network resource consumption.

Multicast applications

·          Multimedia and streaming applications, such as Web TV, Web radio, and real-time video/audio conferencing

·          Communication for training and cooperative operations, such as distance learning and telemedicine

·          Data warehouse and financial applications (stock quotes)

·          Any other point-to-multipoint application for data distribution

Multicast models

Based on how the receivers treat the multicast sources, the multicast models include any-source multicast (ASM), source-filtered multicast (SFM), and source-specific multicast (SSM).

ASM model

In the ASM model, any sender can send information to a multicast group. Receivers can join a multicast group and get multicast information addressed to that multicast group from any multicast sources. In this model, receivers do not know the positions of the multicast sources in advance.

SFM model

The SFM model is derived from the ASM model. To a sender, the two models appear to have the same multicast membership architecture.

The SFM model functionally extends the ASM model. The upper-layer software checks the source address of received multicast packets and permits or denies multicast traffic from specific sources. The receivers can receive multicast data from only some of the multicast sources. To a receiver, not all multicast sources are valid; they are filtered.

SSM model

The SSM model provides a transmission service that enables users to specify at the client side the multicast sources in which they are interested.

In the SSM model, receivers have already determined the locations of the multicast sources. This is the main difference between the SSM model and the ASM model. In addition, the SSM model uses a different multicast address range than the ASM/SFM model. Dedicated multicast forwarding paths are established between receivers and the specified multicast sources.

Multicast architecture

IP multicast addresses the following issues:

·          Where should the multicast source transmit information to? (Multicast addressing.)

·          What receivers exist on the network? (Host registration.)

·          Where is the multicast source that will provide data to the receivers? (Multicast source discovery.)

·          How is the information transmitted to the receivers? (Multicast routing.)

IP multicast is an end-to-end service. The multicast architecture involves the following parts:

·          Addressing mechanism—A multicast source sends information to a group of receivers through a multicast address.

·          Host registration—Receiver hosts can join and leave multicast groups dynamically. This mechanism is the basis for management of group memberships.

·          Multicast routing—A multicast distribution tree (a forwarding path tree for multicast data on the network) is constructed for delivering multicast data from a multicast source to receivers.

·          Multicast applications—A software system that supports multicast applications, such as video conferencing, must be installed on multicast sources and receiver hosts. The TCP/IP stack must support reception and transmission of multicast data.

Multicast addresses

IP multicast addresses

·          IPv4 multicast addresses:

IANA assigns the Class D address block (224.0.0.0 to 239.255.255.255) to IPv4 multicast.

Table 2 Class D IP address blocks and description

Address block

Description

224.0.0.0 to 224.0.0.255

Reserved permanent group addresses. The IP address 224.0.0.0 is reserved. Other IP addresses can be used by routing protocols and for topology searching, protocol maintenance, and so on. Table 3 lists common permanent group addresses. A packet destined for an address in this block will not be forwarded beyond the local subnet regardless of the TTL value in the IP header.

224.0.1.0 to 238.255.255.255

Globally scoped group addresses. This block includes the following types of designated group addresses:

·         232.0.0.0/8—SSM group addresses.

·         233.0.0.0/8—Glop group addresses.

239.0.0.0 to 239.255.255.255

Administratively scoped multicast addresses. These addresses are considered locally unique rather than globally unique. You can reuse them in domains administered by different organizations without causing conflicts. For more information, see RFC 2365.
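The address blocks in Table 2 can be checked programmatically. The following is a minimal sketch using Python's standard ipaddress module; it encodes only the ranges listed above:

```python
import ipaddress

def classify_ipv4_mcast(addr: str) -> str:
    """Classify an IPv4 multicast address into the blocks of Table 2."""
    a = ipaddress.IPv4Address(addr)
    if not a.is_multicast:
        return "not multicast"
    if a in ipaddress.IPv4Network("224.0.0.0/24"):
        return "reserved permanent (never forwarded beyond the local subnet)"
    if a in ipaddress.IPv4Network("232.0.0.0/8"):
        return "SSM"
    if a in ipaddress.IPv4Network("233.0.0.0/8"):
        return "glop"
    if a in ipaddress.IPv4Network("239.0.0.0/8"):
        return "administratively scoped"
    return "globally scoped"

print(classify_ipv4_mcast("232.1.1.1"))  # SSM
print(classify_ipv4_mcast("239.1.1.1"))  # administratively scoped
```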

 

 

NOTE:

"Glop" is a mechanism for assigning multicast addresses between different ASs. By filling an AS number into the middle two bytes of 233.0.0.0, you get a /24 block of 256 multicast addresses for that AS. For more information, see RFC 2770.
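The Glop mapping in the note above is a simple byte split. A minimal sketch, assuming a 16-bit AS number as described in RFC 2770:

```python
def glop_block(as_number: int) -> str:
    """Return the Glop /24 multicast block for a 16-bit AS number:
    the AS number fills the middle two bytes of 233.0.0.0/8."""
    if not 0 <= as_number <= 0xFFFF:
        raise ValueError("Glop addressing covers 16-bit AS numbers only")
    high, low = as_number >> 8, as_number & 0xFF  # middle two bytes
    return f"233.{high}.{low}.0/24"

print(glop_block(5662))  # AS 5662 (0x161E) -> 233.22.30.0/24
```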

 

Table 3 Common permanent multicast group addresses

Address

Description

224.0.0.1

All systems on this subnet, including hosts and routers.

224.0.0.2

All multicast routers on this subnet.

224.0.0.3

Unassigned.

224.0.0.4

DVMRP routers.

224.0.0.5

OSPF routers.

224.0.0.6

OSPF designated routers and backup designated routers.

224.0.0.7

Shared Tree (ST) routers.

224.0.0.8

ST hosts.

224.0.0.9

RIPv2 routers.

224.0.0.11

Mobile agents.

224.0.0.12

DHCP server/relay agent.

224.0.0.13

All Protocol Independent Multicast (PIM) routers.

224.0.0.14

RSVP encapsulation.

224.0.0.15

All Core-Based Tree (CBT) routers.

224.0.0.16

Designated SBM.

224.0.0.17

All SBMs.

224.0.0.18

VRRP.

 

·          IPv6 multicast addresses:

Figure 4 IPv6 multicast format

 

The following describes the fields of an IPv6 multicast address:

-  0xFF—The most significant eight bits are 11111111.

-  Flags—The Flags field contains four bits.

Figure 5 Flags field format

 

Table 4 Flags field description

Bit

Description

0

Reserved, set to 0.

R

·         When set to 0, this address is an IPv6 multicast address without an embedded RP address.

·         When set to 1, this address is an IPv6 multicast address with an embedded RP address. (The P and T bits must also be set to 1.)

P

·         When set to 0, this address is an IPv6 multicast address not based on a unicast prefix.

·         When set to 1, this address is an IPv6 multicast address based on a unicast prefix. (The T bit must also be set to 1.)

T

·         When set to 0, this address is an IPv6 multicast address permanently-assigned by IANA.

·         When set to 1, this address is a transient, or dynamically assigned IPv6 multicast address.

 

-  Scope—The Scope field contains four bits, which represent the scope of the IPv6 internetwork for which the multicast traffic is intended.

Table 5 Values of the Scope field

Value

Meaning

0, F

Reserved.

1

Interface-local scope.

2

Link-local scope.

3

Subnet-local scope.

4

Admin-local scope.

5

Site-local scope.

6, 7, 9 through D

Unassigned.

8

Organization-local scope.

E

Global scope.

 

-  Group ID—The Group ID field contains 112 bits. It uniquely identifies an IPv6 multicast group in the scope that the Scope field defines.
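The Flags and Scope fields described above sit in the second byte of the address, so they can be decoded directly. A minimal sketch using only Python's standard ipaddress module:

```python
import ipaddress

SCOPES = {0x1: "interface-local", 0x2: "link-local", 0x3: "subnet-local",
          0x4: "admin-local", 0x5: "site-local", 0x8: "organization-local",
          0xE: "global"}

def decode_ipv6_mcast(addr: str) -> dict:
    """Decode the Flags (T, P, R bits) and Scope fields of an IPv6
    multicast address, per the field layout above."""
    b = ipaddress.IPv6Address(addr).packed
    assert b[0] == 0xFF, "not an IPv6 multicast address"
    flags, scope = b[1] >> 4, b[1] & 0x0F
    return {
        "transient": bool(flags & 0x1),             # T bit
        "unicast_prefix_based": bool(flags & 0x2),  # P bit
        "embedded_rp": bool(flags & 0x4),           # R bit
        "scope": SCOPES.get(scope, "unassigned/reserved"),
    }

print(decode_ipv6_mcast("ff02::1"))  # permanently assigned, link-local scope
```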

Ethernet multicast MAC addresses

·          IPv4 multicast MAC addresses:

As defined by IANA, the most significant 24 bits of an IPv4 multicast MAC address are 0x01005E. Bit 25 is 0, and the other 23 bits are the least significant 23 bits of a multicast IPv4 address.

Figure 6 IPv4-to-MAC address mapping

 

The most significant four bits of an IPv4 multicast address are 1110. In an IPv4-to-MAC address mapping, five bits of the IPv4 multicast address are lost, so 32 IPv4 multicast addresses map to the same IPv4 multicast MAC address. A device might therefore receive unwanted multicast data at Layer 2, which must be filtered out by the upper layer.
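The 23-bit mapping above can be sketched in a few lines. Note how two different group addresses produce the same Ethernet frame address:

```python
def ipv4_mcast_to_mac(addr: str) -> str:
    """Map an IPv4 multicast address to its Ethernet MAC address:
    0x01005E, a zero bit, then the low 23 bits of the IP address."""
    octets = [int(o) for o in addr.split(".")]
    assert 224 <= octets[0] <= 239, "not an IPv4 multicast address"
    low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
    return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

# 32 different group addresses share one MAC address:
print(ipv4_mcast_to_mac("224.0.1.1"))    # 01:00:5e:00:01:01
print(ipv4_mcast_to_mac("239.128.1.1"))  # 01:00:5e:00:01:01 (same frame at Layer 2)
```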

·          IPv6 multicast MAC addresses:

As defined by IANA, the most significant 16 bits of an IPv6 multicast MAC address are the address prefix 0x3333. The least significant 32 bits are mapped from the least significant 32 bits of an IPv6 multicast address. As with IPv4-to-MAC address mapping, the mapping is not one-to-one, so duplicate IPv6-to-MAC address mappings also occur.

Figure 7 IPv6-to-MAC address mapping
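The IPv6 mapping is analogous to the IPv4 case, taking the low 32 bits under the 0x3333 prefix. A minimal sketch using Python's standard ipaddress module:

```python
import ipaddress

def ipv6_mcast_to_mac(addr: str) -> str:
    """Map an IPv6 multicast address to its Ethernet MAC address:
    the 33:33 prefix followed by the low 32 bits of the IPv6 address."""
    packed = ipaddress.IPv6Address(addr).packed
    assert packed[0] == 0xFF, "not an IPv6 multicast address"
    return "33:33:" + ":".join("%02x" % b for b in packed[-4:])

print(ipv6_mcast_to_mac("ff02::1:ff28:9c5a"))  # 33:33:ff:28:9c:5a
```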

 

Multicast protocols

Multicast protocols include the following categories:

·          Layer 3 and Layer 2 multicast protocols:

-  Layer 3 multicast refers to IP multicast working at the network layer.

Layer 3 multicast protocols—IGMP, MLD, PIM, IPv6 PIM, MSDP, MBGP, and IPv6 MBGP.

-  Layer 2 multicast refers to IP multicast working at the data link layer.

Layer 2 multicast protocols—IGMP snooping, MLD snooping, PIM snooping, IPv6 PIM snooping, multicast VLAN, and IPv6 multicast VLAN.

·          IPv4 and IPv6 multicast protocols:

-  For IPv4 networks—IGMP snooping, PIM snooping, multicast VLAN, IGMP, PIM, MSDP, and MBGP.

-  For IPv6 networks—MLD snooping, IPv6 PIM snooping, IPv6 multicast VLAN, MLD, IPv6 PIM, and IPv6 MBGP.

This section provides only general descriptions about applications and functions of the Layer 2 and Layer 3 multicast protocols in a network. For more information about these protocols, see the related chapters.

 

 

NOTE:

The switches support IGMP snooping, IGMP, and PIM.

 

Layer 3 multicast protocols

Layer 3 multicast protocols include multicast group management protocols and multicast routing protocols.

Figure 8 Positions of Layer 3 multicast protocols

 

·          Multicast group management protocols:

Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) protocol are multicast group management protocols. Typically, they run between hosts and Layer 3 multicast devices that directly connect to the hosts to establish and maintain multicast group memberships.

·          Multicast routing protocols:

A multicast routing protocol runs on Layer 3 multicast devices to establish and maintain multicast routes and correctly and efficiently forward multicast packets. Multicast routes constitute loop-free data transmission paths (also known as multicast distribution trees) from a data source to multiple receivers.

In the ASM model, multicast routes include intra-domain routes and inter-domain routes.

-  An intra-domain multicast routing protocol discovers multicast sources and builds multicast distribution trees within an AS to deliver multicast data to receivers. Among a variety of mature intra-domain multicast routing protocols, PIM is most widely used. Based on the forwarding mechanism, PIM has dense mode (often referred to as "PIM-DM") and sparse mode (often referred to as "PIM-SM").

-  An inter-domain multicast routing protocol is used for delivering multicast information between two ASs. So far, mature solutions include Multicast Source Discovery Protocol (MSDP) and MBGP. MSDP propagates multicast source information among different ASs. MBGP is an extension of the MP-BGP for exchanging multicast routing information among different ASs.

For the SSM model, multicast routes are not divided into intra-domain routes and inter-domain routes. Because receivers know the position of the multicast source, channels established through PIM-SM are sufficient for the transport of multicast information.

Layer 2 multicast protocols

Layer 2 multicast protocols include IGMP snooping, MLD snooping, PIM snooping, IPv6 PIM snooping, multicast VLAN, and IPv6 multicast VLAN.

Figure 9 Positions of Layer 2 multicast protocols

 

·          IGMP snooping and MLD snooping:

IGMP snooping and MLD snooping are multicast constraining mechanisms that run on Layer 2 devices. They manage and control multicast groups by monitoring and analyzing IGMP or MLD messages exchanged between the hosts and Layer 3 multicast devices. This effectively controls the flooding of multicast data in Layer 2 networks.

·          PIM snooping and IPv6 PIM snooping:

PIM snooping and IPv6 PIM snooping run on Layer 2 devices. They work with IGMP snooping or MLD snooping to analyze received PIM messages. Then, they add the ports that are interested in specific multicast data to a PIM snooping routing entry or IPv6 PIM snooping routing entry. In this way, multicast data can be forwarded to only the ports that are interested in the data.

·          Multicast VLAN and IPv6 multicast VLAN:

Multicast VLAN or IPv6 multicast VLAN runs on a Layer 2 device in a multicast network where multicast receivers for the same group exist in different VLANs. With these protocols, the Layer 3 multicast device sends only one copy of multicast to the multicast VLAN or IPv6 multicast VLAN on the Layer 2 device. This method avoids waste of network bandwidth and extra burden on the Layer 3 device.

Multicast packet forwarding mechanism

In a multicast model, receiver hosts of a multicast group are usually located at different areas on the network. They are identified by the same multicast group address. To deliver multicast packets to these receivers, a multicast source encapsulates the multicast data in an IP packet with the multicast group address as the destination address. Multicast routers on the forwarding paths forward multicast packets that an incoming interface receives through multiple outgoing interfaces. Compared to a unicast model, a multicast model is more complex in the following aspects:

·          To ensure multicast packet transmission on the network, different routing tables are used to guide multicast forwarding. These routing tables include unicast routing tables, routing tables for multicast (for example, the MBGP routing table), and static multicast routing tables.

·          To process the same multicast information from different peers received on different interfaces, the multicast device performs an RPF check on each multicast packet. The RPF check result determines whether the packet will be forwarded or discarded. The RPF check mechanism is the basis for most multicast routing protocols to implement multicast forwarding.

For more information about the RPF mechanism, see "Configuring multicast routing and forwarding."
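The RPF check described above can be sketched as a single comparison; the route lookup and interface names below are hypothetical:

```python
def rpf_check(source: str, arrival_interface: str, route_lookup) -> bool:
    """Accept a multicast packet only if it arrived on the interface that
    unicast routing would use to send traffic back toward the source."""
    expected = route_lookup(source)       # RPF interface toward the source
    return arrival_interface == expected  # mismatch -> discard (possible loop)

# Hypothetical routing state: traffic toward 10.1.1.1 goes out Vlan10.
routes = {"10.1.1.1": "Vlan10"}
print(rpf_check("10.1.1.1", "Vlan10", routes.get))  # True  (forward)
print(rpf_check("10.1.1.1", "Vlan20", routes.get))  # False (discard)
```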

 


Configuring IGMP snooping

Overview

IGMP snooping runs on a Layer 2 switch as a multicast constraining mechanism to improve multicast forwarding efficiency. It creates Layer 2 multicast forwarding entries from IGMP packets that are exchanged between the hosts and the router.

As shown in Figure 10, when IGMP snooping is not enabled, the Layer 2 switch floods multicast packets to all hosts. When IGMP snooping is enabled, the Layer 2 switch forwards multicast packets of known multicast groups to only the receivers.

Figure 10 Multicast packet transmission without and with IGMP snooping

 

Basic IGMP snooping concepts

IGMP snooping related ports

As shown in Figure 11, IGMP snooping runs on Switch A and Switch B, and Host A and Host C are receivers in a multicast group.

Figure 11 IGMP snooping related ports

 

The following describes the ports involved in IGMP snooping:

·          Router port—Layer 3 multicast device-side port. Layer 3 multicast devices include DRs and IGMP queriers. In Figure 11, FortyGigE 1/0/1 of Switch A and FortyGigE 1/0/1 of Switch B are the router ports. A switch records all its router ports in a router port list.

Do not confuse the "router port" in IGMP snooping with the "routed interface" commonly known as the "Layer 3 interface." The router port in IGMP snooping is a Layer 2 interface.

·          Member port—Multicast receiver-side port. In Figure 11, FortyGigE 1/0/2 and FortyGigE 1/0/3 of Switch A and FortyGigE 1/0/2 of Switch B are the member ports. A switch records all its member ports in the IGMP snooping forwarding table.

Unless otherwise specified, router ports and member ports in this document include both static and dynamic router ports and member ports.

 

 

NOTE:

When IGMP snooping is enabled, all ports that receive PIM hello messages or IGMP general queries with a source address other than 0.0.0.0 are considered dynamic router ports. For more information about PIM hello messages, see "Configuring PIM."

 

Aging timers for dynamic ports in IGMP snooping

The following are aging timers for dynamic ports in IGMP snooping:

·          Dynamic router port aging timer—The switch starts this timer for a port that receives an IGMP general query with a source address other than 0.0.0.0 or a PIM hello message. If the port does not receive either of these messages before the timer expires, the switch removes the port from its router port list.

·          Dynamic member port aging timer—The switch starts this timer for a port that receives an IGMP report. If the port does not receive reports before the timer expires, the switch removes the port from the IGMP snooping forwarding entries.

 

 

NOTE:

In IGMP snooping, only dynamic ports age out. Static ports never age out.

 

How IGMP snooping works

The ports in this section are dynamic ports. For information about how to configure and remove static ports, see "Configuring static ports."

IGMP message types include general query, IGMP report, and leave message. An IGMP snooping-enabled switch processes each message type differently.

General query

To check for the existence of multicast group members, the IGMP querier periodically sends IGMP general queries to all hosts and routers on the local subnet. All these hosts and routers are identified by the address 224.0.0.1.

After receiving an IGMP general query, the switch forwards the query to all ports in the VLAN except the port that received the query. The switch also performs one of the following actions:

·          If the receiving port is a dynamic router port in the router port list, the switch restarts the aging timer for the port.

·          If the receiving port does not exist in the router port list, the switch adds the port to the router port list. It also starts an aging timer for the port.
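The general query handling above can be sketched as follows. The port names are hypothetical, and the 260-second aging time is an assumed default, not a value taken from this guide:

```python
def on_general_query(in_port: str, vlan_ports: set, router_ports: dict,
                     aging: int = 260) -> list:
    """Flood the query to all other ports in the VLAN, and add or refresh
    the receiving port as a dynamic router port (timer restarted either way)."""
    router_ports[in_port] = aging          # add port, or restart its aging timer
    return sorted(vlan_ports - {in_port})  # ports the query is forwarded to

rp = {}  # dynamic router port list: port -> remaining aging time
out = on_general_query("FGE1/0/1", {"FGE1/0/1", "FGE1/0/2", "FGE1/0/3"}, rp)
print(out)               # ['FGE1/0/2', 'FGE1/0/3']
print("FGE1/0/1" in rp)  # True
```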

IGMP report

A host sends an IGMP report to the IGMP querier for the following purposes:

·          Responds to queries if the host is a multicast group member.

·          Applies for a multicast group membership.

After receiving an IGMP report from the host, the switch forwards it through all the router ports in the VLAN. The switch also resolves the address of the reported multicast group, and looks up the forwarding table for a matching entry.

·          If no match is found, the switch creates a forwarding entry for the group with the receiving port as an outgoing interface. It also marks the receiving port as a dynamic member port and starts an aging timer for the port.

·          If a match is found but the matching forwarding entry does not contain the receiving port, the switch adds the receiving port to the outgoing interface list. It also marks the receiving port as a dynamic member port and starts an aging timer for the port.

·          If a match is found and the matching forwarding entry contains the receiving port, the switch restarts the aging timer for the port.
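The three report-handling cases above reduce to one table update. A minimal sketch with hypothetical port names; the 260-second aging time is an assumed value:

```python
from collections import defaultdict

forwarding = defaultdict(set)   # group address -> outgoing member ports
timers = {}                     # (group, port) -> remaining aging time

def on_igmp_report(group: str, port: str, aging_time: int = 260) -> None:
    """Create the entry or add the port if absent; in every case,
    (re)start the dynamic member port aging timer."""
    forwarding[group].add(port)
    timers[(group, port)] = aging_time

on_igmp_report("224.1.1.1", "FGE1/0/2")
on_igmp_report("224.1.1.1", "FGE1/0/3")
on_igmp_report("224.1.1.1", "FGE1/0/2")  # existing port: timer restarts only
print(sorted(forwarding["224.1.1.1"]))   # ['FGE1/0/2', 'FGE1/0/3']
```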

In an application with a group policy configured on an IGMP snooping-enabled switch, when a user requests a multicast program, the user's host initiates an IGMP report. After receiving this report, the switch resolves the multicast group address in the report and performs ACL filtering on the report. If the report passes ACL filtering, the switch creates an IGMP snooping forwarding entry for the multicast group with the receiving port as an outgoing interface. Otherwise, the switch drops the report. In that case, the multicast data for the multicast group is not sent to the port, and the user cannot retrieve the program.

A switch does not forward an IGMP report through a non-router port because of the IGMP report suppression mechanism. For more information about the IGMP report suppression mechanism, see "Configuring IGMP."

Leave message

An IGMPv1 host silently leaves a multicast group. The switch is not notified of the leaving and cannot immediately update the status of the port that connects to the receiver host. The switch does not remove the port from the outgoing interface list in the associated forwarding entry until the aging time for the group expires. For a static member port, this mechanism does not take effect.

An IGMPv2 or IGMPv3 host sends an IGMP leave message to the multicast router when it leaves a multicast group.

When the switch receives an IGMP leave message on a dynamic member port, the switch first examines whether a forwarding entry matches the group address in the message.

·          If no match is found, the switch discards the IGMP leave message.

·          If a match is found but the receiving port is not an outgoing interface in the forwarding entry, the switch discards the IGMP leave message.

·          If a match is found and the receiving port is not the only outgoing interface in the forwarding entry, the switch performs the following actions:

  Discards the IGMP leave message.

  Sends an IGMP group-specific query to identify whether the group has active receivers attached to the receiving port.

  Sets the aging timer for the receiving port to twice the IGMP last member query interval.

·          If a match is found and the receiving port is the only outgoing interface in the forwarding entry, the switch performs the following actions:

  Forwards the IGMP leave message to all router ports in the VLAN.

  Sends an IGMP group-specific query to identify whether the group has active receivers attached to the receiving port.

  Sets the aging timer for the receiving port to twice the IGMP last member query interval.

After receiving the IGMP leave message on a port, the IGMP querier resolves the multicast group address in the message. Then the IGMP querier sends an IGMP group-specific query to the multicast group through the receiving port.

After receiving the IGMP group-specific query, the switch forwards it through all its router ports in the VLAN and all member ports of the multicast group. Then, it waits for the responding IGMP reports from the directly connected hosts. For the dynamic member port that received the leave message, the switch performs one of the following actions:

·          If the port receives an IGMP report before the aging timer expires, the switch resets the aging timer for the port.

·          If the port does not receive any IGMP reports when the aging timer expires, the switch removes the port from the forwarding entry for the multicast group.

Protocols and standards

RFC 4541, Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches

IGMP snooping configuration task list

Tasks at a glance

Configuring basic IGMP snooping features:

·         (Required.) Enabling IGMP snooping

·         (Optional.) Specifying an IGMP snooping version

·         (Optional.) Setting the maximum number of IGMP snooping forwarding entries

·         (Optional.) Setting the IGMP last member query interval

Configuring IGMP snooping port features:

·         (Optional.) Setting aging timers for dynamic ports

·         (Optional.) Configuring static ports

·         (Optional.) Configuring a port as a simulated member host

·         (Optional.) Enabling fast-leave processing

Configuring the IGMP snooping querier:

·         (Optional.) Enabling the IGMP snooping querier

·         (Optional.) Configuring parameters for IGMP general queries and responses

Configuring parameters for IGMP messages:

·         (Optional.) Configuring source IP addresses for IGMP messages

Configuring IGMP snooping policies:

·         (Optional.) Configuring a multicast group policy

·         (Optional.) Configuring multicast source port filtering

·         (Optional.) Enabling dropping unknown multicast data

·         (Optional.) Enabling IGMP report suppression

·         (Optional.) Setting the maximum number of multicast groups on a port

·         (Optional.) Enabling multicast group replacement

 

The IGMP snooping configurations made on Layer 2 aggregate interfaces do not interfere with the configurations made on member ports. In addition, the configurations made on Layer 2 aggregate interfaces do not take part in aggregation calculations. The configuration made on a member port of the aggregate group takes effect after the port leaves the aggregate group.

Configuring basic IGMP snooping features

Before you configure basic IGMP snooping features, complete the following tasks:

·          Configure the corresponding VLANs.

·          Determine the IGMP snooping version.

·          Determine the maximum response time for IGMP general queries.

·          Determine the IGMP last member query interval.

Enabling IGMP snooping

When you enable IGMP snooping, follow these guidelines:

·          You must enable IGMP snooping globally before you enable it for a VLAN.

·          IGMP snooping for a VLAN takes effect only on the member ports in that VLAN.

·          You can enable IGMP snooping for the specified VLANs in IGMP-snooping view or for a VLAN in VLAN view. For a VLAN, the configuration in VLAN view has the same priority as the configuration in IGMP-snooping view, and the most recent configuration takes effect.

Enabling IGMP snooping in IGMP-snooping view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IGMP snooping globally and enter IGMP-snooping view.

igmp-snooping

By default, IGMP snooping is disabled.

3.       Enable IGMP snooping for the specified VLANs.

enable vlan vlan-list

By default, IGMP snooping is disabled for the specified VLANs.

 

Enabling IGMP snooping in VLAN view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IGMP snooping globally and enter IGMP-snooping view.

igmp-snooping

By default, IGMP snooping is disabled.

3.       Return to system view.

quit

N/A

4.       Enter VLAN view.

vlan vlan-id

N/A

5.       Enable IGMP snooping for the VLAN.

igmp-snooping enable

By default, IGMP snooping is disabled for the VLAN.
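
The two procedures above can be sketched together as follows. The device name, and VLAN 100, are illustrative:

```
# Approach 1: Enable IGMP snooping globally, then for VLAN 100 in IGMP-snooping view.
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] enable vlan 100
[Sysname-igmp-snooping] quit

# Approach 2: Enable IGMP snooping for VLAN 100 in VLAN view.
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping enable
```

For a VLAN, whichever of the two configurations is made most recently takes effect.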

 

Specifying an IGMP snooping version

Different IGMP snooping versions can process different versions of IGMP messages.

·          IGMPv2 snooping can process IGMPv1 and IGMPv2 messages, but it floods IGMPv3 messages in the VLAN instead of processing them.

·          IGMPv3 snooping can process IGMPv1, IGMPv2, and IGMPv3 messages.

If you change IGMPv3 snooping to IGMPv2 snooping, the device does the following:

·          Clears all IGMP snooping forwarding entries that are dynamically added.

·          Keeps static IGMPv3 snooping forwarding entries (*, G).

·          Clears static IGMPv3 snooping forwarding entries (S, G), which will be restored when IGMP snooping is switched back to IGMPv3 snooping.

For more information about static IGMP snooping forwarding entries, see "Configuring static ports."

You can specify the version for the specified VLANs in IGMP-snooping view or for a VLAN in VLAN view. For a VLAN, the configuration in VLAN view has the same priority as the configuration in IGMP-snooping view, and the most recent configuration takes effect.

Specifying an IGMP snooping version in IGMP-snooping view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IGMP snooping globally and enter IGMP-snooping view.

igmp-snooping

N/A

3.       Specify the IGMP snooping version for the specified VLANs.

version version-number vlan vlan-list

The default setting is IGMPv2 snooping.

 

Specifying an IGMP snooping version in VLAN view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Specify the version of IGMP snooping.

igmp-snooping version version-number

The default setting is IGMPv2 snooping.
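
A minimal sketch of both methods, with VLAN 100 as an illustrative VLAN:

```
# Specify IGMPv3 snooping for VLAN 100 in IGMP-snooping view.
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] version 3 vlan 100
[Sysname-igmp-snooping] quit

# Alternatively, specify the version for VLAN 100 in VLAN view.
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping version 3
```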

 

Setting the maximum number of IGMP snooping forwarding entries

You can modify the maximum number of IGMP snooping forwarding entries, including dynamic entries and static entries. When the number of forwarding entries on the device reaches the upper limit, the device does not automatically remove any existing entries. As a best practice, manually remove some entries to allow new entries to be created.

To set the maximum number of IGMP snooping forwarding entries:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Set the maximum number of IGMP snooping forwarding entries.

entry-limit limit

The default setting is 4294967295.
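
For example, to cap the forwarding table at 4096 entries (an illustrative value):

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] entry-limit 4096
```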

 

Setting the IGMP last member query interval

A receiver host starts a report delay timer for a multicast group when it receives an IGMP group-specific query for the group. This timer is set to a random value in the range of 0 to the maximum response time advertised in the query. When the timer value decreases to 0, the host sends an IGMP report to the group.

The IGMP last member query interval defines the maximum response time advertised in IGMP group-specific queries. Set an appropriate value for the IGMP last member query interval to speed up hosts' responses to IGMP group-specific queries and avoid IGMP report traffic bursts.

Configuration restrictions and guidelines

When you set the IGMP last member query interval, follow these restrictions and guidelines:

·          The Layer 2 device does not send an IGMP group-specific query if it receives an IGMP leave message from a port enabled with fast-leave processing.

·          You can set the IGMP last member query interval globally for all VLANs in IGMP-snooping view or for a VLAN in VLAN view. For a VLAN, the VLAN-specific configuration takes priority over the global configuration.

Setting the IGMP last member query interval globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Set the IGMP last member query interval globally.

last-member-query-interval interval

The default setting is 1 second.

 

Setting the IGMP last member query interval in a VLAN

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Set the IGMP last member query interval in the VLAN.

igmp-snooping last-member-query-interval interval

The default setting is 1 second.
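
For example, to set a global interval of 3 seconds and override it with 2 seconds in VLAN 100 (both values and the VLAN are illustrative):

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] last-member-query-interval 3
[Sysname-igmp-snooping] quit
# The VLAN-specific value takes priority over the global value.
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping last-member-query-interval 2
```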

 

Configuring IGMP snooping port features

Before you configure IGMP snooping port features, complete the following tasks:

·          Enable IGMP snooping for the VLAN.

·          Determine the aging timer for dynamic router ports.

·          Determine the aging timer for dynamic member ports.

·          Determine the addresses of the multicast group and multicast source.

Setting aging timers for dynamic ports

When you set aging timers for dynamic ports, follow these guidelines:

·          If the memberships of multicast groups frequently change, you can set a relatively small value for the aging timer of the dynamic member ports. If the memberships of multicast groups rarely change, you can set a relatively large value.

·          If a dynamic router port receives a PIMv2 hello message, the aging timer for the port is specified by the hello message. In this case, the router-aging-time command or the igmp-snooping router-aging-time command does not take effect on the port.

·          IGMP group-specific queries originated by the Layer 2 device trigger the adjustment of aging timers for dynamic member ports. If a dynamic member port receives such a query, its aging timer is set to twice the IGMP last member query interval. For more information about setting the IGMP last member query interval on the Layer 2 device, see "Setting the IGMP last member query interval."

·          You can set the timers globally for all VLANs in IGMP-snooping view or for a VLAN in VLAN view. For a VLAN, the VLAN-specific configuration takes priority over the global configuration.

Setting the aging timers for dynamic ports globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Set the aging timer for dynamic router ports globally.

router-aging-time interval

The default setting is 260 seconds.

4.       Set the aging timer for dynamic member ports globally.

host-aging-time interval

The default setting is 260 seconds.

 

Setting the aging timers for dynamic ports in a VLAN

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Set the aging timer for dynamic router ports in the VLAN.

igmp-snooping router-aging-time interval

The default setting is 260 seconds.

4.       Set the aging timer for dynamic member ports in the VLAN.

igmp-snooping host-aging-time interval

The default setting is 260 seconds.
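
For example, to set 500-second global aging timers and override them with 300 seconds in VLAN 100 (values and VLAN are illustrative):

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] router-aging-time 500
[Sysname-igmp-snooping] host-aging-time 500
[Sysname-igmp-snooping] quit
# VLAN-specific timers take priority over the global timers.
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping router-aging-time 300
[Sysname-vlan100] igmp-snooping host-aging-time 300
```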

 

Configuring static ports

You can configure the following types of static ports:

·          Static member port—When you configure a port as a static member port for a multicast group, all hosts attached to the port will receive multicast data for the group.

The static member port does not respond to IGMP queries. When you complete or cancel this configuration on a port, the port does not send an unsolicited IGMP report or leave message.

·          Static router port—When you configure a port as a static router port for a multicast group, all multicast data for the group received on the port will be forwarded.

Static member ports and static router ports never age out. To remove such a port, use the undo igmp-snooping static-group or undo igmp-snooping static-router-port command.

To configure static ports:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Configure the port as a static port.

·         Configure the port as a static member port:
igmp-snooping static-group
group-address [ source-ip source-address ] vlan vlan-id

·         Configure the port as a static router port.
igmp-snooping static-router-port vlan vlan-id

By default, a port is not a static member port or a static router port.
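
For example, assuming an illustrative port Ten-GigabitEthernet 1/0/1, multicast group 224.1.1.1, and VLAN 100:

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
# Configure the port as a static member port for group 224.1.1.1 in VLAN 100.
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping static-group 224.1.1.1 vlan 100
# Configure the same port as a static router port in VLAN 100.
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping static-router-port vlan 100
```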

 

Configuring a port as a simulated member host

When a port is configured as a simulated member host, it is equivalent to an independent host in the following ways:

·          It sends an unsolicited IGMP report when you complete the configuration.

·          It responds to IGMP general queries with IGMP reports.

·          It sends an IGMP leave message when you cancel the configuration.

The version of IGMP running on the simulated member host is the same as the version of IGMP snooping running on the port. The port ages out in the same way as a dynamic member port.

To configure a port as a simulated member host:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Configure the port as a simulated member host.

igmp-snooping host-join group-address [ source-ip source-address ] vlan vlan-id

By default, the port is not a simulated member host.
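
For example, assuming an illustrative port Ten-GigabitEthernet 1/0/1, group 224.1.1.1, and VLAN 100:

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
# The port joins group 224.1.1.1 in VLAN 100 as if it were an attached host.
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping host-join 224.1.1.1 vlan 100
```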

 

Enabling fast-leave processing

This feature enables the switch to immediately remove the port that receives a leave message from the forwarding entry of the multicast group specified in the message.

When you enable the IGMP snooping fast-leave processing feature, follow these guidelines:

·          As a best practice, enable this feature on a port that has only one receiver in a VLAN. If you enable this feature on a port that has multiple receivers, after a receiver leaves a group, other receivers cannot receive multicast data for the group.

·          You can enable fast-leave processing globally for all ports in IGMP-snooping view or for a port in interface view. For a port, the port-specific configuration takes priority over the global configuration.

Enabling IGMP snooping fast-leave processing globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Enable fast-leave processing globally.

fast-leave [ vlan vlan-list ]

By default, fast-leave processing is disabled.

 

Enabling IGMP snooping fast-leave processing on a port

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Enable IGMP snooping fast-leave processing on the port.

igmp-snooping fast-leave [ vlan vlan-list ]

By default, fast-leave processing is disabled.
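
For example, to enable fast-leave processing globally for VLAN 100, or on a single port (the VLAN and port are illustrative):

```
# Globally, in IGMP-snooping view.
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] fast-leave vlan 100
[Sysname-igmp-snooping] quit

# Or per port; the port-specific setting takes priority over the global setting.
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping fast-leave vlan 100
```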

 

Configuring the IGMP snooping querier

This section describes how to configure the IGMP snooping querier.

Configuration prerequisites

Before you configure the IGMP snooping querier, complete the following tasks:

·          Enable IGMP snooping for the VLAN.

·          Determine the IGMP general query interval.

·          Determine the maximum response time for IGMP general queries.

Enabling the IGMP snooping querier

This feature enables the switch to periodically send IGMP general queries to establish and maintain multicast forwarding entries at the data link layer. You can enable the IGMP snooping querier on a network without Layer 3 multicast devices.

Configuration restrictions and guidelines

When you enable the IGMP snooping querier, follow these restrictions and guidelines:

·          Do not enable the IGMP snooping querier on a multicast network that runs IGMP. The IGMP snooping querier does not take part in IGMP querier elections. However, it might affect IGMP querier elections if it sends IGMP general queries with a low source IP address. For more information about the IGMP querier election, see "Configuring IGMP."

·          Assume that an RB acts as both the IGMP snooping querier and the AVF of a VLAN on a TRILL network. As a best practice, configure the appointed port of the VLAN as a static router port to ensure that IGMP snooping forwarding entries can be created. For more information about TRILL, RBs, AVFs, and appointed ports, see TRILL Configuration Guide.

Configuration procedure

To enable the IGMP snooping querier:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Enable the IGMP snooping querier.

igmp-snooping querier

By default, the IGMP snooping querier is disabled.
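
For example, to make the switch act as the IGMP snooping querier in an illustrative VLAN 100:

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping querier
```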

 

Configuring parameters for IGMP general queries and responses

CAUTION:

To avoid mistakenly deleting multicast group members, make sure the IGMP general query interval is greater than the maximum response time for IGMP general queries.

 

You can modify the IGMP general query interval for a VLAN based on the actual network condition.

A receiver host starts a timer for each multicast group that it has joined when it receives an IGMP general query. This timer is initialized to a random value in the range of 0 to the maximum response time advertised in the query. When the timer decreases to 0, the host sends an IGMP report to the multicast group.

Set an appropriate value for the maximum response time for IGMP general queries to speed up hosts' responses to IGMP general queries and avoid IGMP report traffic bursts.

You can configure the maximum response time for IGMP general queries globally for all VLANs in IGMP-snooping view or for a VLAN in VLAN view. For a VLAN, the VLAN-specific configuration takes priority over the global configuration.

Configuring parameters for IGMP general queries and responses globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Set the maximum response time for IGMP general queries.

max-response-time interval

The default setting is 10 seconds.

 

Configuring parameters for IGMP general queries and responses in a VLAN

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Set the IGMP general query interval in the VLAN.

igmp-snooping query-interval interval

The default setting is 125 seconds.

4.       Set the maximum response time for IGMP general queries in the VLAN.

igmp-snooping max-response-time interval

The default setting is 10 seconds.
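
For example, using illustrative values for VLAN 100 that keep the general query interval greater than the maximum response time:

```
<Sysname> system-view
[Sysname] vlan 100
# The query interval (60 seconds) must be greater than the maximum response time (5 seconds).
[Sysname-vlan100] igmp-snooping query-interval 60
[Sysname-vlan100] igmp-snooping max-response-time 5
```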

 

Configuring parameters for IGMP messages

This section describes how to configure parameters for IGMP messages.

Configuration prerequisites

Before you configure parameters for IGMP messages, complete the following tasks:

·          Enable IGMP snooping for the VLAN.

·          Determine the source IP address of IGMP general queries.

·          Determine the source IP address of IGMP group-specific queries.

·          Determine the source IP address of IGMP reports.

·          Determine the source IP address of IGMP leave messages.

Configuring source IP addresses for IGMP messages

The IGMP snooping querier might send IGMP general queries with the source IP address 0.0.0.0. The port that receives such queries will not be maintained as a dynamic router port. This might prevent the associated dynamic IGMP snooping forwarding entry from being correctly created at the data link layer, which can eventually cause multicast traffic forwarding failures.

To avoid this problem, you can configure a non-all-zero IP address as the source IP address of the IGMP queries on the IGMP snooping querier. This configuration might affect the IGMP querier election within the subnet.

You can also change the source IP address of IGMP reports or leave messages sent by a simulated member host or an IGMP snooping proxy.

To configure source IP addresses for IGMP messages in a VLAN:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Configure the source IP address for IGMP general queries.

igmp-snooping general-query source-ip ip-address

By default, the source IP address of IGMP general queries is the IP address of the current VLAN interface. If the current VLAN interface does not have an IP address, the source IP address is 0.0.0.0.

4.       Configure the source IP address for IGMP group-specific queries.

igmp-snooping special-query source-ip ip-address

By default, the source IP address of IGMP group-specific queries is one of the following:

·         The source IP address of received IGMP general queries if the IGMP snooping querier has received IGMP general queries.

·         The IP address of the current VLAN interface if the IGMP snooping querier does not receive an IGMP general query.

·         0.0.0.0 if the IGMP snooping querier does not receive an IGMP general query and the current VLAN interface does not have an IP address.

5.       Configure the source IP address for IGMP reports.

igmp-snooping report source-ip ip-address

By default, the source IP address of IGMP reports is the IP address of the current VLAN interface. If the current VLAN interface does not have an IP address, the source IP address is 0.0.0.0.

6.       Configure the source IP address for IGMP leave messages.

igmp-snooping leave source-ip ip-address

By default, the source IP address of IGMP leave messages is the IP address of the current VLAN interface. If the current VLAN interface does not have an IP address, the source IP address is 0.0.0.0.
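
For example, assuming an illustrative source address 10.1.1.1 in VLAN 100:

```
<Sysname> system-view
[Sysname] vlan 100
# Use a non-all-zero source address so that receiving ports are maintained as dynamic router ports.
[Sysname-vlan100] igmp-snooping general-query source-ip 10.1.1.1
[Sysname-vlan100] igmp-snooping special-query source-ip 10.1.1.1
```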

 

Configuring IGMP snooping policies

Before you configure IGMP snooping policies, complete the following tasks:

·          Enable IGMP snooping for the VLAN.

·          Determine the ACL used as the multicast group policy.

·          Determine the maximum number of multicast groups that a port can join.

Configuring a multicast group policy

This feature enables the switch to filter IGMP reports by using an ACL that specifies the multicast groups and the optional sources. Use this feature to control the multicast groups that receiver hosts can join.

When you configure a multicast group policy, follow these guidelines:

·          This configuration takes effect only on the multicast groups that a port joins dynamically.

·          You can configure a multicast group policy globally for all ports in IGMP-snooping view or for a port in interface view. For a port, the port-specific configuration takes priority over the global configuration.

Configuring a multicast group policy globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Configure a multicast group policy globally.

group-policy acl-number [ vlan vlan-list ]

By default, no multicast group policies exist. Hosts can join any multicast groups.

 

Configuring a multicast group policy on a port

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Configure a multicast group policy on the port.

igmp-snooping group-policy acl-number [ vlan vlan-list ]

By default, no multicast group policies exist. Hosts attached to the port can join any multicast groups.
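
For example, to permit hosts on a port to join only group 224.1.1.1 in VLAN 100 (the ACL number, group address, port, and VLAN are illustrative, and the basic ACL syntax may vary by software release):

```
<Sysname> system-view
# Create a basic ACL that permits only multicast group 224.1.1.1.
[Sysname] acl number 2001
[Sysname-acl-basic-2001] rule permit source 224.1.1.1 0
[Sysname-acl-basic-2001] quit
# Apply the ACL as a multicast group policy on the port.
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping group-policy 2001 vlan 100
```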

 

Configuring multicast source port filtering

This feature enables the switch to discard all multicast data packets and to accept multicast protocol packets. You can enable this feature on ports that connect only to multicast receivers.

You can enable this feature for the specified ports in IGMP-snooping view or for a port in interface view. For a port, the configuration in interface view has the same priority as the configuration in IGMP-snooping view, and the most recent configuration takes effect.

Configuring multicast source port filtering globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Enable multicast source port filtering.

source-deny port interface-list

By default, multicast source port filtering is disabled.

 

Configuring multicast source port filtering on a port

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view.

interface interface-type interface-number

N/A

3.       Enable multicast source port filtering on the port.

igmp-snooping source-deny

By default, multicast source port filtering is disabled on the port.
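
For example, on an illustrative receiver-facing port:

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
# Drop multicast data packets received on this port; multicast protocol packets are still accepted.
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping source-deny
```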

 

Enabling dropping unknown multicast data

This feature enables the switch to drop all unknown multicast data. Unknown multicast data refers to multicast data for which no forwarding entries exist in the IGMP snooping forwarding table.

If you do not enable this feature, unknown multicast data is flooded in the VLAN to which the data belongs.

To enable dropping unknown multicast data for a VLAN:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter VLAN view.

vlan vlan-id

N/A

3.       Enable dropping unknown multicast data for the VLAN.

igmp-snooping drop-unknown

By default, dropping unknown multicast data is disabled. Unknown multicast data is flooded.
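
For example, for an illustrative VLAN 100:

```
<Sysname> system-view
[Sysname] vlan 100
# Drop multicast data that matches no IGMP snooping forwarding entry instead of flooding it.
[Sysname-vlan100] igmp-snooping drop-unknown
```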

 

Enabling IGMP report suppression

This feature enables the switch to forward only the first IGMP report for a multicast group to its directly connected Layer 3 device. Other reports for the same group in the same query interval are discarded. Use this feature to reduce multicast traffic.

To enable IGMP report suppression:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Enable IGMP report suppression.

report-aggregation

By default, IGMP report suppression is disabled.
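
A minimal sketch:

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] report-aggregation
```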

 

Setting the maximum number of multicast groups on a port

You can set the maximum number of multicast groups on a port to regulate the port traffic.

When you set the maximum number of multicast groups on a port, follow these guidelines:

·          This configuration takes effect only on the multicast groups that a port joins dynamically.

·          If the number of multicast groups on a port exceeds the limit, the system removes all the forwarding entries related to that port. The receiver hosts attached to that port can then join multicast groups again until the number of multicast groups on the port reaches the limit.

To set the maximum number of multicast groups on a port:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Set the maximum number of multicast groups on a port.

igmp-snooping group-limit limit [ vlan vlan-list ]

The default setting is 4294967295.
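
For example, to allow an illustrative port to join at most 10 groups in VLAN 100:

```
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping group-limit 10 vlan 100
```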

 

Enabling multicast group replacement

This feature enables the switch to replace an existing group with a newly joined group when the number of groups exceeds the upper limit. This feature is typically used in channel switching applications. Without this feature, the Layer 2 device discards IGMP reports for new groups, and the user cannot change to a new channel.

Configuration restrictions and guidelines

When you enable multicast group replacement, follow these guidelines:

·          This configuration takes effect only on the multicast groups that a port joins dynamically.

·          You can enable this feature globally for all ports in IGMP-snooping view or for a port in interface view. For a port, the port-specific configuration takes priority over the global configuration.

Enabling multicast group replacement globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter IGMP-snooping view.

igmp-snooping

N/A

3.       Enable multicast group replacement globally.

overflow-replace [ vlan vlan-list ]

By default, multicast group replacement is disabled.

 

Enabling multicast group replacement on a port

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Enable multicast group replacement on a port.

igmp-snooping overflow-replace [ vlan vlan-list ]

By default, multicast group replacement is disabled.
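
For example, to enable multicast group replacement globally for VLAN 100, or on an illustrative port:

```
# Globally, in IGMP-snooping view.
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] overflow-replace vlan 100
[Sysname-igmp-snooping] quit

# Or per port; the port-specific setting takes priority over the global setting.
[Sysname] interface ten-gigabitethernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] igmp-snooping overflow-replace vlan 100
```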

 

Displaying and maintaining IGMP snooping

Execute display commands in any view and reset commands in user view.

 

Task

Command

Display IGMP snooping status.

display igmp-snooping [ global | vlan vlan-id ]

Display dynamic IGMP snooping forwarding entries (in standalone mode).

display igmp-snooping group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ] [ slot slot-number ]

Display dynamic IGMP snooping forwarding entries (in IRF mode).

display igmp-snooping group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ] [ chassis chassis-number slot slot-number ]

Display static IGMP snooping forwarding entries (in standalone mode).

display igmp-snooping static-group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ] [ slot slot-number ]

Display static IGMP snooping forwarding entries (in IRF mode).

display igmp-snooping static-group [ group-address | source-address ] * [ vlan vlan-id ] [ verbose ] [ chassis chassis-number slot slot-number ]

Display dynamic router port information (in standalone mode).

display igmp-snooping router-port [ vlan vlan-id ] [ slot slot-number ]

Display dynamic router port information (in IRF mode).

display igmp-snooping router-port [ vlan vlan-id ] [ chassis chassis-number slot slot-number ]

Display static router port information (in standalone mode).

display igmp-snooping static-router-port [ vlan vlan-id ] [ slot slot-number ]

Display static router port information (in IRF mode).

display igmp-snooping static-router-port [ vlan vlan-id ] [ chassis chassis-number slot slot-number ]

Display statistics for the IGMP messages learned through IGMP snooping.

display igmp-snooping statistics

Display information about Layer 2 IP multicast groups (in standalone mode).

display l2-multicast ip [ group group-address | source source-address ] * [ vlan vlan-id ] [ slot slot-number ]

Display information about Layer 2 IP multicast groups (in IRF mode).

display l2-multicast ip [ group group-address | source source-address ] * [ vlan vlan-id ] [ chassis chassis-number slot slot-number ]

Display Layer 2 IP multicast group entries (in standalone mode).

display l2-multicast ip forwarding [ group group-address | source source-address ] * [ vlan vlan-id ] [ slot slot-number ]

Display Layer 2 IP multicast group entries (in IRF mode).

display l2-multicast ip forwarding [ group group-address | source source-address ] * [ vlan vlan-id ] [ chassis chassis-number slot slot-number ]

Display information about Layer 2 MAC multicast groups (in standalone mode).

display l2-multicast mac [ mac-address ] [ vlan vlan-id ] [ slot slot-number ]

Display information about Layer 2 MAC multicast groups (in IRF mode).

display l2-multicast mac [ mac-address ] [ vlan vlan-id ] [ chassis chassis-number slot slot-number ]

Display Layer 2 MAC multicast group entries (in standalone mode).

display l2-multicast mac forwarding [ mac-address ] [ vlan vlan-id ] [ slot slot-number ]

Display Layer 2 MAC multicast group entries (in IRF mode).

display l2-multicast mac forwarding [ mac-address ] [ vlan vlan-id ] [ chassis chassis-number slot slot-number ]

Clear dynamic IGMP snooping forwarding entries.

reset igmp-snooping group { group-address [ source-address ] | all } [ vlan vlan-id ]

Clear dynamic router port information.

reset igmp-snooping router-port { all | vlan vlan-id }

Clear statistics for the IGMP messages learned through IGMP snooping.

reset igmp-snooping statistics

 

IGMP snooping configuration examples

Group policy configuration example

Network requirements

As shown in Figure 12, Router A runs IGMPv2 and acts as the IGMP querier. Switch A runs IGMPv2 snooping.

Configure a multicast group policy to meet the following requirements:

·          Host A and Host B receive only the multicast data addressed to the multicast group 224.1.1.1.

·          Switch A drops unknown multicast data instead of flooding it in VLAN 100.

Figure 12 Network diagram

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 12. (Details not shown.)

2.        On Router A:

# Enable IP multicast routing.

<RouterA> system-view

[RouterA] multicast routing

[RouterA-mrib] quit

# Enable IGMP and PIM-DM on FortyGigE 1/0/1.

[RouterA] interface fortygige 1/0/1

[RouterA-FortyGigE1/0/1] igmp enable

[RouterA-FortyGigE1/0/1] pim dm

[RouterA-FortyGigE1/0/1] quit

# Enable PIM-DM on FortyGigE 1/0/2.

[RouterA] interface fortygige 1/0/2

[RouterA-FortyGigE1/0/2] pim dm

[RouterA-FortyGigE1/0/2] quit

3.        Configure Switch A:

# Enable IGMP snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 100, and assign FortyGigE 1/0/1 through FortyGigE 1/0/4 to the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port fortygige 1/0/1 to fortygige 1/0/4

# Enable IGMP snooping, and enable dropping unknown multicast data for VLAN 100.

[SwitchA-vlan100] igmp-snooping enable

[SwitchA-vlan100] igmp-snooping drop-unknown

[SwitchA-vlan100] quit

# Configure a multicast group policy so that the hosts in VLAN 100 can join only the multicast group 224.1.1.1.

[SwitchA] acl number 2001

[SwitchA-acl-basic-2001] rule permit source 224.1.1.1 0

[SwitchA-acl-basic-2001] quit

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] group-policy 2001 vlan 100

[SwitchA-igmp-snooping] quit

Verifying the configuration

# Send IGMP reports from Host A and Host B to join the multicast groups 224.1.1.1 and 224.2.2.2. (Details not shown.)

# Display information about dynamic IGMP snooping forwarding entries in VLAN 100 on Switch A.

[SwitchA] display igmp-snooping group vlan 100

Total 1 entries.

 

VLAN 100: Total 1 entries.

  (0.0.0.0, 224.1.1.1)

    Host slots (1 in total):

      1

    Host ports (2 in total):

      FGE1/0/3

      FGE1/0/4

The output shows the following information:

·          Host A and Host B have joined the multicast group 224.1.1.1 through the member ports FortyGigE 1/0/4 and FortyGigE 1/0/3 on Switch A, respectively.

·          Host A and Host B have failed to join the multicast group 224.2.2.2. This means that the multicast group policy has taken effect.

Static port configuration example

Network requirements

As shown in Figure 13:

·          Router A runs IGMPv2 and serves as the IGMP querier. Switch A, Switch B, and Switch C run IGMPv2 snooping.

·          Host A and Host C are permanent receivers of multicast group 224.1.1.1.

Configure static ports to meet the following requirements:

·          To enhance the reliability of multicast traffic transmission, configure FortyGigE 1/0/3 and FortyGigE 1/0/5 on Switch C as static member ports for multicast group 224.1.1.1.

·          Suppose the STP runs on the network. The forwarding path from Switch A to Switch C is blocked to avoid data loops. Multicast data flows to the receivers attached to Switch C only along the path of Switch A—Switch B—Switch C. When this path is blocked, a maximum of one IGMP query-response cycle must be completed before multicast data flows to the receivers along the path of Switch A—Switch C. In this case, the multicast delivery is interrupted. For more information about the STP, see Layer 2—LAN Switching Configuration Guide.

Configure FortyGigE 1/0/3 on Switch A as a static router port. Then, multicast data can flow to the receivers nearly uninterruptedly along the path of Switch A—Switch C when the path of Switch A—Switch B—Switch C is blocked.

Figure 13 Network diagram

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 13. (Details not shown.)

2.        Configure Router A:

# Enable IP multicast routing.

<RouterA> system-view

[RouterA] multicast routing

[RouterA-mrib] quit

# Enable IGMP and PIM-DM on FortyGigE 1/0/1.

[RouterA] interface fortygige 1/0/1

[RouterA-FortyGigE1/0/1] igmp enable

[RouterA-FortyGigE1/0/1] pim dm

[RouterA-FortyGigE1/0/1] quit

# Enable PIM-DM on FortyGigE 1/0/2.

[RouterA] interface fortygige 1/0/2

[RouterA-FortyGigE1/0/2] pim dm

[RouterA-FortyGigE1/0/2] quit

3.        Configure Switch A:

# Enable IGMP snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 100, assign FortyGigE 1/0/1 through FortyGigE 1/0/3 to the VLAN, and enable IGMP snooping for the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port fortygige 1/0/1 to fortygige 1/0/3

[SwitchA-vlan100] igmp-snooping enable

[SwitchA-vlan100] quit

# Configure FortyGigE 1/0/3 as a static router port.

[SwitchA] interface fortygige 1/0/3

[SwitchA-FortyGigE1/0/3] igmp-snooping static-router-port vlan 100

[SwitchA-FortyGigE1/0/3] quit

4.        Configure Switch B:

# Enable IGMP snooping globally.

<SwitchB> system-view

[SwitchB] igmp-snooping

[SwitchB-igmp-snooping] quit

# Create VLAN 100, assign FortyGigE 1/0/1 and FortyGigE 1/0/2 to the VLAN, and enable IGMP snooping for the VLAN.

[SwitchB] vlan 100

[SwitchB-vlan100] port fortygige 1/0/1 fortygige 1/0/2

[SwitchB-vlan100] igmp-snooping enable

[SwitchB-vlan100] quit

5.        Configure Switch C:

# Enable IGMP snooping globally.

<SwitchC> system-view

[SwitchC] igmp-snooping

[SwitchC-igmp-snooping] quit

# Create VLAN 100, assign FortyGigE 1/0/1 through FortyGigE 1/0/5 to the VLAN, and enable IGMP snooping for the VLAN.

[SwitchC] vlan 100

[SwitchC-vlan100] port fortygige 1/0/1 to fortygige 1/0/5

[SwitchC-vlan100] igmp-snooping enable

[SwitchC-vlan100] quit

# Configure FortyGigE 1/0/3 and FortyGigE 1/0/5 as static member ports for the multicast group 224.1.1.1.

[SwitchC] interface fortygige 1/0/3

[SwitchC-FortyGigE1/0/3] igmp-snooping static-group 224.1.1.1 vlan 100

[SwitchC-FortyGigE1/0/3] quit

[SwitchC] interface fortygige 1/0/5

[SwitchC-FortyGigE1/0/5] igmp-snooping static-group 224.1.1.1 vlan 100

[SwitchC-FortyGigE1/0/5] quit

Verifying the configuration

# Display information about static router ports in VLAN 100 on Switch A.

[SwitchA] display igmp-snooping static-router-port vlan 100

VLAN 100:

  Router slots (1 in total):

    1

  Router ports (1 in total):

    FGE1/0/3

The output shows that FortyGigE 1/0/3 on Switch A has become a static router port.

# Display information about static IGMP snooping forwarding entries in VLAN 100 on Switch C.

[SwitchC] display igmp-snooping static-group vlan 100

Total 1 entries.

 

VLAN 100: Total 1 entries.

  (0.0.0.0, 224.1.1.1)

    Host slots (1 in total):

      1

    Host ports (2 in total):

      FGE1/0/3

      FGE1/0/5

The output shows that FortyGigE 1/0/3 and FortyGigE 1/0/5 on Switch C have become static member ports of the multicast group 224.1.1.1.

IGMP snooping querier configuration example

Network requirements

As shown in Figure 14:

·          The network is a Layer 2-only network.

·          Source 1 and Source 2 send multicast data to the multicast groups 224.1.1.1 and 225.1.1.1, respectively.

·          Host A and Host C are receivers of multicast group 224.1.1.1, and Host B and Host D are receivers of multicast group 225.1.1.1.

·          All host receivers run IGMPv2, and all switches run IGMPv2 snooping. Switch A (which is close to the multicast sources) acts as the IGMP snooping querier.

Configure the switches to meet the following requirements:

·          To prevent the switches from flooding unknown packets in VLAN 100, enable all the switches to drop unknown multicast packets.

·          A switch does not mark a port that receives an IGMP query with source IP address 0.0.0.0 as a dynamic router port. To ensure the establishment of Layer 2 forwarding entries and multicast traffic forwarding, configure the source IP addresses of IGMP queries as non-zero IP addresses.

Figure 14 Network diagram

 

Configuration procedure

1.        Configure Switch A:

# Enable IGMP snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 100, and assign FortyGigE 1/0/1 through FortyGigE 1/0/3 to the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port fortygige 1/0/1 to fortygige 1/0/3

# In VLAN 100, enable IGMP snooping, and enable dropping unknown multicast packets.

[SwitchA-vlan100] igmp-snooping enable

[SwitchA-vlan100] igmp-snooping drop-unknown

# In VLAN 100, configure Switch A as the IGMP snooping querier.

[SwitchA-vlan100] igmp-snooping querier

# In VLAN 100, configure the source IP addresses of IGMP general queries and IGMP group-specific queries as 192.168.1.1.

[SwitchA-vlan100] igmp-snooping general-query source-ip 192.168.1.1

[SwitchA-vlan100] igmp-snooping special-query source-ip 192.168.1.1

[SwitchA-vlan100] quit

2.        Configure Switch B:

# Enable IGMP snooping globally.

<SwitchB> system-view

[SwitchB] igmp-snooping

[SwitchB-igmp-snooping] quit

# Create VLAN 100, and assign FortyGigE 1/0/1 through FortyGigE 1/0/4 to the VLAN.

[SwitchB] vlan 100

[SwitchB-vlan100] port fortygige 1/0/1 to fortygige 1/0/4

# In VLAN 100, enable IGMP snooping, and enable dropping unknown multicast packets.

[SwitchB-vlan100] igmp-snooping enable

[SwitchB-vlan100] igmp-snooping drop-unknown

[SwitchB-vlan100] quit

3.        Configure Switch C:

# Enable IGMP snooping globally.

<SwitchC> system-view

[SwitchC] igmp-snooping

[SwitchC-igmp-snooping] quit

# Create VLAN 100, and assign FortyGigE 1/0/1 through FortyGigE 1/0/3 to the VLAN.

[SwitchC] vlan 100

[SwitchC-vlan100] port fortygige 1/0/1 to fortygige 1/0/3

# In VLAN 100, enable IGMP snooping, and enable dropping unknown multicast packets.

[SwitchC-vlan100] igmp-snooping enable

[SwitchC-vlan100] igmp-snooping drop-unknown

[SwitchC-vlan100] quit

4.        Configure Switch D:

# Enable IGMP snooping globally.

<SwitchD> system-view

[SwitchD] igmp-snooping

[SwitchD-igmp-snooping] quit

# Create VLAN 100, and assign FortyGigE 1/0/1 and FortyGigE 1/0/2 to the VLAN.

[SwitchD] vlan 100

[SwitchD-vlan100] port fortygige 1/0/1 to fortygige 1/0/2

# In VLAN 100, enable IGMP snooping, and enable dropping unknown multicast packets.

[SwitchD-vlan100] igmp-snooping enable

[SwitchD-vlan100] igmp-snooping drop-unknown

[SwitchD-vlan100] quit

Verifying the configuration

# Display statistics for IGMP messages learned through IGMP snooping on Switch B.

[SwitchB] display igmp-snooping statistics

Received IGMP general queries:  3

Received IGMPv1 reports:  0

Received IGMPv2 reports:  12

Received IGMP leaves:  0

Received IGMPv2 specific queries:  0

Sent     IGMPv2 specific queries:  0

Received IGMPv3 reports:  0

Received IGMPv3 reports with right and wrong records:  0

Received IGMPv3 specific queries:  0

Received IGMPv3 specific sg queries:  0

Sent     IGMPv3 specific queries:  0

Sent     IGMPv3 specific sg queries:  0

Received error IGMP messages:  0

The output shows that all switches except Switch A can receive the IGMP general queries after Switch A acts as the IGMP snooping querier.

Troubleshooting IGMP snooping

Layer 2 multicast forwarding cannot function

Symptom

Layer 2 multicast forwarding cannot function on the switch.

Solution

To resolve the problem:

1.        Use the display igmp-snooping command to display IGMP snooping status.

2.        If IGMP snooping is not enabled, use the igmp-snooping command in system view to enable IGMP snooping globally. Then, use the igmp-snooping enable command in VLAN view to enable IGMP snooping for the VLAN.

3.        If IGMP snooping is enabled globally but not enabled for the VLAN, use the igmp-snooping enable command in VLAN view to enable IGMP snooping for the VLAN.

4.        If the problem persists, contact H3C Support.
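
The checks above can be sketched as a command sequence (VLAN 100 is an assumed example):

# Verify IGMP snooping status. Then, enable IGMP snooping globally and in the VLAN if it is disabled.

<Sysname> display igmp-snooping

<Sysname> system-view

[Sysname] igmp-snooping

[Sysname-igmp-snooping] quit

[Sysname] vlan 100

[Sysname-vlan100] igmp-snooping enable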

Multicast group policy does not work

Symptom

Hosts can receive multicast data from multicast groups that are not permitted by the multicast group policy.

Solution

To resolve the problem:

1.        Use the display acl command to verify that the configured ACL meets the multicast group policy requirements.

2.        Use the display this command in IGMP-snooping view or in a corresponding interface view to verify that the correct multicast group policy has been applied. If the applied multicast group policy is not correct, use the group-policy or igmp-snooping group-policy command to apply the correct multicast group policy.

3.        Use the display igmp-snooping command to verify that dropping unknown multicast data is enabled. If dropping unknown multicast data is not enabled, use the igmp-snooping drop-unknown command to enable dropping unknown multicast data.

4.        If the problem persists, contact H3C Support.


Configuring multicast routing and forwarding

Overview

The following tables are involved in multicast routing and forwarding:

·          Multicast routing table of each multicast routing protocol, such as the PIM routing table.

·          General multicast routing table that summarizes multicast routing information generated by different multicast routing protocols. The multicast routing information from multicast sources to multicast groups is stored in a set of (S, G) routing entries.

·          Multicast forwarding table that guides multicast forwarding. The optimal routing entries in the multicast routing table are added to the multicast forwarding table.

The term "interface" in this chapter collectively refers to VLAN interfaces and Layer 3 Ethernet interfaces. You can set an Ethernet port as a Layer 3 interface by using the port link-mode route command (see Layer 2—LAN Switching Configuration Guide).

RPF check mechanism

A multicast routing protocol creates multicast routing entries based on the existing unicast routes or static multicast routes. During this process, the reverse path forwarding (RPF) check mechanism ensures that multicast data is delivered along the correct paths and avoids data loops.

A multicast routing protocol uses the following tables to perform an RPF check:

·          Unicast routing table—Contains unicast routing information.

·          Static multicast routing table—Contains RPF routes that are manually configured.

The static multicast routing table is used only for RPF checks, not for multicast routing.

RPF check process

A multicast router performs the RPF check on a multicast packet as follows:

1.        The router chooses an optimal route back to the packet source separately from the unicast routing table and the static multicast routing table.

The term "packet source" means different things in different situations:

-  For a packet that travels along the SPT, the packet source is the multicast source.

-  For a packet that travels along the RPT, the packet source is the RP.

-  For a bootstrap message originated from the BSR, the packet source is the BSR.

For more information about the concepts of SPT, RPT, source-side RPT, RP, and BSR, see "Configuring PIM."

2.        The router selects one of the two optimal routes as the multicast RPF route as follows:

-  If the router uses the longest prefix match principle, the route with the longer prefix becomes the RPF route. If the routes have the same prefix length, the route with the higher route preference becomes the RPF route. If the routes have the same route preference, the unicast route becomes the RPF route.

For more information about the route preference, see Layer 3—IP Routing Configuration Guide.

-  If the router does not use the longest prefix match principle, the route with the higher route preference becomes the RPF route. If the routes have the same preference, the unicast route becomes the RPF route.

The RPF route contains the RPF interface and RPF neighbor information.

-  If the RPF route is a unicast route, the outgoing interface is the RPF interface and the next hop is the RPF neighbor.

-  If the RPF route is a static multicast route, the RPF interface and RPF neighbor are specified in the route.

3.        The router checks whether the packet arrived at the RPF interface. If yes, the RPF check succeeds and the packet is forwarded. If not, the RPF check fails and the packet is discarded.

RPF check implementation in multicast

Implementing an RPF check on each received multicast packet would place a significant burden on the router. The use of a multicast forwarding table solves this issue. When the router creates a multicast forwarding entry for a multicast packet, it sets the RPF interface of the packet as the incoming interface of the forwarding entry. After the router receives a multicast packet on an interface, it looks up its multicast forwarding table for a matching entry as follows:

·          If no match is found, the router first determines the RPF route back to the packet source. Then, it creates a forwarding entry with the RPF interface as the incoming interface and performs one of the following actions:

-  If the receiving interface is the RPF interface, the RPF check succeeds and the router forwards the packet out of all the outgoing interfaces.

-  If the receiving interface is not the RPF interface, the RPF check fails and the router discards the packet.

·          If a match is found and the receiving interface that received the packet is the incoming interface of the forwarding entry, the router forwards the packet out of all the outgoing interfaces.

·          If a match is found but the receiving interface is not the incoming interface of the forwarding entry, the router first determines the RPF route back to the packet source. Then, the router performs one of the following actions:

-  If the RPF interface is the incoming interface, the forwarding entry is correct but the packet traveled along a wrong path. The router discards the packet.

-  If the RPF interface is not the incoming interface, the forwarding entry has expired. The router replaces the incoming interface with the RPF interface. In this case, if the receiving interface is the RPF interface, the router forwards the packet out of all outgoing interfaces. Otherwise, it discards the packet.

Figure 15 RPF check process

 

As shown in Figure 15, assume that unicast routes are available in the network, and no static multicast routes have been configured on Switch C. A multicast packet (S, G) travels along the SPT from the multicast source to the receivers. The multicast forwarding table on Switch C contains the (S, G) entry, with VLAN-interface 20 as the incoming interface.

·          If a multicast packet arrives at Switch C on VLAN-interface 20, the receiving interface is the incoming interface of the (S, G) entry. Switch C forwards the packet out of all outgoing interfaces.

·          If a multicast packet arrives at Switch C on an interface other than VLAN-interface 20, the receiving interface is not the incoming interface of the (S, G) entry. Switch C looks up its unicast routing table and finds that the outgoing interface to the source is VLAN-interface 20. This means that the (S, G) entry is correct, but the packet traveled along a wrong path. The packet fails the RPF check, and Switch C discards it.
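
To see which route a device has selected as the RPF route for a packet source, you can use the display multicast rpf-info command described later in this chapter (the source address is an assumed example):

<Sysname> display multicast rpf-info 10.110.1.2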

Static multicast routes

Depending on the application environment, a static multicast route can change an RPF route or create an RPF route.

Changing an RPF route

Typically, the topology structure of a multicast network is the same as that of a unicast network, and multicast traffic follows the same transmission path as unicast traffic does. You can configure a static multicast route for a multicast source to change the RPF route. In this way, the router creates a transmission path for multicast traffic that is different from the transmission path for unicast traffic.

Figure 16 Changing an RPF route

 

As shown in Figure 16, when no static multicast route is configured, Switch C's RPF neighbor on the path back to the source is Switch A. The multicast data from the source travels through Switch A to Switch C. You can configure a static multicast route on Switch C and specify Switch B as Switch C's RPF neighbor on the path back to the source. The multicast data from the source will then travel along the path: Switch A to Switch B and then to Switch C.

Creating an RPF route

When a unicast route is blocked, multicast forwarding might be stopped due to lack of an RPF route. You can configure a static multicast route for a multicast source to create an RPF route. In this way, a multicast routing entry is created to guide multicast forwarding.

Figure 17 Creating an RPF route

 

As shown in Figure 17, the RIP domain and the OSPF domain are unicast-isolated from each other. When no static multicast route is configured, the receiver hosts in the OSPF domain cannot receive the multicast packets from the multicast source in the RIP domain. You can configure static multicast routes on Switch C and Switch D, specifying Switch B as Switch C's RPF neighbor and Switch C as Switch D's RPF neighbor. The receiver hosts can then receive the multicast data from the multicast source.
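
As a sketch of this scenario, each device that needs an RPF route is given a static multicast route pointing to its upstream RPF neighbor for the source network (the sysnames and all addresses here are assumptions for illustration):

# On Switch C, specify Switch B as the RPF neighbor for sources in 10.110.1.0/24.

[SwitchC] ip rpf-route-static 10.110.1.0 24 192.168.2.1

# On Switch D, specify Switch C as the RPF neighbor for the same source network.

[SwitchD] ip rpf-route-static 10.110.1.0 24 192.168.3.1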

 

 

NOTE:

A static multicast route takes effect only on the multicast router on which it is configured, and will not be advertised throughout the network or redistributed to other routers.

 

Configuration task list

Tasks at a glance

(Required.) Enabling IP multicast routing

(Optional.) Configuring multicast routing and forwarding:

·         (Optional.) Configuring static multicast routes

·         (Optional.) Specifying the longest prefix match principle

·         (Optional.) Configuring multicast load splitting

·         (Optional.) Configuring a multicast forwarding boundary

·         (Optional.) Configuring static multicast MAC address entries

 

 

NOTE:

The device can route and forward multicast data only through the primary IP addresses of interfaces, rather than their secondary addresses or unnumbered IP addresses. For more information about primary and secondary IP addresses, and IP unnumbered, see Layer 3—IP Services Configuration Guide.

 

Enabling IP multicast routing

Enable IP multicast routing before you configure any Layer 3 multicast functionality on the public network or VPN instance.

To enable IP multicast routing:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IP multicast routing and enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

By default, IP multicast routing is disabled.

 

Configuring multicast routing and forwarding

Before you configure multicast routing and forwarding, complete the following tasks:

·          Configure a unicast routing protocol so that all devices in the domain are interoperable at the network layer.

·          Enable PIM-DM or PIM-SM.

Configuring static multicast routes

To configure a static multicast route for a multicast source, you can specify an RPF interface or an RPF neighbor for the multicast traffic from that source.

To configure a static multicast route:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure a static multicast route.

ip rpf-route-static [ vpn-instance vpn-instance-name ] source-address { mask-length | mask } { rpf-nbr-address | interface-type interface-number } [ preference preference ]

By default, no static multicast route exists.

3.       (Optional.) Delete static multicast routes.

·         Delete a specific static multicast route:
undo ip rpf-route-static [ vpn-instance vpn-instance-name ] source-address { mask-length | mask } { rpf-nbr-address | interface-type interface-number }

·         Delete all static multicast routes:
delete ip rpf-route-static [ vpn-instance vpn-instance-name ]

N/A
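
For example, the following sketch (all values are assumptions for illustration) configures 192.168.2.2 as the RPF neighbor for sources in 10.110.1.0/24, with a route preference of 10:

<Sysname> system-view

[Sysname] ip rpf-route-static 10.110.1.0 24 192.168.2.2 preference 10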

 

Specifying the longest prefix match principle

You can configure the switch to use the longest prefix match principle for RPF route selection. For more information about RPF route selection, see "RPF check process."

To specify the longest prefix match principle:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

N/A

3.       Specify the longest prefix match principle.

longest-match

By default, the route preference principle is used.
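
For example, the following sketch enables the longest prefix match principle for RPF route selection on the public network:

<Sysname> system-view

[Sysname] multicast routing

[Sysname-mrib] longest-match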

 

Configuring multicast load splitting

You can enable the switch to split multiple multicast data flows on a per-source basis or on a per-source-and-group basis. This optimizes traffic delivery.

To configure multicast load splitting:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

N/A

3.       Configure multicast load splitting.

load-splitting { source | source-group }

By default, load splitting is disabled.
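
For example, the following sketch enables load splitting on a per-source-and-group basis on the public network:

<Sysname> system-view

[Sysname] multicast routing

[Sysname-mrib] load-splitting source-group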

 

Configuring a multicast forwarding boundary

You can configure an interface as a multicast forwarding boundary for a multicast group range. The interface cannot receive or forward multicast packets for the group range.

 

TIP:

You do not need to enable IP multicast routing before this configuration.

 

To configure a multicast forwarding boundary:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the interface as a multicast forwarding boundary for a multicast group range.

multicast boundary group-address { mask-length | mask }

By default, the interface is not configured as a multicast forwarding boundary.
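
For example, the following sketch (the interface and group range are assumptions for illustration) configures VLAN-interface 100 as a forwarding boundary for the administratively scoped range 239.0.0.0/8:

<Sysname> system-view

[Sysname] interface vlan-interface 100

[Sysname-Vlan-interface100] multicast boundary 239.0.0.0 8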

 

Configuring static multicast MAC address entries

In Layer 2 multicast, multicast MAC address entries can be dynamically created or added through Layer 2 multicast protocols (such as IGMP snooping). You can also manually configure static multicast MAC address entries to bind multicast MAC addresses and ports to control the destination ports of the multicast data.

 

TIP:

·      You do not need to enable IP multicast routing before this configuration.

·      The multicast MAC address in a static entry must be an unused multicast MAC address outside the 0100-5Exx-xxxx range, where x represents any hexadecimal digit from 0 to F. A multicast MAC address is a MAC address in which the least significant bit of the most significant octet is 1.

 

You can configure static multicast MAC address entries on the specified interface in system view, or on the current interface in interface view.

To configure a static multicast MAC address entry in system view:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure a static multicast MAC address entry.

mac-address multicast mac-address interface interface-list vlan vlan-id

By default, no static multicast MAC address entries exist.
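
For example, the following sketch (the MAC address, port, and VLAN are assumptions for illustration) binds a multicast MAC address to a port in VLAN 100, so that multicast frames destined for that address are sent only to the bound port:

<Sysname> system-view

[Sysname] mac-address multicast 0100-0001-0001 interface fortygige 1/0/1 vlan 100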

 

To configure a static multicast MAC address entry in interface view:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Ethernet interface/Layer 2 aggregate interface view.

interface interface-type interface-number

N/A

3.       Configure a static multicast MAC address entry.

mac-address multicast mac-address vlan vlan-id

By default, no static multicast MAC address entries exist.

 

Displaying and maintaining multicast routing and forwarding

CAUTION:

The reset commands might cause multicast data transmission failures.

 

Execute display commands in any view and reset commands in user view.

 

Task

Command

Display static multicast MAC address entries.

display mac-address [ mac-address [ vlan vlan-id ] | [ multicast ] [ vlan vlan-id ] [ count ] ]

Display information about the interfaces maintained by the MRIB.

display mrib [ vpn-instance vpn-instance-name ] interface [ interface-type interface-number ]

Display multicast boundary information.

display multicast [ vpn-instance vpn-instance-name ] boundary [ group-address [ mask-length | mask ] ] [ interface interface-type interface-number ]

Display statistics for multicast forwarding events (in standalone mode).

display multicast [ vpn-instance vpn-instance-name ] forwarding event [ slot slot-number ]

Display statistics for multicast forwarding events (in IRF mode).

display multicast [ vpn-instance vpn-instance-name ] forwarding event [ chassis chassis-number slot slot-number ]

Display multicast forwarding entries (in standalone mode).

display multicast [ vpn-instance vpn-instance-name ] forwarding-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number | outgoing-interface { exclude | include | match } interface-type interface-number | slot slot-number | statistics ] *

Display multicast forwarding entries (in IRF mode).

display multicast [ vpn-instance vpn-instance-name ] forwarding-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | chassis chassis-number slot slot-number | incoming-interface interface-type interface-number | outgoing-interface { exclude | include | match } interface-type interface-number | statistics ] *

Display multicast routing entries.

display multicast [ vpn-instance vpn-instance-name ] routing-table [ source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number | outgoing-interface { exclude | include | match } interface-type interface-number ] *

Display static multicast routing entries.

display multicast [ vpn-instance vpn-instance-name ] routing-table static [ source-address { mask-length | mask } ]

Display RPF route information for a multicast source.

display multicast [ vpn-instance vpn-instance-name ] rpf-info source-address [ group-address ]

Clear statistics for multicast forwarding events.

reset multicast [ vpn-instance vpn-instance-name ] forwarding event

Clear multicast forwarding entries.

reset multicast [ vpn-instance vpn-instance-name ] forwarding-table { { source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface { interface-type interface-number } } * | all }

Clear multicast routing entries.

reset multicast [ vpn-instance vpn-instance-name ] routing-table { { source-address [ mask { mask-length | mask } ] | group-address [ mask { mask-length | mask } ] | incoming-interface interface-type interface-number } * | all }

 

 

NOTE:

·      When a routing entry is removed, the associated forwarding entry is also removed.

·      When a forwarding entry is removed, the associated routing entry is also removed.

 

Configuration examples

Changing an RPF route

Network requirements

As shown in Figure 18:

·          PIM-DM runs in the network.

·          All switches in the network support multicast.

·          Switch A, Switch B and Switch C run OSPF.

·          Typically, the receiver host can receive the multicast data from the source through the path: Switch A to Switch B, which is the same as the unicast route.

Configure the switches so that the multicast data from Source travels to the receiver along the path Switch A to Switch C to Switch B, which is different from the unicast route.

Figure 18 Network diagram

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 18. (Details not shown.)

2.        Enable OSPF on the switches in the PIM-DM domain to make sure the following conditions are met: (Details not shown.)

?  The switches are interoperable at the network layer.

?  The switches can dynamically update their routing information.

3.        Enable IP multicast routing, and enable IGMP and PIM-DM:

# On Switch B, enable IP multicast routing.

<SwitchB> system-view

[SwitchB] multicast routing

[SwitchB-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 100.

[SwitchB] interface vlan-interface 100

[SwitchB-Vlan-interface100] igmp enable

[SwitchB-Vlan-interface100] quit

# Enable PIM-DM on the other interfaces.

[SwitchB] interface vlan-interface 101

[SwitchB-Vlan-interface101] pim dm

[SwitchB-Vlan-interface101] quit

[SwitchB] interface vlan-interface 102

[SwitchB-Vlan-interface102] pim dm

[SwitchB-Vlan-interface102] quit

# On Switch A, enable IP multicast routing, and enable PIM-DM on each interface.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

[SwitchA] interface vlan-interface 200

[SwitchA-Vlan-interface200] pim dm

[SwitchA-Vlan-interface200] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim dm

[SwitchA-Vlan-interface102] quit

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim dm

[SwitchA-Vlan-interface103] quit

# Enable IP multicast routing and PIM-DM on Switch C in the same way Switch A is configured. (Details not shown.)

4.        Display the RPF route to Source on Switch B.

[SwitchB] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface102, RPF neighbor: 30.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: igp

     Route selection rule: preference-preferred

     Load splitting rule: disable

The output shows that the current RPF route on Switch B is contributed by a unicast routing protocol and the RPF neighbor is Switch A.

5.        Configure a static multicast route on Switch B and specify Switch C as its RPF neighbor on the route to the source.

[SwitchB] ip rpf-route-static 50.1.1.100 24 20.1.1.2

Verifying the configuration

# Display RPF information for Source on Switch B.

[SwitchB] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface101, RPF neighbor: 20.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: multicast static

     Route selection rule: preference-preferred

     Load splitting rule: disable

The output shows the following information:

·          The RPF route on Switch B is the configured static multicast route.

·          The RPF neighbor of Switch B is Switch C.
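The route selection behavior in this example can be modeled with a short Python sketch. This is not device code (the switch is configured through the CLI), and the function name, route list, and preference values are illustrative assumptions. Under the default preference-preferred rule, the static multicast route wins because its preference value is smaller than that of the OSPF-learned unicast route:

```python
from ipaddress import ip_address, ip_network

def select_rpf_route(source, routes):
    """Return the matching route with the smallest preference value, or None.

    routes: list of dicts with 'prefix', 'preference', 'rpf_neighbor', 'type'.
    """
    matches = [r for r in routes if ip_address(source) in ip_network(r["prefix"])]
    if not matches:
        return None
    return min(matches, key=lambda r: r["preference"])

routes = [
    # OSPF-learned unicast route toward the source (preference 10, illustrative)
    {"prefix": "50.1.1.0/24", "preference": 10,
     "rpf_neighbor": "30.1.1.2", "type": "igp"},
    # Configured static multicast route with a better (smaller) preference
    {"prefix": "50.1.1.0/24", "preference": 1,
     "rpf_neighbor": "20.1.1.2", "type": "multicast static"},
]

best = select_rpf_route("50.1.1.100", routes)
print(best["type"], best["rpf_neighbor"])   # → multicast static 20.1.1.2
```

After the static multicast route is configured, the RPF lookup prefers it, which is why the RPF neighbor changes from 30.1.1.2 (Switch A) to 20.1.1.2 (Switch C).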

Creating an RPF route

Network requirements

As shown in Figure 19:

·          PIM-DM runs in the network and all switches in the network support IP multicast.

·          Switch B and Switch C run OSPF, and have no unicast routes to Switch A.

·          Typically, the receiver host receives the multicast data from Source 1 in the OSPF domain.

Configure the switches so that the receiver host receives multicast data from Source 2, which is outside the OSPF domain.

Figure 19 Network diagram

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 19. (Details not shown.)

2.        Enable OSPF on Switch B and Switch C to make sure the following conditions are met: (Details not shown.)

?  The switches are interoperable at the network layer.

?  The switches can dynamically update their routing information.

3.        Enable IP multicast routing, and enable IGMP and PIM-DM:

# On Switch C, enable IP multicast routing.

<SwitchC> system-view

[SwitchC] multicast routing

[SwitchC-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 100.

[SwitchC] interface vlan-interface 100

[SwitchC-Vlan-interface100] igmp enable

[SwitchC-Vlan-interface100] quit

# Enable PIM-DM on VLAN-interface 101.

[SwitchC] interface vlan-interface 101

[SwitchC-Vlan-interface101] pim dm

[SwitchC-Vlan-interface101] quit

# On Switch A, enable IP multicast routing, and enable PIM-DM on each interface.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

[SwitchA] interface vlan-interface 300

[SwitchA-Vlan-interface300] pim dm

[SwitchA-Vlan-interface300] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim dm

[SwitchA-Vlan-interface102] quit

# Enable IP multicast routing and PIM-DM on Switch B in the same way Switch A is configured. (Details not shown.)

4.        Display information about their RPF routes to Source 2 on Switch B and Switch C.

[SwitchB] display multicast rpf-info 50.1.1.100

[SwitchC] display multicast rpf-info 50.1.1.100

No output is displayed, because no RPF route to Source 2 exists on Switch B or Switch C.

5.        Configure a static multicast route:

# Configure a static multicast route on Switch B, and specify Switch A as its RPF neighbor on the route to Source 2.

[SwitchB] ip rpf-route-static 50.1.1.100 24 30.1.1.2

# Configure a static multicast route on Switch C, and specify Switch B as its RPF neighbor on the route to Source 2.

[SwitchC] ip rpf-route-static 50.1.1.100 24 20.1.1.2

Verifying the configuration

# Display RPF information for Source 2 on Switch B and Switch C.

[SwitchB] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface102, RPF neighbor: 30.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: multicast static

     Route selection rule: preference-preferred

     Load splitting rule: disable

[SwitchC] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface101, RPF neighbor: 20.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: multicast static

     Route selection rule: preference-preferred

     Load splitting rule: disable

The output shows that the RPF routes to Source 2 exist on Switch B and Switch C. The routes are the configured static routes.

Troubleshooting multicast routing and forwarding

Static multicast route failure

Symptom

No dynamic routing protocol is enabled on the routers, and the physical status and link layer status of interfaces are both up, but the static multicast route fails.

Solution

To resolve the problem:

1.        Use the display multicast routing-table static command to display information about static multicast routes. Verify that the static multicast route has been correctly configured and the route entry exists in the static multicast routing table.

2.        Check the type of the interface that connects to the RPF neighbor. If the interface is not a point-to-point interface, specify the RPF neighbor by its IP address in the static multicast route.

3.        If the problem persists, contact H3C Support.

 


Configuring IGMP

Overview

Internet Group Management Protocol (IGMP) establishes and maintains the multicast group memberships between a Layer 3 multicast device and the hosts on the directly connected subnet.

IGMP has three versions:

·          IGMPv1 (defined by RFC 1112).

·          IGMPv2 (defined by RFC 2236).

·          IGMPv3 (defined by RFC 3376).

All IGMP versions support the ASM model. In addition to the ASM model, IGMPv3 can directly implement the SSM model. IGMPv1 and IGMPv2 must work with the IGMP SSM mapping feature to implement the SSM model. For more information about the ASM and SSM models, see "Multicast overview."

The term "interface" in this chapter collectively refers to VLAN interfaces and Layer 3 Ethernet interfaces. You can set an Ethernet port as a Layer 3 interface by using the port link-mode route command (see Layer 2—LAN Switching Configuration Guide).

IGMPv1 overview

IGMPv1 manages multicast group memberships based on the query and response mechanism.

All routers that run IGMP on the same subnet can receive IGMP membership report messages (called reports) from hosts. However, only one router can act as the IGMP querier to send IGMP query messages (called queries). The querier election mechanism determines which router acts as the IGMP querier on the subnet.

In IGMPv1, the designated router (DR) elected by the multicast routing protocol (such as PIM) serves as the IGMP querier. For more information about DR, see "Configuring PIM."

Figure 20 IGMP queries and reports

 

As shown in Figure 20, Host B and Host C are interested in the multicast data addressed to the multicast group G1. Host A is interested in the multicast data addressed to G2. The following process describes how the hosts join the multicast groups and how the IGMP querier (Router B in Figure 20) maintains the multicast group memberships:

1.        The hosts send unsolicited IGMP reports to the multicast groups they want to join without having to wait for the IGMP queries from the IGMP querier.

2.        The IGMP querier periodically multicasts IGMP queries (with the destination address of 224.0.0.1) to all hosts and routers on the local subnet.

3.        After receiving a query message, Host B or Host C (the host whose delay timer expires first) sends an IGMP report to G1 to announce its membership for G1. This example assumes that Host B sends the report message.

After receiving the report from Host B, Host C suppresses its own report for G1. Because Router A and Router B already know that G1 has at least one member host on the local subnet, other members do not need to report their memberships. This IGMP report suppression mechanism helps reduce traffic on the local subnet.

4.        At the same time, Host A sends a report to G2 after receiving a query message.

5.        Through the query and response process, the IGMP routers (Router A and Router B) determine that the local subnet has members of G1 and G2. The multicast routing protocol (PIM, for example) on the routers generates (*, G1) and (*, G2) multicast forwarding entries, where asterisk (*) represents any multicast source. These entries are the basis for subsequent multicast forwarding.

6.        When the multicast data addressed to G1 or G2 reaches an IGMP router, the router looks up the multicast forwarding table. Based on the (*, G1) or (*, G2) entry, the router forwards the multicast data to the local subnet. Then, the receivers on the subnet can receive the data.

IGMPv1 does not define a leave group message (often called a "leave message"). When an IGMPv1 host is leaving a multicast group, it stops sending reports to that multicast group. If the subnet has no members for a multicast group, the IGMP routers will not receive any report addressed to that multicast group. In this case, the routers clear the information for that multicast group after a period of time.
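The query/response and report suppression behavior described above can be sketched in Python. This is a simplified model with illustrative host names and delay values, not an implementation of the protocol: each member host picks a delay timer, the host whose timer expires first sends a report, and other members of the same group suppress their own reports.

```python
def respond_to_query(members):
    """Simulate responses to one IGMP query.

    members: list of (host, group, delay) tuples.
    Returns the reports actually sent, in the order they occur.
    """
    reports = []
    reported_groups = set()
    # Hosts answer in order of their delay timers
    for host, group, delay in sorted(members, key=lambda m: m[2]):
        if group in reported_groups:
            continue                    # suppress: group already reported
        reports.append((host, group))
        reported_groups.add(group)
    return reports

members = [
    ("Host A", "G2", 3.0),
    ("Host B", "G1", 1.0),   # expires first for G1, so it reports
    ("Host C", "G1", 2.0),   # hears Host B's report and suppresses its own
]
print(respond_to_query(members))
```

Only one report per group reaches the routers, which is all they need to know that the group has at least one member on the subnet.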

IGMPv2 enhancements

IGMPv2 is backward compatible with IGMPv1 and introduces a querier election mechanism and a leave-group mechanism.

Querier election mechanism

In IGMPv1, the DR elected by the Layer 3 multicast routing protocol (such as PIM) serves as the querier among multiple routers that run IGMP on the same subnet.

IGMPv2 introduced an independent querier election mechanism. The querier election process is as follows:

1.        Initially, every IGMPv2 router assumes itself to be the querier and sends IGMP general query messages (often called "general queries") to all hosts and routers on the local subnet. The destination address is 224.0.0.1.

2.        After receiving a general query, each IGMPv2 router compares the source IP address of the query with its own interface address. The router with the lowest IP address becomes the querier, and all other IGMPv2 routers become non-queriers.

3.        All the non-queriers start a timer, known as an "other querier present timer." If a router receives an IGMP query from the querier before the timer expires, it resets this timer. Otherwise, it considers the querier to have timed out and initiates a new querier election process.
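The election rule above reduces to a numeric comparison of interface addresses. The following Python sketch (with illustrative addresses) shows that the numerically lowest address wins; note that a plain string comparison would get "10.110.2.10" versus "10.110.2.2" wrong, which is why the addresses are compared as IP addresses:

```python
from ipaddress import ip_address

def elect_querier(router_addrs):
    """Return the interface address that wins the IGMPv2 querier election
    (the numerically lowest IP address on the subnet)."""
    return min(router_addrs, key=ip_address)

routers = ["10.110.2.1", "10.110.2.2", "10.110.2.10"]
print(elect_querier(routers))   # → 10.110.2.1
```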

"Leave group" mechanism

In IGMPv1, when a host leaves a multicast group, it does not send any notification to the multicast routers. The routers can determine that a group no longer has members only after the group membership times out, which adds to the leave latency.

In IGMPv2, when a host leaves a multicast group, the following process occurs:

1.        The host sends a leave message (with the destination of 224.0.0.2) to all routers on the local subnet.

2.        After receiving the leave message, the querier sends a configurable number of group-specific queries to the group that the host is leaving. Both the destination address field and the group address field of the message are the address of the multicast group that is being queried.

3.        One of the remaining members (if any on the subnet) of the group should send a membership report within the maximum response time advertised in the query messages.

4.        If the querier receives a membership report for the group before the maximum response time expires, it maintains the memberships for the group. Otherwise, the querier assumes that the local subnet has no member hosts for the group and stops maintaining the memberships for the group.
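The querier's side of this leave processing can be sketched as follows. This is a simplified Python model: the query count and group names are illustrative, and the real response timers are replaced by a precomputed set of groups for which a report arrives in time.

```python
def process_leave(group, queries_to_send, reports_heard):
    """Simulate querier behavior after receiving a leave message.

    reports_heard: the set of groups for which a membership report
    arrives within the maximum response time.
    """
    for _ in range(queries_to_send):        # send group-specific queries
        if group in reports_heard:
            return "membership maintained"  # a member answered in time
    return "group state removed"            # no member answered any query

# A remaining member answers the query, so state for G1 is kept
print(process_leave("G1", queries_to_send=2, reports_heard={"G1"}))
# No member answers for G2, so the querier stops maintaining it
print(process_leave("G2", queries_to_send=2, reports_heard=set()))
```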

IGMPv3 enhancements

IGMPv3 is based on and compatible with IGMPv1 and IGMPv2. It provides hosts with enhanced control over multicast sources and enhances the query and report messages.

Enhancements in control capability of hosts

IGMPv3 introduced two source filtering modes (Include and Exclude). These modes allow a host to join a designated multicast group and to choose whether to receive or reject multicast data from a designated multicast source. When a host joins a multicast group, one of the following occurs:

·          If the host expects to receive multicast data from specific sources like S1, S2, …, it sends a report with the Filter-Mode denoted as "Include Sources (S1, S2, …)."

·          If the host expects to reject multicast data from specific sources like S1, S2, …, it sends a report with the Filter-Mode denoted as "Exclude Sources (S1, S2, …)."

As shown in Figure 21, the network has two multicast sources, Source 1 (S1) and Source 2 (S2). Both of them can send multicast data to the multicast group G. Host B is interested in the multicast data that Source 1 sends to G but not in the data from Source 2.

Figure 21 Flow paths of source-and-group-specific multicast traffic

 

In IGMPv1 or IGMPv2, Host B cannot select multicast sources when it joins the multicast group G. Multicast streams from both Source 1 and Source 2 flow to Host B whether or not it needs them.

When IGMPv3 runs between the hosts and routers, Host B can explicitly express that it needs to receive the multicast data that Source 1 sends to the multicast group G (denoted as (S1, G)). It also can explicitly express that it does not want to receive the multicast data that Source 2 sends to multicast group G (denoted as (S2, G)). Finally, only multicast data from Source 1 is delivered to Host B.

Enhancements in query and report capabilities

·          Query message carrying the source addresses

Compatible with IGMPv1 and IGMPv2, IGMPv3 supports general queries and group-specific queries. In addition, it introduces group-and-source-specific queries.

?  A general query does not carry a group address or a source address.

?  A group-specific query carries a group address, but no source address.

?  A group-and-source-specific query carries a group address and one or more source addresses.

·          Reports containing multiple group records

Unlike an IGMPv1 or IGMPv2 report message, an IGMPv3 report message is destined to 224.0.0.22 and contains one or more group records. Each group record contains a multicast group address and a multicast source address list.

Group records include the following categories:

?  IS_IN—The source filtering mode is Include. The report sender requests the multicast data from only the sources defined in the multicast source address list.

?  IS_EX—The source filtering mode is Exclude. The report sender requests the multicast data from any sources except those defined in the multicast source address list.

?  TO_IN—The filtering mode has changed from Exclude to Include.

?  TO_EX—The filtering mode has changed from Include to Exclude.

?  ALLOW—The Source Address field contains a list of additional sources from which the receiver wants to obtain data. If the current filtering mode is Include, these sources are added to the INCLUDE source address list. If the current filtering mode is Exclude, these sources are deleted from the EXCLUDE source address list.

?  BLOCK—The Source Address field contains a list of the sources from which the receiver no longer wants to obtain data. If the current filtering mode is Include, these sources are deleted from the INCLUDE source address list. If the current filtering mode is Exclude, these sources are added to the EXCLUDE source address list.
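The effect of these group records on a router's per-group source filter can be sketched in Python. This is a simplified model of the set arithmetic described above only; the state layout (a mode string plus a source set) is an illustrative assumption, not the protocol's actual data structures.

```python
def apply_record(mode, sources, record_type, record_sources):
    """Apply one IGMPv3 group record to a (filter mode, source set) state."""
    sources = set(sources)
    if record_type == "ALLOW":
        # Include: add to the source list; Exclude: remove from the excluded list
        sources = sources | record_sources if mode == "include" else sources - record_sources
    elif record_type == "BLOCK":
        # Include: remove from the source list; Exclude: add to the excluded list
        sources = sources - record_sources if mode == "include" else sources | record_sources
    elif record_type in ("TO_IN", "IS_IN"):
        mode, sources = "include", set(record_sources)
    elif record_type in ("TO_EX", "IS_EX"):
        mode, sources = "exclude", set(record_sources)
    return mode, sources

state = apply_record("include", {"S1"}, "ALLOW", {"S2"})
print(state)          # Include mode now lists both S1 and S2
state = apply_record(*state, "BLOCK", {"S1"})
print(state)          # S1 removed from the Include source list
```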

IGMP support for VPNs

IGMP maintains group memberships on a per-interface basis. After receiving an IGMP message on an interface, IGMP processes the message within the VPN to which the interface belongs. IGMP only communicates with other multicast protocols within the same VPN instance.

Protocols and standards

·          RFC 1112, Host Extensions for IP Multicasting

·          RFC 2236, Internet Group Management Protocol, Version 2

·          RFC 3376, Internet Group Management Protocol, Version 3

IGMP configuration task list

Tasks at a glance

Configuring basic IGMP features:

·         (Required.) Enabling IGMP

·         (Optional.) Specifying an IGMP version

·         (Optional.) Configuring an interface as a static member interface

·         (Optional.) Configuring a multicast group policy

Adjusting IGMP performance:

(Optional.) Enabling fast-leave processing

 

Configuring basic IGMP features

Before you configure basic IGMP features, complete the following tasks:

·          Configure any unicast routing protocol so that all devices are interoperable at the network layer.

·          Configure PIM.

·          Determine the IGMP version.

·          Determine the multicast group and multicast source addresses for static group member configuration.

·          Determine the ACL used in the multicast group policy.

Enabling IGMP

To configure IGMP, enable IGMP on the interface where the multicast group memberships are established and maintained.

To enable IGMP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IP multicast routing and enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

By default, IP multicast is disabled.

3.       Return to system view.

quit

N/A

4.       Enter interface view.

interface interface-type interface-number

N/A

5.       Enable IGMP.

igmp enable

By default, IGMP is disabled.

 

Specifying an IGMP version

Because the protocol packets of different IGMP versions are different in structures and types, you must specify the same IGMP version for all routers on the same subnet. Otherwise, IGMP cannot operate correctly.

To specify an IGMP version:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Specify an IGMP version.

igmp version version-number

The default setting is IGMPv2.

 

Configuring an interface as a static member interface

You can configure an interface as a static member of a multicast group. Then, the interface can receive multicast data addressed to that multicast group.

When you complete or cancel this configuration on an interface, the interface does not send an unsolicited IGMP report or leave message. This is because the interface is not a real member host of the multicast group.

A static group member does not respond to IGMP queries.

The interface to be configured as a static member interface has the following restrictions:

·          If the interface is IGMP and PIM-SM enabled, it must be a PIM-SM DR.

·          If the interface is IGMP enabled but not PIM-SM enabled, it must be an IGMP querier.

For more information about PIM-SM and DR, see "Configuring PIM."

To configure an interface as a static member interface:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the interface as a static member interface.

igmp static-group group-address [ source source-address ]

By default, an interface is not a static member of any multicast group or multicast source and group.

 

Configuring a multicast group policy

This feature enables an interface to filter IGMP reports by using an ACL that specifies multicast groups and the optional sources. It is used to control the multicast groups that the hosts attached to an interface can join.

This feature does not take effect on static member interfaces because static member interfaces do not send IGMP reports.

To configure a multicast group policy:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure a multicast group policy.

igmp group-policy acl-number [ version-number ]

By default, no multicast group policy exists. Hosts attached to the interface can join any multicast groups.

 

Adjusting IGMP performance

Before adjusting IGMP performance, complete the following tasks:

·          Configure any unicast routing protocol so that all devices are interoperable at the network layer.

·          Configure basic IGMP features.

Enabling fast-leave processing

This feature enables the IGMP querier to send a leave notification directly to the upstream without sending IGMP group-specific queries or IGMP group-and-source-specific queries after receiving a leave message. This reduces leave latency and preserves the network bandwidth.

The fast-leave processing configuration is effective only when the device runs IGMPv2 or IGMPv3.

To enable fast-leave processing:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Enable fast-leave processing.

igmp fast-leave [ group-policy acl-number ]

By default, fast-leave processing is disabled.

 

Displaying and maintaining IGMP

CAUTION:

The reset igmp group command might cause multicast data transmission failures.

 

Execute display commands in any view and reset commands in user view.

 

Task

Command

Display IGMP group information.

display igmp [ vpn-instance vpn-instance-name ] group [ group-address | interface interface-type interface-number ] [ static | verbose ]

Display IGMP information.

display igmp [ vpn-instance vpn-instance-name ] interface [ interface-type interface-number ] [ host ] [ verbose ]

Clear dynamic IGMP group entries.

reset igmp [ vpn-instance vpn-instance-name ] group { all | interface interface-type interface-number { all | group-address [ mask { mask | mask-length } ] [ source-address [ mask { mask | mask-length } ] ] } }

 

IGMP configuration examples

Network requirements

As shown in Figure 22:

·          VOD streams are sent to receiver hosts in multicast.

·          Receiver hosts of different organizations form stub networks N1 and N2. Host A and Host C are receiver hosts in N1 and N2, respectively.

·          IGMPv2 runs between Switch A and N1, and between the other two switches and N2.

·          Switch A acts as the IGMP querier in N1. Switch B acts as the IGMP querier in N2 because it has a lower IP address.

Configure the switches to achieve the following goals:

·          The hosts in N1 join only the multicast group 224.1.1.1.

·          The hosts in N2 can join any multicast groups.

Figure 22 Network diagram

 

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 22. (Details not shown.)

2.        Configure OSPF on the PIM network to make sure the following conditions are met: (Details not shown.)

?  The switches are interoperable at the network layer.

?  The switches can update their routing information.

3.        Enable IP multicast routing, and enable IGMP and PIM-DM:

# On Switch A, enable IP multicast routing.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 100.

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] quit

# Enable PIM-DM on VLAN-interface 101.

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim dm

[SwitchA-Vlan-interface101] quit

# On Switch B, enable IP multicast routing.

<SwitchB> system-view

[SwitchB] multicast routing

[SwitchB-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 200.

[SwitchB] interface vlan-interface 200

[SwitchB-Vlan-interface200] igmp enable

[SwitchB-Vlan-interface200] quit

# Enable PIM-DM on VLAN-interface 201.

[SwitchB] interface vlan-interface 201

[SwitchB-Vlan-interface201] pim dm

[SwitchB-Vlan-interface201] quit

# On Switch C, enable IP multicast routing.

<SwitchC> system-view

[SwitchC] multicast routing

[SwitchC-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 200.

[SwitchC] interface vlan-interface 200

[SwitchC-Vlan-interface200] igmp enable

[SwitchC-Vlan-interface200] quit

# Enable PIM-DM on VLAN-interface 202.

[SwitchC] interface vlan-interface 202

[SwitchC-Vlan-interface202] pim dm

[SwitchC-Vlan-interface202] quit

4.        Configure a multicast group policy on Switch A so that the hosts connected to VLAN-interface 100 can join the multicast group 224.1.1.1 only.

[SwitchA] acl number 2001

[SwitchA-acl-basic-2001] rule permit source 224.1.1.1 0

[SwitchA-acl-basic-2001] quit

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp group-policy 2001

[SwitchA-Vlan-interface100] quit

Verifying the configuration

# Display IGMP information on VLAN-interface 200 of Switch B.

[SwitchB] display igmp interface vlan-interface 200

 Vlan-interface200(10.110.2.1):

   IGMP is enabled.

   IGMP version: 2

   Query interval for IGMP: 125s

   Other querier present time for IGMP: 255s

   Maximum query response time for IGMP: 10s

   Querier for IGMP: 10.110.2.1 (This router)

  IGMP groups reported in total: 1

Troubleshooting IGMP

No membership information on the receiver-side router

Symptom

When a host sends a report for joining the multicast group G, no membership information of the multicast group G exists on the router closest to that host.

Solution

To resolve the problem:

1.        Use the display igmp interface command to verify that the networking, interface connection, and IP address configuration are correct.

2.        Use the display current-configuration command to verify that multicast routing is enabled. If it is not enabled, use the multicast routing command in system view to enable IP multicast routing. In addition, verify that IGMP is enabled on the associated interfaces.

3.        Use the display igmp interface command to check whether the IGMP version on the interface is lower than that on the host. If it is, configure the interface to run an IGMP version not lower than the host's version.

4.        Use the display current-configuration interface command to verify that no ACL rule has been configured to filter out the reports for the multicast group G.

5.        If the problem persists, contact H3C Support.

Inconsistent membership information on the routers on the same subnet

Symptom

Different memberships are maintained on different IGMP routers on the same subnet.

Solution

To resolve the problem:

1.        Use the display current-configuration command to verify the IGMP information on the interfaces.

2.        Use the display igmp interface command on all routers on the same subnet to verify that the IGMP-related timer settings are consistent on all the routers.

3.        Use the display igmp interface command to verify that all the routers on the same subnet are running the same IGMP version.

4.        If the problem persists, contact H3C Support.


Configuring PIM

Overview

Protocol Independent Multicast (PIM) provides IP multicast forwarding by leveraging unicast static routes or the unicast routing table generated by any unicast routing protocol, such as RIP, OSPF, IS-IS, or BGP. PIM does not depend on any particular unicast routing protocol; it uses the existing unicast routing information to perform RPF checks and establish multicast routes.

PIM uses the RPF mechanism to implement multicast forwarding. When a multicast packet arrives on an interface of the device, it undergoes an RPF check. If the RPF check succeeds, the device creates a multicast routing entry and forwards the packet. If the RPF check fails, the device discards the packet. For more information about RPF, see "Configuring multicast routing and forwarding."
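The RPF check can be sketched in Python. The routing table and interface names below are illustrative assumptions, and real lookups use longest-prefix matching and route preference, which this simplified model omits: a packet is accepted only if it arrives on the interface the unicast routing table uses to reach the source.

```python
from ipaddress import ip_address, ip_network

# Illustrative unicast routing table: prefix -> interface toward the source
UNICAST_ROUTES = {
    "50.1.1.0/24": "Vlan-interface102",
}

def rpf_check(source, arrival_interface):
    """Return True if the packet arrived on the RPF interface for the source."""
    for prefix, iface in UNICAST_ROUTES.items():
        if ip_address(source) in ip_network(prefix):
            return iface == arrival_interface
    return False   # no route to the source: the RPF check fails

print(rpf_check("50.1.1.100", "Vlan-interface102"))  # True: create entry, forward
print(rpf_check("50.1.1.100", "Vlan-interface101"))  # False: discard the packet
```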

Based on the implementation mechanism, PIM includes the following categories:

·          Protocol Independent Multicast–Dense Mode (PIM-DM)

·          Protocol Independent Multicast–Sparse Mode (PIM-SM)

·          Protocol Independent Multicast Source-Specific Multicast (PIM-SSM)

The term "PIM domain" in this chapter refers to a network composed of PIM routers.

The term "interface" in this chapter collectively refers to VLAN interfaces and Layer 3 Ethernet interfaces. You can set an Ethernet port as a Layer 3 interface by using the port link-mode route command (see Layer 2—LAN Switching Configuration Guide).

PIM-DM overview

PIM-DM uses the push mode for multicast forwarding, and is suitable for small-sized networks with densely distributed multicast members.

The following describes the basic implementation of PIM-DM:

·          PIM-DM assumes that all downstream nodes want to receive multicast data from a source, so multicast data is flooded to all downstream nodes on the network.

·          Branches without downstream receivers are pruned from the forwarding trees, leaving only those branches that contain receivers.

·          The pruned state of a branch has a finite holdtime timer. When the timer expires, multicast data is again forwarded to the pruned branch. This flood-and-prune cycle takes place periodically to maintain the forwarding branches.

·          The graft mechanism is used to reduce the latency for resuming the forwarding capability of a previously pruned branch.

In PIM-DM, the multicast forwarding paths for a multicast group constitute a source tree, which is rooted at the multicast source and has multicast group members as its "leaves." Because the source tree consists of the shortest paths from the multicast source to the receivers, it is also called a "shortest path tree (SPT)."

Neighbor discovery

In a PIM domain, each PIM-enabled interface on a router periodically multicasts PIM hello messages to all PIM routers (identified by the address 224.0.0.13) on the local subnet. By exchanging hello messages, the PIM routers on the subnet discover their PIM neighbors, maintain PIM neighbor relationships, and build and maintain SPTs.

SPT building

The process of building an SPT is the flood-and-prune process:

1.        In a PIM-DM domain, when the multicast source S sends multicast data to the multicast group G, the multicast data is flooded throughout the domain. A router performs an RPF check for the multicast data. If the RPF check succeeds, the router creates an (S, G) entry and forwards the data to all downstream nodes in the network. In the flooding process, all the routers in the PIM-DM domain create the (S, G) entry.

2.        The nodes without downstream receivers are pruned. A router that has no downstream receivers sends a prune message to the upstream node to remove the interface that receives the prune message from the (S, G) entry. In this way, the upstream node stops forwarding subsequent packets addressed to that multicast group down to this node.

 

 

NOTE:

An (S, G) entry contains a multicast source address S, a multicast group address G, an outgoing interface list, and an incoming interface.

 

A prune process is initiated by a leaf router. As shown in Figure 23, the router interface that does not have any downstream receivers initiates a prune process by sending a prune message toward the multicast source. This prune process goes on until only necessary branches are left in the PIM-DM domain, and these necessary branches constitute an SPT.

Figure 23 SPT building

 

The pruned state of a branch has a finite holdtime timer. When the timer expires, multicast data is again forwarded to the pruned branch. The flood-and-prune cycle takes place periodically to maintain the forwarding branches.

Graft

A previously pruned branch might have new downstream receivers. To reduce the latency for resuming the forwarding capability of this branch, a graft mechanism is used as follows:

1.        The node that needs to receive the multicast data sends a graft message to its upstream node, telling it to rejoin the SPT.

2.        After receiving this graft message on an interface, the upstream node adds the receiving interface into the outgoing interface list of the (S, G) entry. It also sends a graft-ack message to the graft sender.

3.        If the graft sender receives a graft-ack message, the graft process finishes. Otherwise, the graft sender continues to send graft messages at a graft retry interval until it receives an acknowledgment from its upstream node.

Assert

On a subnet with more than one multicast router, the assert mechanism shuts off duplicate multicast flows to the network. It does this by electing a unique multicast forwarder for the subnet.

Figure 24 Assert mechanism

 

As shown in Figure 24, after Router A and Router B receive an (S, G) packet from the upstream node, they both forward the packet to the local subnet. As a result, the downstream node Router C receives two identical multicast packets. Both Router A and Router B, on their own downstream interfaces, receive a duplicate packet forwarded by the other. After detecting this condition, both routers send an assert message to all PIM routers (224.0.0.13) on the local subnet through the interface that received the packet. The assert message contains the multicast source address (S), the multicast group address (G), and the metric preference and metric of the unicast route/static multicast route to the multicast source. By comparing these parameters, either Router A or Router B becomes the unique forwarder of the subsequent (S, G) packets on the shared-media LAN. The comparison process is as follows:

1.        The router with a higher metric preference to the multicast source wins.

2.        If both routers have the same metric preference to the source, the router with a smaller metric wins.

3.        If both routers have the same metric, the router with a higher IP address on the downstream interface wins.

PIM-SM overview

PIM-DM uses the flood-and-prune cycles to build SPTs for multicast data forwarding. Although an SPT has the shortest paths from the multicast source to the receivers, the periodic flood-and-prune process is inefficient, so PIM-DM is not suitable for large- and medium-sized networks.

PIM-SM uses the pull mode for multicast forwarding, and it is suitable for large- and medium-sized networks with sparsely and widely distributed multicast group members.

The basic implementation of PIM-SM is as follows:

·          PIM-SM assumes that no hosts need multicast data. In the PIM-SM mode, a host must express its interest in the multicast data for a multicast group before the data is forwarded to it. PIM-SM implements multicast forwarding by building and maintaining rendezvous point trees (RPTs). An RPT is rooted at a router that has been configured as the rendezvous point (RP) for a multicast group. The multicast data to the group is forwarded by the RP to the receivers along the RPT.

·          When a receiver host joins a multicast group, the receiver-side designated router (DR) sends a join message to the RP for the multicast group. The path along which the message goes hop by hop to the RP forms a branch of the RPT.

·          When a multicast source sends multicast data to a multicast group, the source-side DR must register the multicast source with the RP by unicasting register messages to the RP. The DR keeps sending register messages until it receives a register-stop message from the RP. When the RP receives a register message, it triggers the establishment of an SPT. Then, the multicast source sends subsequent multicast packets along the SPT to the RP. After reaching the RP, the multicast packets are duplicated and delivered to the receivers along the RPT.

Multicast data is replicated wherever the RPT branches, and this process automatically repeats until the multicast data reaches the receivers.

Neighbor discovery

PIM-SM uses the same neighbor discovery mechanism as PIM-DM does. For more information, see "Neighbor discovery."

DR election

On a shared-media LAN like Ethernet, only a DR forwards the multicast data. A DR is required in both the source-side network and receiver-side network. A source-side DR acts on behalf of the multicast source to send register messages to the RP. The receiver-side DR acts on behalf of the receiver hosts to send join messages to the RP.

PIM-DM does not require a DR. However, if IGMPv1 runs on any shared-media LAN in a PIM-DM domain, a DR must be elected to act as the IGMPv1 querier for the LAN. For more information about IGMP, see "Configuring IGMP."

 

IMPORTANT:

IGMP must be enabled on the device that acts as the receiver-side DR. Otherwise, the receiver hosts attached to the DR cannot join any multicast groups.

 

Figure 25 DR election

 

As shown in Figure 25, the DR election process is as follows:

1.        The routers on the shared-media LAN send hello messages to one another. The hello messages contain the priority for DR election. The router with the highest DR priority is elected as the DR.

2.        The router with the highest IP address wins the DR election under one of the following conditions:

-  All the routers have the same DR election priority.

-  A router does not support carrying the DR election priority in hello messages.

If the DR fails, its PIM neighbor lifetime expires on the other routers, and the other routers initiate a new DR election.

RP discovery

An RP is the core of a PIM-SM domain. For a small, simple network, one RP is enough for multicast forwarding throughout the network. In this case, you can specify a static RP on each router in the PIM-SM domain. However, in a PIM-SM network that covers a wide area, a huge amount of multicast data is forwarded by the RP. To lessen the RP burden and optimize the topological structure of the RPT, you can configure multiple candidate-RPs (C-RPs) in a PIM-SM domain. The bootstrap mechanism dynamically elects RPs from the C-RPs, and each elected RP provides services for a different multicast group range. For this purpose, you must configure a bootstrap router (BSR). A BSR serves as the administrative core of a PIM-SM domain. A PIM-SM domain has only one BSR but can have multiple candidate-BSRs (C-BSRs). If the BSR fails, a new BSR is automatically elected from the C-BSRs to avoid service interruption.

 

 

NOTE:

·      An RP can provide services for multiple multicast groups, but a multicast group only uses one RP.

·      A device can act as a C-RP and a C-BSR at the same time.

 

As shown in Figure 26, each C-RP periodically unicasts its advertisement messages (C-RP-Adv messages) to the BSR. An advertisement message contains the address of the advertising C-RP and the multicast group range to which it is designated. The BSR collects these advertisement messages and organizes the C-RP information into an RP-set, which is a database of mappings between multicast groups and RPs. The BSR encapsulates the RP-set information in the bootstrap messages (BSMs) and floods the BSMs to the entire PIM-SM domain.

Figure 26 Information exchange between C-RPs and BSR

 

Based on the information in the RP-set, each router in the network selects an RP for a specific multicast group according to the following rules:

1.        The C-RP designated to the smallest group range wins.

2.        If the C-RPs are designated to the same group range, the C-RP with the highest priority wins.

3.        If the C-RPs have the same priority, the C-RP with the largest hash value wins. The hash value is calculated from the group address and the C-RP address through the hash algorithm.

4.        If the C-RPs have the same hash value, the C-RP with the highest IP address wins.

RPT building

Figure 27 RPT building in a PIM-SM domain

 

As shown in Figure 27, the process of building an RPT is as follows:

1.        When a receiver wants to join the multicast group G, it uses an IGMP message to inform the receiver-side DR.

2.        After getting the receiver information, the DR sends a join message, which is forwarded hop by hop to the RP for the multicast group.

3.        The routers along the path from the DR to the RP form an RPT branch. Each router on this branch adds to its forwarding table a (*, G) entry, where the asterisk (*) represents any multicast source. The RP is the root of the RPT, and the DR is a leaf of the RPT.

When the multicast data addressed to the multicast group G reaches the RP, the RP forwards the data to the DR along the established RPT, and finally to the receiver.

When a receiver is no longer interested in the multicast data addressed to the multicast group G, the receiver-side DR sends a prune message. The prune message goes hop by hop along the RPT to the RP. After receiving the prune message, the upstream node deletes the interface that connects to this downstream node from the outgoing interface list. It also determines whether it has receivers for that multicast group. If not, the router continues to forward the prune message to its upstream router.

Multicast source registration

The multicast source uses the registration process to inform an RP of its presence.

Figure 28 Multicast source registration

 

As shown in Figure 28, the multicast source registers with the RP as follows:

1.        The multicast source S sends the first multicast packet to the multicast group G. When receiving the multicast packet, the source-side DR encapsulates the packet in a PIM register message and unicasts the message to the RP.

2.        After the RP receives the register message, it decapsulates the message and forwards the multicast data down the RPT. Meanwhile, it sends an (S, G) source-specific join message toward the multicast source. The routers along the path from the RP to the multicast source constitute an SPT branch. Each router on this branch creates an (S, G) entry in its forwarding table.

3.        The subsequent multicast data from the multicast source is forwarded to the RP along the established SPT. When the multicast data reaches the RP along the SPT, the RP forwards the data to the receivers along the RPT. It also unicasts a register-stop message to the source-side DR to prevent the DR from unnecessarily encapsulating subsequent data.

Switchover to SPT

In a PIM-SM domain, only one RP and one RPT provide services for a specific multicast group. Before the switchover to SPT occurs, the source-side DR encapsulates all multicast data addressed to the multicast group in register messages and sends them to the RP. After receiving these register messages, the RP decapsulates them and forwards the multicast data to the receiver-side DR along the RPT.

Multicast forwarding along the RPT has the following weaknesses:

·          Encapsulation and decapsulation are complex on the source-side DR and the RP.

·          The path for a multicast packet might not be the shortest one.

·          The RP might be overloaded by multicast traffic bursts.

To eliminate these weaknesses, PIM-SM allows an RP or the receiver-side DR to initiate the switchover to SPT.

·          The RP initiates the switchover to SPT:

After receiving the first (S, G) multicast packet, the RP sends an (S, G) source-specific join message toward the multicast source immediately. The routers along the path from the RP to the multicast source constitute an SPT. The subsequent multicast packets are forwarded to the RP along the SPT without being encapsulated into register messages.

For more information about the switchover to SPT initiated by the RP, see "Multicast source registration."

·          The receiver-side DR initiates the switchover to SPT:

After receiving the first (S, G) multicast packet, the receiver-side DR initiates the switchover to SPT immediately, as follows:

a.    The receiver-side DR sends an (S, G) source-specific join message toward the multicast source. The routers along the path create an (S, G) entry in their forwarding table to constitute an SPT branch.

b.    When the multicast packets reach the router where the RPT and the SPT diverge, the router drops the multicast packets that travel along the RPT. It then sends a prune message with the RP bit toward the RP.

c.    After receiving the prune message, the RP forwards it toward the multicast source (assuming that only one receiver exists). Thus, the switchover to SPT is completed. The subsequent multicast packets travel along the SPT from the multicast source to the receiver hosts.

With the switchover to SPT, PIM-SM builds SPTs more economically than PIM-DM does.

Assert

PIM-SM uses the same assert mechanism as PIM-DM does. For more information, see "Assert."

Administrative scoping overview

Typically, a PIM-SM domain contains only one BSR, which is responsible for advertising RP-set information within the entire PIM-SM domain. The information about all multicast groups is forwarded within the network that the BSR administers. This is called the "non-scoped BSR mechanism."

Administrative scoping mechanism

To implement refined management, you can divide a PIM-SM domain into a global-scoped zone and multiple administratively-scoped zones (admin-scoped zones). This is called the "administrative scoping mechanism."

The administrative scoping mechanism effectively releases stress on the management in a single-BSR domain and enables provision of zone-specific services through private group addresses.

Admin-scoped zones are divided for multicast groups. Zone border routers (ZBRs) form the boundary of an admin-scoped zone. Each admin-scoped zone maintains one BSR for multicast groups within a specific range. Multicast protocol packets, such as assert messages and BSMs, for a specific group range cannot cross the boundary of the admin-scoped zone for the group range. Multicast group ranges that are associated with different admin-scoped zones can have intersections. However, the multicast groups in an admin-scoped zone are valid only within the local zone, and these multicast groups are regarded as private group addresses.

The global-scoped zone maintains a BSR for the multicast groups that do not belong to any admin-scoped zones.

Relationship between admin-scoped zones and the global-scoped zone

The global-scoped zone and each admin-scoped zone have their own C-RPs and BSRs. These devices are effective only in their respective zones, and the BSR election and the RP election are implemented independently. Each admin-scoped zone has its own boundary. The multicast information within a zone cannot cross this boundary in either direction. You can understand the relationship between the global-scoped zone and admin-scoped zones in terms of geographical locations and multicast group address ranges.

·          In view of geographical locations:

An admin-scoped zone is a logical zone for particular multicast groups. The multicast packets for such multicast groups are confined within the local admin-scoped zone and cannot cross the boundary of the zone.

Figure 29 Relationship in view of geographical locations

 

As shown in Figure 29, for the multicast groups in a specific group address range, the admin-scoped zones must be geographically separated and isolated. A router cannot belong to multiple admin-scoped zones. An admin-scoped zone contains routers that are different from other admin-scoped zones. However, the global-scoped zone includes all routers in the PIM-SM domain. Multicast packets that do not belong to any admin-scoped zones are forwarded in the entire PIM-SM domain.

·          In view of multicast group address ranges:

Each admin-scoped zone is designated to specific multicast groups, of which the multicast group addresses are valid only within the local zone. The multicast groups of different admin-scoped zones might have intersections. All the multicast groups other than those of the admin-scoped zones use the global-scoped zone.

Figure 30 Relationship in view of multicast group address ranges

 

As shown in Figure 30, the admin-scoped zones 1 and 2 have no intersection, but the admin-scoped zone 3 is a subset of the admin-scoped zone 1. The global-scoped zone provides services for all the multicast groups that are not covered by the admin-scoped zones 1 and 2, G−G1−G2 in this case.

PIM-SSM overview

The ASM model includes PIM-DM and PIM-SM. The SSM model can be implemented by leveraging part of the PIM-SM technique. It is also called "PIM-SSM."

The SSM model provides a solution for source-specific multicast. It maintains the relationship between hosts and routers through IGMPv3.

In actual applications, parts of the IGMPv3 and PIM-SM techniques are adopted to implement the SSM model. In the SSM model, receivers already know the location of the multicast source. Therefore, no RP or RPT is required, and multicast sources do not need to register with an RP to be discovered in other PIM domains.

Neighbor discovery

PIM-SSM uses the same neighbor discovery mechanism as PIM-SM. For more information, see "Neighbor discovery."

DR election

PIM-SSM uses the same DR election mechanism as PIM-SM. For more information, see "DR election."

SPT building

The decision to build an RPT for PIM-SM or an SPT for PIM-SSM depends on whether the multicast group that the multicast receiver joins is in the SSM group range. The SSM group range reserved by IANA is 232.0.0.0/8.

Figure 31 SPT building in PIM-SSM

 

As shown in Figure 31, Host B and Host C are receivers. They send IGMPv3 report messages to their DRs to express their interest in the multicast information that the multicast source S sends to the multicast group G.

After receiving a report message, the DR first checks whether the group address in this message is in the SSM group range and does the following:

·          If the group address is in the SSM group range, the PIM-SSM service is provided.

The DR sends a subscribe message toward the multicast source. All routers along the path from the DR to the source create an (S, G) entry to build an SPT. The SPT is rooted at the multicast source and has the receivers as its leaves.

·          If the group address is not in the SSM group range, the PIM-SM service is provided.

The receiver-side DR sends a (*, G) join message to the RP, and the multicast source registers with the source-side DR.

In PIM-SSM, the term "channel" refers to a multicast group, and the term "subscribe message" refers to a join message.

PIM support for VPNs

To support PIM for VPNs, a multicast router that runs PIM maintains an independent PIM neighbor table, multicast routing table, BSR information, and RP-set information for each VPN.

After receiving a multicast data packet, the multicast router checks to which VPN the data packet belongs. Then, the router forwards the packet according to the multicast routing table for that VPN or creates a multicast routing entry for that VPN.
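For example, the following sketch enables multicast routing and enters PIM view for a VPN instance. The instance name vpn1 is a hypothetical assumption, and the VPN instance must already exist:

```
<Sysname> system-view
[Sysname] multicast routing vpn-instance vpn1
[Sysname-mrib-vpn1] quit
[Sysname] pim vpn-instance vpn1
```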

Protocols and standards

·          RFC 3973, Protocol Independent Multicast-Dense Mode (PIM-DM): Protocol Specification(Revised)

·          RFC 4601, Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol Specification (Revised)

·          RFC 5059, Bootstrap Router (BSR) Mechanism for Protocol Independent Multicast (PIM)

·          RFC 4607, Source-Specific Multicast for IP

·          Draft-ietf-ssm-overview-05, An Overview of Source-Specific Multicast (SSM)

Configuring PIM-DM

This section describes how to configure PIM-DM.

PIM-DM configuration task list

Tasks at a glance

(Required.) Enabling PIM-DM

(Optional.) Enabling the state refresh feature

(Optional.) Configuring state refresh parameters

(Optional.) Configuring PIM-DM graft retry timer

(Optional.) Configuring common PIM features

 

Configuration prerequisites

Before you configure PIM-DM, configure a unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Enabling PIM-DM

Enable IP multicast routing before you configure PIM.

With PIM-DM enabled on interfaces, routers can establish PIM neighbor relationships and process PIM messages from their PIM neighbors. As a best practice, enable PIM-DM on all non-border interfaces of the routers when you deploy a PIM-DM domain.

 

IMPORTANT:

All interfaces on the same device must operate in the same PIM mode within the public network or within the same VPN instance.

 

To enable PIM-DM:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IP multicast routing and enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

By default, IP multicast routing is disabled.

3.       Return to system view.

quit

N/A

4.       Enter interface view.

interface interface-type interface-number

N/A

5.       Enable PIM-DM.

pim dm

By default, PIM-DM is disabled.
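The steps above can be sketched as the following command sequence. VLAN-interface 100 is a hypothetical interface used only for illustration; substitute the non-border interfaces in your PIM-DM domain:

```
<Sysname> system-view
[Sysname] multicast routing
[Sysname-mrib] quit
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim dm
```

Repeat the interface-level configuration on every non-border interface of each router in the domain.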

 

Enabling the state refresh feature

In a PIM-DM domain, this feature enables the PIM router that is directly connected to the multicast source to periodically send state refresh messages. It also enables the other PIM routers to refresh their pruned state timers after receiving the state refresh messages, which prevents the pruned interfaces from resuming multicast forwarding. You must enable this feature on all PIM routers on a subnet.

To enable the state refresh feature on all routers in PIM-DM domain:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Enable the state refresh feature.

pim state-refresh-capable

By default, the state refresh feature is enabled.
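As a sketch, the following re-enables the feature on a hypothetical VLAN-interface 100. Because the feature is enabled by default, this is needed only if it was previously disabled:

```
<Sysname> system-view
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim state-refresh-capable
```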

 

Configuring state refresh parameters

The router directly connected with the multicast source periodically sends state refresh messages. You can configure the interval for sending such messages on that router.

A router might receive duplicate state refresh messages within a short time. To prevent this situation, you can configure the amount of time that the router must wait before it accepts a new state refresh message. If the router receives a new state refresh message within the waiting time, it discards the message. If the timer expires, the router accepts a new state refresh message, refreshes its own PIM-DM state, and restarts the waiting timer.

The TTL value of a state refresh message decrements by 1 each time the message passes through a router before being forwarded to the downstream node. The message stops being forwarded when the TTL value reaches 0. On a small network, a state refresh message with a large TTL value might cycle through the network. To effectively control the propagation scope of state refresh messages, configure an appropriate TTL value based on the network size on the router directly connected to the multicast source.

To configure state refresh parameters:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure the interval to send state refresh messages.

state-refresh-interval interval

The default setting is 60 seconds.

4.       Configure the time to wait before receiving a new state refresh message.

state-refresh-rate-limit time

The default setting is 30 seconds.

5.       Configure the TTL value of state refresh messages.

state-refresh-ttl ttl-value

The default setting is 255.
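For example, the following sketch tunes all three parameters in PIM view for the public network. The values shown (90 seconds, 45 seconds, TTL 16) are arbitrary illustrations, not recommendations:

```
<Sysname> system-view
[Sysname] pim
[Sysname-pim] state-refresh-interval 90
[Sysname-pim] state-refresh-rate-limit 45
[Sysname-pim] state-refresh-ttl 16
```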

 

Configuring PIM-DM graft retry timer

To configure the graft retry timer:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the graft retry timer.

pim timer graft-retry interval

The default setting is 3 seconds.
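A minimal sketch, assuming a hypothetical VLAN-interface 100 and an illustrative 5-second retry interval:

```
<Sysname> system-view
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim timer graft-retry 5
```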

 

For more information about the configuration of other timers in PIM-DM, see "Configuring common PIM timers."

Configuring PIM-SM

This section describes how to configure PIM-SM.

PIM-SM configuration task list

Tasks at a glance

Remarks

(Required.) Enabling PIM-SM

N/A

(Required.) Configuring an RP:

·         Configuring a static RP

·         Configuring a C-RP

·         (Optional.) Enabling Auto-RP listening

You must configure a static RP, a C-RP, or both in a PIM-SM domain.

Configuring a BSR:

·         (Required.) Configuring a C-BSR

·         (Optional.) Configuring a PIM domain border

Skip the task of configuring a BSR in a network without C-RPs.

(Optional.) Disabling BSM semantic fragmentation

N/A

(Optional.) Configuring multicast source registration

N/A

(Optional.) Configuring the switchover to SPT

N/A

(Optional.) Configuring common PIM features

N/A

 

Configuration prerequisites

Before you configure PIM-SM, configure a unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Enabling PIM-SM

Enable IP multicast routing before you configure PIM.

With PIM-SM enabled on interfaces, routers can establish PIM neighbor relationships and process PIM messages from their PIM neighbors. As a best practice, enable PIM-SM on all non-border interfaces of routers when you deploy a PIM-SM domain.

 

IMPORTANT:

All interfaces on the same router must operate in the same PIM mode within the public network or within the same VPN instance.

 

To enable PIM-SM:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IP multicast routing and enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

By default, IP multicast routing is disabled.

3.       Return to system view.

quit

N/A

4.       Enter interface view.

interface interface-type interface-number

N/A

5.       Enable PIM-SM.

pim sm

By default, PIM-SM is disabled.
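The steps above can be sketched as follows, assuming a hypothetical VLAN-interface 100; repeat the interface-level configuration on every non-border interface:

```
<Sysname> system-view
[Sysname] multicast routing
[Sysname-mrib] quit
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim sm
```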

 

Configuring an RP

An RP can provide services for multiple or all multicast groups. However, only one RP can forward multicast traffic for a multicast group at a time.

An RP can be manually configured or dynamically elected through the BSR mechanism. For a large-scale PIM network, configuring static RPs is a tedious job. Generally, static RPs back up dynamic RPs to enhance the robustness and operational manageability of a multicast network.

Configuring a static RP

If only one dynamic RP exists on a network, you can configure a static RP to avoid communication interruption caused by single-point failures. The static RP also prevents frequent message exchange between C-RPs and the BSR for RP election.

The static RP configuration must be the same on all routers in the PIM-SM domain.

To configure a static RP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure a static RP for PIM-SM.

static-rp rp-address [ acl-number | preferred ] *

By default, no static RPs exist.
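A minimal sketch, assuming a hypothetical RP address of 10.1.1.1. The address must belong to an interface on the RP device, and the same configuration must be entered on every router in the PIM-SM domain:

```
<Sysname> system-view
[Sysname] pim
[Sysname-pim] static-rp 10.1.1.1
```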

 

Configuring a C-RP

IMPORTANT:

When you configure a C-RP, reserve a relatively large bandwidth between the C-RP and other devices in the PIM-SM domain.

 

In a PIM-SM domain, if you want a router to become the RP, you can configure the router as a C-RP. As a best practice, configure C-RPs on backbone routers.

The C-RPs periodically send advertisement messages to the BSR, which collects RP set information. You can configure the interval for sending the advertisement messages.

The holdtime option in C-RP advertisement messages defines the lifetime of the advertising C-RP. The BSR starts a holdtime timer for a C-RP after the BSR receives an advertisement message. If the BSR does not receive any advertisement message before the timer expires, it regards the C-RP as failed or unreachable.

A C-RP policy enables the BSR to filter C-RP advertisement messages by using an ACL that specifies the packet source addresses and multicast groups. It is used to guard against C-RP spoofing. You must configure the same C-RP policy on all C-BSRs in the PIM-SM domain.

To configure a C-RP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure a C-RP.

c-rp ip-address [ advertisement-interval adv-interval | group-policy acl-number | holdtime hold-time | priority priority ] *

By default, no C-RPs exist.

4.       (Optional.) Configure a C-RP policy.

crp-policy acl-number

By default, no C-RP policy exists.
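For example, the following sketch configures the device as a C-RP. The address 10.1.1.1 and the advertisement interval, holdtime, and priority values are assumptions for illustration, not recommendations:

```
<Sysname> system-view
[Sysname] pim
[Sysname-pim] c-rp 10.1.1.1 advertisement-interval 30 holdtime 90 priority 10
```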

 

Enabling Auto-RP listening


IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

This feature enables the router to receive Auto-RP announcement and discovery messages and learn RP information. The destination IP addresses for Auto-RP announcement and discovery messages are 224.0.1.39 and 224.0.1.40, respectively.

To enable Auto-RP listening:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Enable Auto-RP listening.

auto-rp enable

By default, Auto-RP listening is disabled.

 

Configuring a BSR

You must configure a BSR if C-RPs are configured to dynamically select the RP. You do not need to configure a BSR when you have configured only a static RP but no C-RPs.

A PIM-SM domain can have only one BSR, but must have a minimum of one C-BSR. Any router can be configured as a C-BSR. Elected from C-BSRs, the BSR is responsible for collecting and advertising RP information in the PIM-SM domain.

Configuring a C-BSR

The BSR election process is summarized as follows:

1.        Initially, each C-BSR regards itself as the BSR of the PIM-SM domain and sends BSMs to other routers in the domain.

2.        When a C-BSR receives a BSM from another C-BSR, it compares its own priority with the priority carried in the message. The C-BSR with the higher priority wins the BSR election. If the priorities are the same, the C-BSR with the higher IP address wins. The loser replaces its own BSR address with the winner's BSR address and no longer regards itself as the BSR. The winner retains its own BSR address and continues to regard itself as the BSR.

The elected BSR distributes the RP-set information collected from C-RPs to all routers in the PIM-SM domain. All routers use the same hash algorithm to get an RP for a specific multicast group.

A BSR policy enables a PIM-SM router to filter BSR messages by using an ACL that specifies the legal BSR addresses. It is used to guard against the following BSR spoofing cases:

·          Some maliciously configured hosts can forge BSMs to fool routers and change RP mappings. Such attacks often occur on border routers.

·          When an attacker controls a router on the network, the attacker can configure the router as a C-BSR to win the BSR election. Through this router, the attacker controls the advertising of RP information.

When you configure a C-BSR, follow these guidelines:

·          Configure C-BSRs on routers that are on the backbone network.

·          Reserve a relatively large bandwidth between the C-BSR and the other devices in the PIM-SM domain.

·          You must configure the same BSR policy on all routers in the PIM-SM domain. The BSR policy discards illegal BSR messages, but it only partially guards against BSR attacks on the network. If an attacker controls a legal BSR, the problem still exists.

To configure a C-BSR:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure a C-BSR.

c-bsr ip-address [ scope group-address { mask-length | mask } ] [ hash-length hash-length | priority priority ] *

By default, no C-BSRs exist.

4.       (Optional.) Configure a BSR policy.

bsr-policy acl-number

By default, no BSR policy exists.
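 

A minimal sketch of a C-BSR configuration with a BSR policy (the C-BSR address 10.1.1.1, priority 20, and ACL number 2002 are hypothetical values for illustration):

```
# Define the legal BSR address in a basic ACL.
<Sysname> system-view
[Sysname] acl number 2002
[Sysname-acl-basic-2002] rule permit source 10.1.1.1 0
[Sysname-acl-basic-2002] quit
# Configure the C-BSR and apply the BSR policy.
[Sysname] pim
[Sysname-pim] c-bsr 10.1.1.1 priority 20
[Sysname-pim] bsr-policy 2002
[Sysname-pim] quit
```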

 

Configuring a PIM domain border

A PIM domain border determines the transmission boundary of bootstrap messages. Bootstrap messages cannot cross the domain border in either direction. A number of PIM domain border interfaces partition a network into different PIM-SM domains.

To configure a PIM domain border:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure a PIM domain border.

pim bsr-boundary

By default, no PIM domain border exists.
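 

A minimal sketch of configuring a PIM domain border (the interface number is hypothetical):

```
# Prevent bootstrap messages from crossing VLAN-interface 101 in either direction.
<Sysname> system-view
[Sysname] interface vlan-interface 101
[Sysname-Vlan-interface101] pim bsr-boundary
```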

 

Disabling BSM semantic fragmentation

BSM semantic fragmentation enables a BSR to split a BSM into several BSM fragments (BSMF) if the BSM exceeds the MTU. In this way, a non-BSR router can update the RP-set information for a group range after receiving all BSMFs for the group range. The loss of one BSMF only affects the RP-set information of the group ranges that the fragment contains.

BSM semantic fragmentation is enabled by default. A device that does not support this feature might regard a fragment as an entire BSM and thus learn only part of the RP-set information. If such devices exist in the PIM-SM domain, you must disable BSM semantic fragmentation on the C-BSRs.

To disable BSM semantic fragmentation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Disable BSM semantic fragmentation.

undo bsm-fragment enable

By default, BSM semantic fragmentation is enabled.

 

 

NOTE:

Generally, a BSR performs BSM semantic fragmentation according to the MTU of its BSR interface. For BSMs originated due to learning of a new PIM neighbor, semantic fragmentation is performed according to the MTU of the interface that sends the BSMs.
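 

A minimal sketch of disabling BSM semantic fragmentation for the public network instance:

```
# Disable BSM semantic fragmentation for compatibility with devices that do not support it.
<Sysname> system-view
[Sysname] pim
[Sysname-pim] undo bsm-fragment enable
```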

 

Configuring multicast source registration

A PIM register policy enables an RP to filter register messages by using an ACL that specifies the multicast sources and groups. The policy limits the multicast groups to which the RP is designated. If a register message is denied by the ACL or does not match the ACL, the RP discards the register message and sends a register-stop message to the source-side DR. The registration process stops.

You can configure the switch to calculate the checksum based on the entire register message to ensure information integrity of a register message in the transmission process. If a device that does not support this feature is present on the network, configure the switch to calculate the checksum based on the register message header.

To configure multicast source registration:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure a PIM register policy.

register-policy acl-number

By default, no PIM register policy exists.

4.       Configure the switch to calculate the checksum based on the entire register message.

register-whole-checksum

By default, the switch calculates the checksum based on the header of a register message.
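 

A minimal sketch of a register policy configuration on an RP (the ACL number 3000, source subnet, and group range are hypothetical; the sketch assumes an advanced ACL is used to match both sources and groups):

```
# Permit register messages only for sources on 10.110.5.0/24 sending to groups in 225.1.1.0/24.
<Sysname> system-view
[Sysname] acl number 3000
[Sysname-acl-adv-3000] rule permit ip source 10.110.5.0 0.0.0.255 destination 225.1.1.0 0.0.0.255
[Sysname-acl-adv-3000] quit
# Apply the register policy and use whole-message checksum calculation.
[Sysname] pim
[Sysname-pim] register-policy 3000
[Sysname-pim] register-whole-checksum
```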

 

Configuring the switchover to SPT

CAUTION:

If the switch is an RP, disabling the switchover to SPT might cause multicast traffic forwarding failures on the source-side DR. When disabling the switchover to SPT, make sure you fully understand its impact on your network.

 

To configure the switchover to SPT:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure the switchover to SPT.

spt-switch-threshold { immediacy | infinity } [ group-policy acl-number ]

By default, the switch immediately triggers the switchover to SPT after receiving the first multicast packet.
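 

For example, the following sketch disables the switchover to SPT so that multicast traffic stays on the RPT (see the caution above before using this setting):

```
# Keep multicast traffic on the RPT by never triggering the switchover to SPT.
<Sysname> system-view
[Sysname] pim
[Sysname-pim] spt-switch-threshold infinity
```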

 

Configuring PIM-SSM

PIM-SSM requires IGMPv3 support. Enable IGMPv3 on PIM routers that connect to multicast receivers.

PIM-SSM configuration task list

Tasks at a glance

(Required.) Enabling PIM-SM

(Optional.) Configuring the SSM group range

(Optional.) Configuring common PIM features

 

Configuration prerequisites

Before you configure PIM-SSM, configure a unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Enabling PIM-SM

The implementation of the SSM model is based on subsets of PIM-SM. Therefore, you must enable PIM-SM before configuring PIM-SSM.

When you deploy a PIM-SSM domain, enable PIM-SM on non-border interfaces of the routers.

 

IMPORTANT:

All the interfaces on a device must be enabled with the same PIM mode.

 

To enable PIM-SM:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable IP multicast routing, and enter MRIB view.

multicast routing [ vpn-instance vpn-instance-name ]

By default, IP multicast routing is disabled.

3.       Return to system view.

quit

N/A

4.       Enter interface view.

interface interface-type interface-number

N/A

5.       Enable PIM-SM.

pim sm

By default, PIM-SM is disabled.

 

Configuring the SSM group range

When a PIM-SM enabled interface receives a multicast packet, it checks whether the multicast group address of the packet is in the SSM group range. If the multicast group address is in this range, the PIM mode for this packet is PIM-SSM. If the multicast group address is not in this range, the PIM mode is PIM-SM.

Configuration guidelines

When you configure the PIM-SSM group range, follow these guidelines:

·          Configure the same SSM group range on all routers in the entire PIM-SSM domain. Otherwise, multicast information cannot be delivered through the SSM model.

·          When a member of a multicast group in the SSM group range sends an IGMPv1 or IGMPv2 report message, the switch does not trigger a (*, G) join.

Configuration procedure

To configure an SSM group range:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim

N/A

3.       Configure the SSM group range.

ssm-policy acl-number

The default range is 232.0.0.0/8.
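 

A minimal sketch of configuring a custom SSM group range (the ACL number 2000 and group range 232.1.1.0/24 are hypothetical values for illustration):

```
# Define the SSM group range in a basic ACL.
<Sysname> system-view
[Sysname] acl number 2000
[Sysname-acl-basic-2000] rule permit source 232.1.1.0 0.0.0.255
[Sysname-acl-basic-2000] quit
# Apply the ACL as the SSM group range.
[Sysname] pim
[Sysname-pim] ssm-policy 2000
```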

 

Configuring common PIM features

Configuration task list

Tasks at a glance

(Optional.) Configuring a multicast source policy

(Optional.) Configuring a PIM hello policy

(Optional.) Configuring PIM hello message options

(Optional.) Configuring common PIM timers

(Optional.) Setting the maximum size of each join or prune message

(Optional.) Enabling BFD for PIM

(Optional.) Enabling PIM passive mode

 

Configuration prerequisites

Before you configure common PIM features, complete the following tasks:

·          Configure a unicast routing protocol so that all devices in the domain are interoperable at the network layer.

·          Configure PIM-DM or PIM-SM.

Configuring a multicast source policy

This feature enables the switch to filter multicast data by using an ACL that specifies the multicast sources and the optional groups. It filters not only data packets but also register messages with multicast data encapsulated. It controls the information available to downstream receivers.

To configure a multicast source policy:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Configure a multicast source policy:

source-policy acl-number

By default, no multicast source policy exists.
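 

A minimal sketch of a multicast source policy (the ACL number 2001 and source address are hypothetical; a basic ACL matching only source addresses is assumed):

```
# Accept multicast data only from the source 10.110.5.100.
<Sysname> system-view
[Sysname] acl number 2001
[Sysname-acl-basic-2001] rule permit source 10.110.5.100 0
[Sysname-acl-basic-2001] quit
# Apply the multicast source policy.
[Sysname] pim
[Sysname-pim] source-policy 2001
```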

 

Configuring a PIM hello policy

This feature enables the switch to filter PIM hello messages by using an ACL that specifies the packet source addresses. It is used to guard against PIM message attacks and to establish correct PIM neighboring relationships.

To configure a PIM hello policy:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure a PIM hello policy.

pim neighbor-policy acl-number

By default, no PIM hello policy exists.

If a PIM neighbor's hello messages cannot pass the policy, the neighbor is automatically removed when its maximum number of hello attempts is reached.
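 

A minimal sketch of a PIM hello policy (the ACL number 2003, subnet, and interface number are hypothetical values for illustration):

```
# Accept hello messages only from neighbors on 192.168.1.0/24.
<Sysname> system-view
[Sysname] acl number 2003
[Sysname-acl-basic-2003] rule permit source 192.168.1.0 0.0.0.255
[Sysname-acl-basic-2003] quit
# Apply the hello policy on the interface.
[Sysname] interface vlan-interface 101
[Sysname-Vlan-interface101] pim neighbor-policy 2003
```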

 

Configuring PIM hello message options

In either a PIM-DM domain or a PIM-SM domain, hello messages exchanged among routers contain the following configurable options:

·          DR_Priority (for PIM-SM only)—Priority for DR election. The device with the highest priority wins the DR election. You can configure this option for all the routers in a shared-media LAN that directly connects to the multicast source or the receivers.

·          Holdtime—PIM neighbor lifetime. If a router does not receive a hello message from a neighbor before the neighbor lifetime timer expires, it regards the neighbor as failed or unreachable.

·          LAN_Prune_Delay—Delay of pruning a downstream interface on a shared-media LAN. This option consists of the LAN delay, the override interval, and neighbor tracking support (namely, the capability to disable join message suppression).

The LAN delay defines the PIM message propagation delay. The override interval defines the period during which a router can override a prune message. If the propagation delay or override interval differs among PIM routers on a shared-media LAN, the largest value applies.

On a shared-media LAN, the propagation delay and override interval are used as follows:

?  If a router receives a prune message on its upstream interface, it means that there are downstream routers on the shared-media LAN. If this router still needs to receive multicast data, it must send a join message to override the prune message within the override interval.

?  When a router receives a prune message on its downstream interface, it does not immediately prune this interface. Instead, it starts a timer (the propagation delay plus the override interval). If the interface receives a join message before the timer expires, the router does not prune the interface. Otherwise, the router prunes the interface.

You can enable neighbor tracking on an upstream router to track the states of the downstream nodes for which the joined state holdtime timer has not expired. If you want to enable neighbor tracking, you must enable it on all PIM routers on a shared-media LAN. Otherwise, the upstream router cannot track join messages from every downstream router.

·          Generation ID—A router generates a generation ID for its hello messages when PIM is enabled on an interface. The generation ID is a random value that changes only when the status of the router changes. If a PIM router finds that the generation ID in a hello message from the upstream router has changed, it assumes that the status of the upstream router has changed. In this case, it sends a join message to the upstream router for status update. You can configure an interface to drop hello messages without the generation ID option so that the router can promptly detect status changes of the upstream router.

You can configure hello message options for all interfaces in PIM view or for the current interface in interface view. The configuration made in interface view takes priority over the configuration made in PIM view.

Configuring hello message options globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Set the DR priority.

hello-option dr-priority priority

The default setting is 1.

4.       Set the neighbor lifetime.

hello-option holdtime time

The default setting is 105 seconds.

5.       Set the PIM message propagation delay.

hello-option lan-delay delay

The default setting is 500 milliseconds.

6.       Set the override interval.

hello-option override-interval interval

The default setting is 2500 milliseconds.

7.       Enable neighbor tracking.

hello-option neighbor-tracking

By default, neighbor tracking is disabled.

 

Configuring hello message options on an interface

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Set the DR priority.

pim hello-option dr-priority priority

The default setting is 1.

4.       Set the neighbor lifetime.

pim hello-option holdtime time

The default setting is 105 seconds.

5.       Set the PIM message propagation delay.

pim hello-option lan-delay delay

The default setting is 500 milliseconds.

6.       Set the override interval.

pim hello-option override-interval interval

The default setting is 2500 milliseconds.

7.       Enable neighbor tracking.

pim hello-option neighbor-tracking

By default, neighbor tracking is disabled.

8.       Enable dropping hello messages without the Generation ID option.

pim require-genid

By default, an interface accepts hello messages without the Generation ID option.
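 

A minimal sketch of setting hello message options on an interface (the interface number and DR priority value are hypothetical values for illustration):

```
# Raise the DR priority so this interface wins DR election, and require the Generation ID option.
<Sysname> system-view
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim hello-option dr-priority 10
[Sysname-Vlan-interface100] pim require-genid
```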

 

Configuring common PIM timers

CAUTION:

To prevent the upstream neighbors from aging out, you must set the interval for sending join/prune messages to be less than the joined/pruned state holdtime timer.

 

The following are common timers in PIM:

·          Hello interval—Interval at which a PIM router sends hello messages to discover PIM neighbors and maintain PIM neighbor relationships.

·          Triggered hello delay—Maximum delay for sending a hello message to avoid collisions caused by simultaneous hello messages. After receiving a hello message, a PIM router waits for a random time before sending a hello message. This random time is in the range of 0 to the triggered hello delay.

·          Join/Prune interval—Interval at which a PIM router sends join/prune messages to its upstream routers for state update.

·          Joined/Pruned state holdtime—Time for which a PIM router keeps the joined/pruned state for the downstream interfaces. This joined/pruned state holdtime is specified in a join/prune message.

·          Multicast source lifetime—Lifetime that a PIM router maintains for a multicast source. If the router does not receive subsequent multicast data from the multicast source S before the timer expires, it deletes the (S, G) entry for the multicast source.

You can configure common PIM timers for all interfaces in PIM view or for the current interface in interface view. The configuration made in interface view takes priority over the configuration made in PIM view.

 

TIP:

As a best practice, use the default settings for a network without special requirements.

 

Configuring common PIM timers globally

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Set the hello interval.

timer hello interval

The default setting is 30 seconds.

4.       Set the join/prune interval.

timer join-prune interval

The default setting is 60 seconds.

NOTE:

This configuration takes effect after the current interval ends.

5.       Set the joined/pruned state holdtime.

holdtime join-prune time

The default setting is 210 seconds.

6.       Set the multicast source lifetime.

source-lifetime time

The default setting is 210 seconds.
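 

A minimal sketch of setting common PIM timers globally (the timer values are hypothetical; note that the join/prune interval is kept smaller than the joined/pruned state holdtime, as required by the caution above):

```
# Adjust the hello, join/prune, and joined/pruned state holdtime timers.
<Sysname> system-view
[Sysname] pim
[Sysname-pim] timer hello 60
[Sysname-pim] timer join-prune 80
[Sysname-pim] holdtime join-prune 280
```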

 

Configuring common PIM timers on an interface

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Set the hello interval.

pim timer hello interval

The default setting is 30 seconds.

4.       Set the triggered hello delay.

pim triggered-hello-delay delay

The default setting is 5 seconds.

5.       Set the join/prune interval.

pim timer join-prune interval

The default setting is 60 seconds.

NOTE:

This configuration takes effect after the current interval ends.

6.       Set the joined/pruned state holdtime.

pim holdtime join-prune time

The default setting is 210 seconds.

 

Setting the maximum size of each join or prune message

The loss of an oversized join or prune message might result in the loss of a large amount of state information. You can set a smaller maximum size for join and prune messages to reduce the impact.

To set the maximum size of each join or prune message:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter PIM view.

pim [ vpn-instance vpn-instance-name ]

N/A

3.       Set the maximum size of each join or prune message.

jp-pkt-size size

The default setting is 8100 bytes.
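 

A minimal sketch of reducing the maximum join or prune message size (the value 4096 is a hypothetical example):

```
# Limit each join or prune message to 4096 bytes.
<Sysname> system-view
[Sysname] pim
[Sysname-pim] jp-pkt-size 4096
```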

 

Enabling BFD for PIM

If the DR on a shared-media network fails, a new DR election process will start after the DR ages out. However, it might take a long period of time before other routers detect the link failures and trigger a new DR election. To start a new DR election process immediately after the original DR fails, enable BFD for PIM on a shared-media network to detect link failures among PIM neighbors.

You must enable BFD for PIM on all PIM-capable routers on a shared-media network. For more information about BFD, see High Availability Configuration Guide.

You must enable PIM-DM or PIM-SM on an interface before you configure this feature.

To enable BFD for PIM:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Enable BFD for PIM.

pim bfd enable

By default, BFD is disabled for PIM.
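 

A minimal sketch of enabling BFD for PIM on an interface (the interface number is hypothetical; PIM-SM must already be enabled on the interface, as stated above):

```
# Enable PIM-SM and then BFD for PIM on the interface.
<Sysname> system-view
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim sm
[Sysname-Vlan-interface100] pim bfd enable
```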

 

Enabling PIM passive mode

To guard against PIM hello spoofing, you can enable PIM passive mode on an interface that directly connects to user hosts. A PIM passive interface cannot receive or forward PIM protocol messages (excluding register, register-stop, and C-RP-Adv messages), and it acts as the DR on the subnet. In BIDIR-PIM, it also acts as the DF.

Configuration guidelines

When you enable PIM passive mode, follow these restrictions and guidelines:

·          This feature takes effect only when PIM-DM or PIM-SM is enabled on the interface.

·          Do not enable this feature on a shared-media LAN that has multiple PIM routers. Otherwise, the PIM passive interface might become a second DR and DF on the subnet, which causes duplicate multicast traffic and forwarding loops.

Configuration procedure

To enable PIM passive mode on an interface:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Enable PIM passive mode on the interface.

pim passive

By default, PIM passive mode is disabled.
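 

A minimal sketch of enabling PIM passive mode on a host-facing interface (the interface number is hypothetical; PIM-SM must be enabled on the interface for the feature to take effect, as noted above):

```
# Enable PIM-SM and then PIM passive mode on the host-facing interface.
<Sysname> system-view
[Sysname] interface vlan-interface 200
[Sysname-Vlan-interface200] pim sm
[Sysname-Vlan-interface200] pim passive
```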

 

Displaying and maintaining PIM

Execute display commands in any view.

 

Task

Command

Display register-tunnel interface information.

display interface [ register-tunnel [ interface-number ] ] [ brief [ description | down ] ]

Display BSR information in the PIM-SM domain.

display pim [ vpn-instance vpn-instance-name ] bsr-info

Display information about the routes used by PIM.

display pim [ vpn-instance vpn-instance-name ] claimed-route [ source-address ]

Display C-RP information in the PIM-SM domain.

display pim [ vpn-instance vpn-instance-name ] c-rp [ local ]

Display PIM information on an interface.

display pim [ vpn-instance vpn-instance-name ] interface [ interface-type interface-number ] [ verbose ]

Display PIM neighbor information.

display pim [ vpn-instance vpn-instance-name ] neighbor [ neighbor-address | interface interface-type interface-number | verbose ] *

Display information about PIM routing entries.

display pim [ vpn-instance vpn-instance-name ] routing-table [ group-address [ mask { mask-length | mask } ] | source-address [ mask { mask-length | mask } ] | flags flag-value | fsm | incoming-interface interface-type interface-number | mode mode-type | outgoing-interface { exclude | include | match } interface-type interface-number ] *

Display RP information in the PIM-SM domain.

display pim [ vpn-instance vpn-instance-name ] rp-info [ group-address ]

Display statistics for PIM packets.

display pim statistics

 

PIM configuration examples

PIM-DM configuration example

Network requirements

As shown in Figure 32:

·          VOD streams are sent to receiver hosts in multicast. The receiver groups of different organizations form stub networks, and one or more receiver hosts exist in each stub network.

·          The entire PIM domain operates in the dense mode.

·          Host A and Host C are multicast receivers in two stub networks.

·          IGMPv2 runs between Switch A and N1 and between Switch B, Switch C, and N2.

Figure 32 Network diagram

 

Table 6 Interface and IP address assignment

Device

Interface

IP address

Device

Interface

IP address

Switch A

Vlan-int100

10.110.1.1/24

Switch C

Vlan-int102

192.168.3.1/24

Switch A

Vlan-int103

192.168.1.1/24

Switch D

Vlan-int300

10.110.5.1/24

Switch B

Vlan-int200

10.110.2.1/24

Switch D

Vlan-int103

192.168.1.2/24

Switch B

Vlan-int101

192.168.2.1/24

Switch D

Vlan-int101

192.168.2.2/24

Switch C

Vlan-int200

10.110.2.2/24

Switch D

Vlan-int102

192.168.3.2/24

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 32. (Details not shown.)

2.        Configure OSPF on the switches in the PIM-DM domain to make sure the following conditions are met: (Details not shown.)

?  The switches are interoperable at the network layer.

?  The switches can dynamically update their routing information.

3.        Enable IP multicast routing, IGMP, and PIM-DM:

# On Switch A, enable IP multicast routing.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable IGMP on VLAN-interface 100 (the interface that connects to the stub network).

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] quit

# Enable PIM-DM on VLAN-interface 103.

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim dm

[SwitchA-Vlan-interface103] quit

# Enable IP multicast routing, IGMP, and PIM-DM on Switch B and Switch C in the same way Switch A is configured. (Details not shown.)

# On Switch D, enable IP multicast routing, and enable PIM-DM on each interface.

<SwitchD> system-view

[SwitchD] multicast routing

[SwitchD-mrib] quit

[SwitchD] interface vlan-interface 300

[SwitchD-Vlan-interface300] pim dm

[SwitchD-Vlan-interface300] quit

[SwitchD] interface vlan-interface 103

[SwitchD-Vlan-interface103] pim dm

[SwitchD-Vlan-interface103] quit

[SwitchD] interface vlan-interface 101

[SwitchD-Vlan-interface101] pim dm

[SwitchD-Vlan-interface101] quit

[SwitchD] interface vlan-interface 102

[SwitchD-Vlan-interface102] pim dm

[SwitchD-Vlan-interface102] quit

Verifying the configuration

# Display PIM information on Switch D.

[SwitchD] display pim interface

 Interface           NbrCnt HelloInt   DR-Pri     DR-Address

 Vlan300             0      30         1          10.110.5.1     (local)

 Vlan103             1      30         1          192.168.1.2    (local)

 Vlan101             1      30         1          192.168.2.2    (local)

 Vlan102             1      30         1          192.168.3.2    (local)

# Display PIM neighboring relationships on Switch D.

[SwitchD] display pim neighbor

 Total Number of Neighbors = 3

 

 Neighbor        Interface           Uptime   Expires  Dr-Priority

 192.168.1.1     Vlan103             00:02:22 00:01:27 1

 192.168.2.1     Vlan101             00:00:22 00:01:29 3

 192.168.3.1     Vlan102             00:00:23 00:01:31 5

# Send an IGMP report from Host A to join the multicast group 225.1.1.1. (Details not shown.)

# Send multicast data from the multicast source 10.110.5.100/24 to the multicast group 225.1.1.1. (Details not shown.)

# Display the PIM routing table information on Switch A.

[SwitchA] display pim routing-table

 Total 1 (*, G) entry; 1 (S, G) entry

 

 (*, 225.1.1.1)

     Protocol: pim-dm, Flag: WC

     UpTime: 00:04:25

     Upstream interface: NULL

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: igmp, UpTime: 00:04:25, Expires: -

 

 (10.110.5.100, 225.1.1.1)

     Protocol: pim-dm, Flag: ACT

     UpTime: 00:06:14

     Upstream interface: Vlan-interface103

         Upstream neighbor: 192.168.1.2

         RPF prime neighbor: 192.168.1.2

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: pim-dm, UpTime: 00:04:25, Expires: -

# Display the PIM routing table information on Switch D.

[SwitchD] display pim routing-table

 Total 0 (*, G) entry; 1 (S, G) entry

 

 (10.110.5.100, 225.1.1.1)

     Protocol: pim-dm, Flag: LOC ACT

     UpTime: 00:03:27

     Upstream interface: Vlan-interface300

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 2

         1: Vlan-interface103

             Protocol: pim-dm, UpTime: 00:03:27, Expires: -

         2: Vlan-interface102

             Protocol: pim-dm, UpTime: 00:03:27, Expires: -

The output shows the following information:

·          Switches on the SPT path (Switch A and Switch D) have the correct (S, G) entries.

·          Switch A has the correct (*, G) entry.

PIM-SM non-scoped zone configuration example

Network requirements

As shown in Figure 33:

·          VOD streams are sent to receiver hosts in multicast. The receivers of different subnets form stub networks, and a minimum of one receiver host exists in each stub network. The entire PIM-SM domain contains only one BSR.

·          Host A and Host C are multicast receivers in two stub networks N1 and N2.

·          Both VLAN-interface 105 on Switch D and VLAN-interface 102 on Switch E act as C-BSRs and C-RPs. The C-BSR on Switch E has a higher priority. The C-RPs are designated to the multicast group range 225.1.1.0/24. Modify the hash mask length to map the multicast group range to the two C-RPs.

·          IGMPv2 runs between Switch A and N1, and between Switch B, Switch C, and N2.

Figure 33 Network diagram

 

Table 7 Interface and IP address assignment

Device

Interface

IP address

Device

Interface

IP address

Switch A

Vlan-int100

10.110.1.1/24

Switch D

Vlan-int300

10.110.5.1/24

Switch A

Vlan-int101

192.168.1.1/24

Switch D

Vlan-int101

192.168.1.2/24

Switch A

Vlan-int102

192.168.9.1/24

Switch D

Vlan-int105

192.168.4.2/24

Switch B

Vlan-int200

10.110.2.1/24

Switch E

Vlan-int104

192.168.3.2/24

Switch B

Vlan-int103

192.168.2.1/24

Switch E

Vlan-int103

192.168.2.2/24

Switch C

Vlan-int200

10.110.2.2/24

Switch E

Vlan-int102

192.168.9.2/24

Switch C

Vlan-int104

192.168.3.1/24

Switch E

Vlan-int105

192.168.4.1/24

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 33. (Details not shown.)

2.        Enable OSPF on all switches on the PIM-SM network to make sure the following conditions are met: (Details not shown.)

?  The switches are interoperable at the network layer.

?  The switches can dynamically update their routing information.

3.        Enable IP multicast routing, and enable IGMP and PIM-SM:

# On Switch A, enable IP multicast routing globally.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable IGMP on VLAN-interface 100 (the interface that connects to the stub network).

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] quit

# Enable PIM-SM on the other interfaces.

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim sm

[SwitchA-Vlan-interface101] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim sm

[SwitchA-Vlan-interface102] quit

# Enable IP multicast routing, IGMP, and PIM-SM on Switch B and Switch C in the same way Switch A is configured. (Details not shown.)

# Enable IP multicast routing and PIM-SM on Switch D and Switch E in the same way Switch A is configured. (Details not shown.)

4.        Configure C-BSRs and C-RPs:

# On Switch D, configure the service scope of RP advertisements.

<SwitchD> system-view

[SwitchD] acl number 2005

[SwitchD-acl-basic-2005] rule permit source 225.1.1.0 0.0.0.255

[SwitchD-acl-basic-2005] quit

# Configure VLAN-interface 105 as a C-BSR and a C-RP, and set the hash mask length to 32 and the priority of the C-BSR to 10.

[SwitchD] pim

[SwitchD-pim] c-bsr 192.168.4.2 hash-length 32 priority 10

[SwitchD-pim] c-rp 192.168.4.2 group-policy 2005

[SwitchD-pim] quit

# On Switch E, configure the service scope of RP advertisements.

<SwitchE> system-view

[SwitchE] acl number 2005

[SwitchE-acl-basic-2005] rule permit source 225.1.1.0 0.0.0.255

[SwitchE-acl-basic-2005] quit

# Configure VLAN-interface 102 as a C-BSR and a C-RP, and set the hash mask length to 32 and the priority of the C-BSR to 20.

[SwitchE] pim

[SwitchE-pim] c-bsr 192.168.9.2 hash-length 32 priority 20

[SwitchE-pim] c-rp 192.168.9.2 group-policy 2005

[SwitchE-pim] quit

Verifying the configuration

# Display PIM information on Switch A.

[SwitchA] display pim interface

 Interface           NbrCnt HelloInt   DR-Pri     DR-Address

 Vlan100             0      30         1          10.110.1.1     (local)

 Vlan101             1      30         1          192.168.1.2

 Vlan102             1      30         1          192.168.9.2

# Display BSR information on Switch A.

[SwitchA] display pim bsr-info

 Scope: non-scoped

     State: Accept Preferred

     Bootstrap timer: 00:01:44

     Elected BSR address: 192.168.9.2

       Priority: 20

       Hash mask length: 32

       Uptime: 00:40:40

# Display BSR information on Switch D.

[SwitchD] display pim bsr-info

 Scope: non-scoped

     State: Candidate

     Bootstrap timer: 00:01:44

     Elected BSR address: 192.168.9.2

       Priority: 20

       Hash mask length: 32

       Uptime: 00:05:26

     Candidate BSR address: 192.168.4.2

       Priority: 10

       Hash mask length: 32

# Display BSR information on Switch E.

[SwitchE] display pim bsr-info

 Scope: non-scoped

     State: Elected

     Bootstrap timer: 00:01:44

     Elected BSR address: 192.168.9.2

       Priority: 20

       Hash mask length: 32

       Uptime: 00:01:18

     Candidate BSR address: 192.168.9.2

       Priority: 20

       Hash mask length: 32

# Display RP information on Switch A.

[SwitchA] display pim rp-info

 BSR RP information:

   Scope: non-scoped

     Group/MaskLen: 225.1.1.0/24

       RP address               Priority  HoldTime  Uptime    Expires

       192.168.4.2              192       150       00:51:45  00:02:22

       192.168.9.2              192       150       00:51:45  00:02:22
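Both C-RPs serve the group range 225.1.1.0/24 with the same priority (192), so the BSR hash function, seeded by the advertised hash mask length, determines which RP serves a given group. As a hedged check (the optional group-address argument is assumed from the display pim rp-info command syntax), you can verify which RP a specific group maps to:

```
# Display the RP that the hash function selects for group 225.1.1.1.
[SwitchA] display pim rp-info 225.1.1.1
```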

PIM-SM admin-scoped zone configuration example

Network requirements

As shown in Figure 34:

·          VOD streams are sent to receiver hosts in multicast. The entire PIM-SM domain is divided into admin-scoped zone 1, admin-scoped zone 2, and the global-scoped zone. Switch B, Switch C, and Switch D are ZBRs of the three zones, respectively.

·          Source 1 and Source 2 send different multicast data to the multicast group 239.1.1.1. Host A receives the multicast data only from Source 1, and Host B receives the multicast data only from Source 2. Source 3 sends multicast data to the multicast group 224.1.1.1. Host C is a multicast receiver for the multicast group 224.1.1.1.

·          VLAN-interface 101 of Switch B acts as a C-BSR and a C-RP for admin-scoped zone 1. VLAN-interface 105 of Switch D acts as a C-BSR and a C-RP for admin-scoped zone 2. Both interfaces are designated to the multicast group range 239.0.0.0/8. VLAN-interface 109 of Switch F acts as a C-BSR and a C-RP for the global-scoped zone, and is designated to all the multicast groups that are not in the range 239.0.0.0/8.

·          IGMPv2 runs between Switch A, Switch E, Switch I, and the receivers that directly connect to them, respectively.

Figure 34 Network diagram

 

Table 8 Interface and IP address assignment

Device   | Interface   | IP address      | Device   | Interface   | IP address
Switch A | Vlan-int100 | 192.168.1.1/24  | Switch D | Vlan-int105 | 10.110.5.2/24
Switch A | Vlan-int101 | 10.110.1.1/24   | Switch D | Vlan-int108 | 10.110.7.1/24
Switch B | Vlan-int200 | 192.168.2.1/24  | Switch D | Vlan-int107 | 10.110.8.1/24
Switch B | Vlan-int101 | 10.110.1.2/24   | Switch E | Vlan-int400 | 192.168.4.1/24
Switch B | Vlan-int103 | 10.110.2.1/24   | Switch E | Vlan-int104 | 10.110.4.2/24
Switch B | Vlan-int102 | 10.110.3.1/24   | Switch E | Vlan-int108 | 10.110.7.2/24
Switch C | Vlan-int300 | 192.168.3.1/24  | Switch F | Vlan-int109 | 10.110.9.1/24
Switch C | Vlan-int104 | 10.110.4.1/24   | Switch F | Vlan-int107 | 10.110.8.2/24
Switch C | Vlan-int105 | 10.110.5.1/24   | Switch F | Vlan-int102 | 10.110.3.2/24
Switch C | Vlan-int103 | 10.110.2.2/24   | Switch G | Vlan-int500 | 192.168.5.1/24
Switch C | Vlan-int106 | 10.110.6.1/24   | Switch G | Vlan-int109 | 10.110.9.2/24
Switch H | Vlan-int110 | 10.110.10.1/24  | Source 1 | N/A         | 192.168.2.10/24
Switch H | Vlan-int106 | 10.110.6.2/24   | Source 2 | N/A         | 192.168.3.10/24
Switch I | Vlan-int600 | 192.168.6.1/24  | Source 3 | N/A         | 192.168.5.10/24
Switch I | Vlan-int110 | 10.110.10.2/24  |          |             |

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 34. (Details not shown.)

2.        Configure OSPF on all switches on the PIM-SM network to make sure the following conditions are met: (Details not shown.)

○  The switches are interoperable at the network layer.

○  The switches can dynamically update their routing information.

3.        Enable IP multicast routing, and enable IGMP and PIM-SM:

# On Switch A, enable IP multicast routing.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable IGMP on VLAN-interface 100 (the interface that connects to the receiver host).

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] quit

# Enable PIM-SM on VLAN-interface 101.

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim sm

[SwitchA-Vlan-interface101] quit

# Enable IP multicast routing, IGMP, and PIM-SM on Switch E and Switch I in the same way Switch A is configured. (Details not shown.)

# On Switch B, enable IP multicast routing, and enable PIM-SM on each interface.

<SwitchB> system-view

[SwitchB] multicast routing

[SwitchB-mrib] quit

[SwitchB] interface vlan-interface 200

[SwitchB-Vlan-interface200] pim sm

[SwitchB-Vlan-interface200] quit

[SwitchB] interface vlan-interface 101

[SwitchB-Vlan-interface101] pim sm

[SwitchB-Vlan-interface101] quit

[SwitchB] interface vlan-interface 102

[SwitchB-Vlan-interface102] pim sm

[SwitchB-Vlan-interface102] quit

[SwitchB] interface vlan-interface 103

[SwitchB-Vlan-interface103] pim sm

[SwitchB-Vlan-interface103] quit

# Enable IP multicast routing and PIM-SM on Switch C, Switch D, Switch F, Switch G, and Switch H in the same way Switch B is configured. (Details not shown.)

4.        Configure admin-scoped zone boundaries:

# On Switch B, configure VLAN-interface 102 and VLAN-interface 103 as the boundaries of admin-scoped zone 1.

[SwitchB] interface vlan-interface 102

[SwitchB-Vlan-interface102] multicast boundary 239.0.0.0 8

[SwitchB-Vlan-interface102] quit

[SwitchB] interface vlan-interface 103

[SwitchB-Vlan-interface103] multicast boundary 239.0.0.0 8

[SwitchB-Vlan-interface103] quit

# On Switch C, configure VLAN-interface 103 and VLAN-interface 106 as the boundaries of admin-scoped zone 2.

<SwitchC> system-view

[SwitchC] interface vlan-interface 103

[SwitchC-Vlan-interface103] multicast boundary 239.0.0.0 8

[SwitchC-Vlan-interface103] quit

[SwitchC] interface vlan-interface 106

[SwitchC-Vlan-interface106] multicast boundary 239.0.0.0 8

[SwitchC-Vlan-interface106] quit

# On Switch D, configure VLAN-interface 107 as the boundary of admin-scoped zone 2.

<SwitchD> system-view

[SwitchD] interface vlan-interface 107

[SwitchD-Vlan-interface107] multicast boundary 239.0.0.0 8

[SwitchD-Vlan-interface107] quit

5.        Configure C-BSRs and C-RPs:

# On Switch B, configure the service scope of RP advertisements.

[SwitchB] acl number 2001

[SwitchB-acl-basic-2001] rule permit source 239.0.0.0 0.255.255.255

[SwitchB-acl-basic-2001] quit

# Configure VLAN-interface 101 as a C-BSR and a C-RP for admin-scoped zone 1.

[SwitchB] pim

[SwitchB-pim] c-bsr 10.110.1.2 scope 239.0.0.0 8

[SwitchB-pim] c-rp 10.110.1.2 group-policy 2001

[SwitchB-pim] quit

# On Switch D, configure the service scope of RP advertisements.

[SwitchD] acl number 2001

[SwitchD-acl-basic-2001] rule permit source 239.0.0.0 0.255.255.255

[SwitchD-acl-basic-2001] quit

# Configure VLAN-interface 105 as a C-BSR and a C-RP for admin-scoped zone 2.

[SwitchD] pim

[SwitchD-pim] c-bsr 10.110.5.2 scope 239.0.0.0 8

[SwitchD-pim] c-rp 10.110.5.2 group-policy 2001

[SwitchD-pim] quit

# On Switch F, configure VLAN-interface 109 as a C-BSR and a C-RP for the global-scoped zone.

<SwitchF> system-view

[SwitchF] pim

[SwitchF-pim] c-bsr 10.110.9.1

[SwitchF-pim] c-rp 10.110.9.1

[SwitchF-pim] quit

Verifying the configuration

# Display BSR information on Switch B.

[SwitchB] display pim bsr-info

 Scope: non-scoped

     State: Accept Preferred

     Bootstrap timer: 00:01:44

     Elected BSR address: 10.110.9.1

       Priority: 64

       Hash mask length: 30

       Uptime: 00:01:45

 

 Scope: 239.0.0.0/8

     State: Elected

     Bootstrap timer: 00:00:06

     Elected BSR address: 10.110.1.2

       Priority: 64

       Hash mask length: 30

       Uptime: 00:04:54

     Candidate BSR address: 10.110.1.2

       Priority: 64

       Hash mask length: 30

# Display BSR information on Switch D.

[SwitchD] display pim bsr-info

 Scope: non-scoped

     State: Accept Preferred

     Bootstrap timer: 00:01:44

     Elected BSR address: 10.110.9.1

       Priority: 64

       Hash mask length: 30

       Uptime: 00:01:45

 

 Scope: 239.0.0.0/8

     State: Elected

     Bootstrap timer: 00:01:12

     Elected BSR address: 10.110.5.2

       Priority: 64

       Hash mask length: 30

       Uptime: 00:03:48

     Candidate BSR address: 10.110.5.2

       Priority: 64

       Hash mask length: 30

# Display BSR information on Switch F.

[SwitchF] display pim bsr-info

 Scope: non-scoped

     State: Elected

     Bootstrap timer: 00:00:49

     Elected BSR address: 10.110.9.1

       Priority: 64

       Hash mask length: 30

       Uptime: 00:11:11

     Candidate BSR address: 10.110.9.1

       Priority: 64

       Hash mask length: 30

# Display RP information on Switch B.

[SwitchB] display pim rp-info

 BSR RP information:

   Scope: non-scoped

     Group/MaskLen: 224.0.0.0/4

       RP address               Priority  HoldTime  Uptime    Expires

       10.110.9.1               192       150       00:03:39  00:01:51

   Scope: 239.0.0.0/8

     Group/MaskLen: 239.0.0.0/8

       RP address               Priority  HoldTime  Uptime    Expires

       10.110.1.2 (local)       192       150       00:07:44  00:01:51

# Display RP information on Switch D.

[SwitchD] display pim rp-info

 BSR RP information:

   Scope: non-scoped

     Group/MaskLen: 224.0.0.0/4

       RP address               Priority  HoldTime  Uptime    Expires

       10.110.9.1               192       150       00:03:42  00:01:48

   Scope: 239.0.0.0/8

     Group/MaskLen: 239.0.0.0/8

       RP address               Priority  HoldTime  Uptime    Expires

       10.110.5.2 (local)       192       150       00:06:54  00:02:41

# Display RP information on Switch F.

[SwitchF] display pim rp-info

 BSR RP information:

   Scope: non-scoped

     Group/MaskLen: 224.0.0.0/4

       RP address               Priority  HoldTime  Uptime    Expires

       10.110.9.1 (local)       192       150       00:00:32  00:01:58

PIM-SSM configuration example

Network requirements

As shown in Figure 35:

·          The receivers receive VOD information through multicast. The receiver groups of different organizations form stub networks, and one or more receiver hosts exist in each stub network. The entire PIM domain operates in the SSM mode.

·          Host A and Host C are multicast receivers in two stub networks.

·          The SSM group range is 232.1.1.0/24.

·          IGMPv3 runs between Switch A and N1 and between Switch B, Switch C, and N2.

Figure 35 Network diagram

 

Table 9 Interface and IP address assignment

Device   | Interface   | IP address      | Device   | Interface   | IP address
Switch A | Vlan-int100 | 10.110.1.1/24   | Switch D | Vlan-int300 | 10.110.5.1/24
Switch A | Vlan-int101 | 192.168.1.1/24  | Switch D | Vlan-int101 | 192.168.1.2/24
Switch A | Vlan-int102 | 192.168.9.1/24  | Switch D | Vlan-int105 | 192.168.4.2/24
Switch B | Vlan-int200 | 10.110.2.1/24   | Switch E | Vlan-int104 | 192.168.3.2/24
Switch B | Vlan-int103 | 192.168.2.1/24  | Switch E | Vlan-int103 | 192.168.2.2/24
Switch C | Vlan-int200 | 10.110.2.2/24   | Switch E | Vlan-int102 | 192.168.9.2/24
Switch C | Vlan-int104 | 192.168.3.1/24  | Switch E | Vlan-int105 | 192.168.4.1/24

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 35. (Details not shown.)

2.        Configure OSPF on the switches in the PIM-SSM domain to make sure the following conditions are met: (Details not shown.)

○  The switches are interoperable at the network layer.

○  The switches can dynamically update their routing information.

3.        Enable IP multicast routing, IGMP, and PIM-SM:

# On Switch A, enable IP multicast routing.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable IGMPv3 on VLAN-interface 100 (the interface that connects to the stub network).

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] igmp version 3

[SwitchA-Vlan-interface100] quit

# Enable PIM-SM on the other interfaces.

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim sm

[SwitchA-Vlan-interface101] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim sm

[SwitchA-Vlan-interface102] quit

# Enable IP multicast routing, IGMP, and PIM-SM on Switch B and Switch C in the same way Switch A is configured. (Details not shown.)

# Enable IP multicast routing and PIM-SM on Switch D and Switch E in the same way Switch A is configured. (Details not shown.)

4.        Configure the SSM group range:

# Configure the SSM group range to be 232.1.1.0/24 on Switch A.

[SwitchA] acl number 2000

[SwitchA-acl-basic-2000] rule permit source 232.1.1.0 0.0.0.255

[SwitchA-acl-basic-2000] quit

[SwitchA] pim

[SwitchA-pim] ssm-policy 2000

[SwitchA-pim] quit

# Configure the SSM group range on Switch B, Switch C, Switch D, and Switch E in the same way Switch A is configured. (Details not shown.)

Verifying the configuration

# Display PIM information on Switch A.

[SwitchA] display pim interface

 Interface           NbrCnt HelloInt   DR-Pri     DR-Address

 Vlan100             0      30         1          10.110.1.1     (local)

 Vlan101             1      30         1          192.168.1.2

 Vlan102             1      30         1          192.168.9.2

# Send an IGMPv3 report from Host A to join the multicast source and group (10.110.5.100, 232.1.1.1). (Details not shown.)

# Display PIM routing table information on Switch A.

[SwitchA] display pim routing-table

 Total 0 (*, G) entry; 1 (S, G) entry

 

 (10.110.5.100, 232.1.1.1)

     Protocol: pim-ssm, Flag:

     UpTime: 00:13:25

     Upstream interface: Vlan-interface101

         Upstream neighbor: 192.168.1.2

         RPF prime neighbor: 192.168.1.2

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: igmp, UpTime: 00:13:25, Expires: 00:03:25

# Display PIM routing table information on Switch D.

[SwitchD] display pim routing-table

 Total 0 (*, G) entry; 1 (S, G) entry

 

 (10.110.5.100, 232.1.1.1)

     Protocol: pim-ssm, Flag: LOC

     UpTime: 00:12:05

     Upstream interface: Vlan-interface300

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface105

             Protocol: pim-ssm, UpTime: 00:12:05, Expires: 00:03:25

The output shows that switches on the SPT path (Switch A and Switch D) have generated the correct (S, G) entries.

Troubleshooting PIM

A multicast distribution tree cannot be built correctly

Symptom

No multicast forwarding entries are established on the routers (including routers directly connected with multicast sources or receivers) in a PIM network. A multicast distribution tree cannot be built correctly.

Solution

To resolve the problem:

1.        Use display ip routing-table to verify that a unicast route to the multicast source or the RP is available.

2.        Use display pim interface to verify PIM information on each interface, especially on the RPF interface. If PIM is not enabled on the interfaces, use pim dm or pim sm to enable PIM-DM or PIM-SM for the interfaces.

3.        Use display pim neighbor to verify that the RPF neighbor is a PIM neighbor.

4.        Verify that PIM and IGMP are enabled on the interfaces that directly connect to the multicast sources or the receivers.

5.        Use display pim interface verbose to verify that the same PIM mode is enabled on the RPF interface on a router and the connected interface of the router's RPF neighbor.

6.        Use display current-configuration to verify that the same PIM mode is enabled on all routers. For PIM-SM, verify that the BSR and C-RPs are correctly configured.

7.        If the problem persists, contact H3C Support.

Multicast data is abnormally terminated on an intermediate router

Symptom

An intermediate router can receive multicast data successfully, but the data cannot reach the last-hop router. An interface on the intermediate router receives multicast data but does not create an (S, G) entry in the PIM routing table.

Solution

To resolve the problem:

1.        Use display current-configuration to verify the multicast forwarding boundary settings. Use multicast boundary to change the multicast forwarding boundary settings so that the multicast packets can cross the boundary.

2.        Use display current-configuration to verify the multicast source policy. Change the ACL rule defined in the source-policy command so that the source/group address of the multicast data can pass ACL filtering.

3.        If the problem persists, contact H3C Support.

An RP cannot join an SPT in PIM-SM

Symptom

An RPT cannot be correctly built, or an RP cannot join the SPT toward the multicast source.

Solution

To resolve the problem:

1.        Use display ip routing-table to verify that a unicast route to the RP is available on each router.

2.        Use display pim rp-info to verify that the dynamic RP information is consistent on all routers.

3.        Use display pim rp-info to verify that the same static RPs are configured on all routers on the network.

4.        If the problem persists, contact H3C Support.

An RPT cannot be built or multicast source registration fails in PIM-SM

Symptom

The C-RPs cannot unicast advertisement messages to the BSR. The BSR does not advertise BSMs containing C-RP information and has no unicast route to any C-RP. An RPT cannot be correctly established, or the source-side DR cannot register the multicast source with the RP.

Solution

To resolve the problem:

1.        Use display ip routing-table to verify the following information:

?  The unicast routes to the C-RPs and the BSR are available on each router.

?  A route is available between each C-RP and the BSR.

2.        Use display pim bsr-info to verify that the BSR information exists on each router.

3.        Use display pim rp-info to verify that the RP information is correct on each router.

4.        Use display pim neighbor to verify that PIM neighboring relationship has been correctly established among the routers.

5.        If the problem persists, contact H3C Support.


Configuring MSDP

Overview

MSDP is an inter-domain multicast solution that addresses the interconnection of PIM-SM domains. It discovers multicast source information in other PIM-SM domains.

In the basic PIM-SM mode, a multicast source registers only with the RP in the local PIM-SM domain, and the multicast source information in each domain is isolated. As a result, both of the following occur:

·          The RP obtains the source information only within the local domain.

·          A multicast distribution tree is built only within the local domain to deliver multicast data locally.

MSDP enables the RPs of different PIM-SM domains to share their multicast source information. The local RP can then join the SPT rooted at the multicast source across the PIM-SM domains. This allows multicast data to be transmitted among different domains.

With MSDP peer relationships established between appropriate routers in the network, the RPs of different PIM-SM domains are interconnected with one another. These MSDP peers exchange source active (SA) messages, so that the multicast source information is shared among these domains.

MSDP is applicable only if the intra-domain multicast protocol is PIM-SM. MSDP takes effect only for the ASM model.

For more information about the concepts of DR, BSR, C-BSR, RP, C-RP, SPT, and RPT mentioned in this document, see "Configuring PIM."

How MSDP works

MSDP peers

One or more pairs of MSDP peers in the network form an MSDP interconnection map. In the map, the RPs of different PIM-SM domains interconnect in a series. An SA message from an RP is relayed to all other RPs by these MSDP peers.

Figure 36 MSDP peer locations in the network

 

As shown in Figure 36, an MSDP peer can be created on any PIM-SM router. MSDP peers created on PIM-SM routers that assume different roles function differently.

·          MSDP peers created on RPs:

○  Source-side MSDP peer—MSDP peer closest to the multicast source, such as RP 1. The source-side RP creates and sends SA messages to its remote MSDP peer to notify the MSDP peer of the locally registered multicast source information.

A source-side MSDP peer must be created on the source-side RP. Otherwise, it cannot advertise the multicast source information out of the PIM-SM domain.

○  Receiver-side MSDP peer—MSDP peer closest to the receivers, typically the receiver-side RP, such as RP 3. After receiving an SA message, the receiver-side MSDP peer resolves the multicast source information carried in the message. Then, it joins the SPT rooted at the multicast source across the PIM-SM domains. When multicast data from the multicast source arrives, the receiver-side MSDP peer forwards the data to the receivers along the RPT.

○  Intermediate MSDP peer—MSDP peer with multiple remote MSDP peers, such as RP 2. An intermediate MSDP peer forwards SA messages received from one remote MSDP peer to other remote MSDP peers. The intermediate MSDP peer acts as a relay for forwarding multicast source information.

·          MSDP peers created on common PIM-SM routers (other than RPs):

Router A and Router B are MSDP peers on common multicast routers. Such MSDP peers just forward received SA messages.

In a PIM-SM network using the BSR mechanism, the RP is dynamically elected from C-RPs. A PIM-SM network typically has multiple C-RPs to ensure network robustness. Because the RP election result is unpredictable, MSDP peering relationships must be built among all C-RPs to always keep the winning C-RP on the MSDP interconnection map. Losing C-RPs assume the role of common PIM-SM routers on this map.

Inter-domain multicast delivery through MSDP

As shown in Figure 37, an active source (Source) exists in the domain PIM-SM 1, and RP 1 has learned the existence of Source through multicast source registration. RPs in PIM-SM 2 and PIM-SM 3 also seek the location of Source so that multicast traffic from Source can be sent to their receivers. MSDP peering relationships must be established between RP 1 and RP 3 and between RP 3 and RP 2.

Figure 37 Inter-domain multicast delivery through MSDP

 

The process of implementing PIM-SM inter-domain multicast delivery by leveraging MSDP peers is as follows:

1.        The multicast source in PIM-SM 1 sends the first multicast packet to multicast group G. When DR 1 receives this multicast packet, it encapsulates the multicast data within a register message and sends the register message to RP 1. Then, RP 1 obtains information about the multicast source.

2.        As the source-side RP, RP 1 creates SA messages and periodically sends them to its MSDP peer.

An SA message contains the addresses of the multicast source (S), the multicast group (G), and the RP that has created this SA message (RP 1).

3.        On MSDP peers, each SA message undergoes an RPF check and multicast policy-based filtering. Only SA messages that have arrived along the correct path and passed the filtering are received and forwarded. This avoids delivery loops of SA messages. In addition, the MSDP mesh group mechanism can avoid SA message flooding between MSDP peers.

 

 

NOTE:

An MSDP mesh group refers to a group of MSDP peers that establish MSDP peering relationships with each other and share the same group name.

 

4.        SA messages are forwarded from one MSDP peer to another. Finally, information about the multicast source traverses all PIM-SM domains with MSDP peers (PIM-SM 2 and PIM-SM 3, in this example).

5.        After receiving the SA message, RP 2 in PIM-SM 2 examines whether any receivers for the multicast group exist in the domain.

○  If a receiver exists in the domain, the RPT for the multicast group G is maintained between RP 2 and the receivers. RP 2 creates an (S, G) entry and sends an (S, G) join message. The join message travels hop by hop toward the multicast source, and the SPT is established across the PIM-SM domains.

The subsequent multicast data flows to RP 2 along the SPT, and from RP 2 to the receiver-side DR along the RPT. After receiving the multicast data, the receiver-side DR determines whether to initiate an RPT-to-SPT switchover process based on its configuration.

○  If no receivers exist in the domain, RP 2 neither creates an (S, G) entry nor sends a join message toward the multicast source.

In inter-domain multicasting using MSDP, once an RP obtains information about a multicast source in another PIM-SM domain, it no longer relies on RPs in other PIM-SM domains. The receivers can bypass the RPs in other domains and directly join the SPT rooted at the multicast source.

Anycast RP through MSDP

PIM-SM requires only one active RP to serve each multicast group. If the active RP fails, the multicast traffic might be interrupted. The Anycast RP mechanism enables redundancy backup between two or more RPs by configuring multiple RPs with the same IP address for one multicast group. A multicast source registers with the closest RP or a receiver joins the closest RP to implement source information synchronization.

Anycast RP has the following benefits:

·          Optimal RP path—A multicast source registers with the closest RP to build an optimal SPT. A receiver joins the closest RP to build an optimal RPT.

·          Redundancy backup among RPs—When an RP fails, the RP-related multicast sources and receiver-side DRs will register with or join their closest available RPs. This achieves redundancy backup among RPs.

Anycast RP can be implemented through MSDP. In this method, you can configure multiple RPs with the same IP address for one multicast group and establish MSDP peering relationships between the RPs.

As shown in Figure 38, within a PIM-SM domain, a multicast source sends multicast data to multicast group G, and the receiver joins the multicast group.

To implement Anycast RP:

1.        Configure the same IP address (known as Anycast RP address, typically a private address) to an interface on Router A and Router B.

○  An Anycast RP address is usually configured on a logical interface, such as a loopback interface.

○  Make sure the Anycast RP address is a host address (with the subnet mask 255.255.255.255).

2.        Configure these interfaces as C-RPs.

3.        Establish an MSDP peering relationship between Router A and Router B. An MSDP peer address must be different from the Anycast RP address.
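A minimal sketch of these three steps on Router A, with hypothetical addresses (10.1.1.1 as the Anycast RP address on LoopBack0, and 192.168.100.2 as Router B's unique MSDP peer address; LoopBack1 is assumed to hold Router A's own unique address):

```
# Step 1: Assign the Anycast RP address (a /32 host address) to a loopback interface.
<RouterA> system-view
[RouterA] interface loopback 0
[RouterA-LoopBack0] ip address 10.1.1.1 32
[RouterA-LoopBack0] quit
# Step 2: Configure the loopback interface as a C-RP.
[RouterA] pim
[RouterA-pim] c-rp 10.1.1.1
[RouterA-pim] quit
# Step 3: Peer with Router B by using an address different from the Anycast RP address.
[RouterA] msdp
[RouterA-msdp] peer 192.168.100.2 connect-interface loopback 1
```

Router B mirrors this configuration with the same Anycast RP address but its own unique MSDP peer address.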

Figure 38 Intra-domain Anycast RP through MSDP

 

The operating process of Anycast RP through MSDP is as follows:

1.        After receiving the multicast data from Source, the source-side DR registers with the closest RP (RP 1).

2.        After receiving the IGMP report message from the receiver, the receiver-side DR sends a join message toward the closest RP (RP 2). Therefore, an RPT rooted at this RP is established.

3.        The RPs share the registered multicast source information through SA messages. After obtaining the multicast source information, RP 2 sends an (S, G) source-specific join message toward the source to create an SPT.

4.        When the multicast data reaches RP 2 along the SPT, the RP forwards the data along the RPT to the receiver. After receiving the multicast data, the receiver-side DR determines whether to initiate an RPT-to-SPT switchover process based on its configuration.

MSDP support for VPNs

Interfaces on the multicast routers in a VPN can set up MSDP peering relationships with each other. By exchanging SA messages between MSDP peers, multicast data can be transmitted in a VPN across different PIM-SM domains.

To support MSDP for VPNs, a multicast router that runs MSDP maintains an independent set of MSDP mechanisms for each VPN that it supports. These mechanisms include the SA message cache, peering connections, timers, the sending cache, and the cache for exchanging PIM messages.

One VPN is isolated from another, and MSDP and PIM-SM messages can be exchanged only within the same VPN.

Protocols and standards

·          RFC 3618, Multicast Source Discovery Protocol (MSDP)

·          RFC 3446, Anycast Rendezvous Point (RP) mechanism using Protocol Independent Multicast (PIM) and Multicast Source Discovery Protocol (MSDP)

MSDP configuration task list

Tasks at a glance

Configuring basic MSDP functions:

·         (Required.) Enabling MSDP

·         (Required.) Creating an MSDP peering connection

·         (Optional.) Configuring a static RPF peer

Configuring an MSDP peering connection:

·         (Optional.) Configuring the description for an MSDP peer

·         (Optional.) Configuring an MSDP mesh group

·         (Optional.) Controlling MSDP peering connections

Configuring SA message-related parameters:

·         (Optional.) Configuring SA message contents

·         (Optional.) Configuring SA request messages

·         (Optional.) Configuring SA message policies

·         (Optional.) Configuring the SA cache mechanism

 

Configuring basic MSDP functions

All the configuration tasks in this section should be performed on RPs in PIM-SM domains, and each of these RPs acts as an MSDP peer.

Configuration prerequisites

Before you configure basic MSDP functions, complete the following tasks:

·          Configure a unicast routing protocol so that all devices in the domain can interoperate at the network layer.

·          Configure PIM-SM to enable intra-domain multicast.

Enabling MSDP

Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enable IP multicast routing and enter MRIB view. | multicast routing [ vpn-instance vpn-instance-name ] | By default, IP multicast routing is disabled.
3. Return to system view. | quit | N/A
4. Enable MSDP and enter MSDP view. | msdp [ vpn-instance vpn-instance-name ] | By default, MSDP is disabled.
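Applied in order, the steps above produce a short sequence like the following (a sketch for the public network instance, without the vpn-instance option):

```
<Switch> system-view
[Switch] multicast routing
[Switch-mrib] quit
[Switch] msdp
[Switch-msdp]
```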

 

Creating an MSDP peering connection

An MSDP peering relationship is identified by an address pair (the addresses of the local MSDP peer and the remote MSDP peer). To create an MSDP peering connection, you must perform the creation operation on both devices that are a pair of MSDP peers.

If an MSDP peer and a BGP peer share the same interface, configure the same IP address for the MSDP peer and the BGP peer as a best practice.

To create an MSDP peering connection:

 

Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enter MSDP view. | msdp [ vpn-instance vpn-instance-name ] | N/A
3. Create an MSDP peering connection. | peer peer-address connect-interface interface-type interface-number | By default, MSDP peering connections are not created.
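Because a peering relationship is identified by an address pair, the configuration must be mirrored on both devices. A hedged sketch with hypothetical peer addresses (10.1.1.1 on Switch A and 10.2.2.2 on Switch B, each assigned to LoopBack0):

```
# On Switch A, specify Switch B as the remote MSDP peer.
[SwitchA] msdp
[SwitchA-msdp] peer 10.2.2.2 connect-interface loopback 0
# On Switch B, specify Switch A as the remote MSDP peer.
[SwitchB] msdp
[SwitchB-msdp] peer 10.1.1.1 connect-interface loopback 0
```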

 

Configuring a static RPF peer

Configuring static RPF peers can avoid RPF check for SA messages.

If only one MSDP peer is configured on a router, this MSDP peer acts as a static RPF peer.

To configure a static RPF peer:

 

Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enter MSDP view. | msdp [ vpn-instance vpn-instance-name ] | N/A
3. Configure a static RPF peer. | static-rpf-peer peer-address [ rp-policy ip-prefix-name ] | By default, static RPF peers are not configured.
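As a sketch (the peer address and prefix list name are hypothetical), the rp-policy option restricts the static RPF peer to SA messages whose originating RP matches the prefix list:

```
# Accept SA messages from peer 10.3.3.3 only when the originating RP is in 192.168.0.0/16.
[SwitchA] ip prefix-list rp-list permit 192.168.0.0 16 less-equal 32
[SwitchA] msdp
[SwitchA-msdp] peer 10.3.3.3 connect-interface loopback 0
[SwitchA-msdp] static-rpf-peer 10.3.3.3 rp-policy rp-list
```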

 

Configuring an MSDP peering connection

This section describes how to configure an MSDP peering connection.

Configuration prerequisites

Before you configure an MSDP peering connection, complete the following tasks:

·          Configure a unicast routing protocol so that all devices in the domain can interoperate at the network layer.

·          Configure basic MSDP functions.

Configuring the description for an MSDP peer

MSDP peer descriptions help administrators easily distinguish between different MSDP peers and better manage them.

To describe an MSDP peer:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Configure the description for an MSDP peer.

peer peer-address description text

By default, an MSDP peer is not configured with a description.
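For example, the following sketch describes peer 192.168.1.2 (the address and description text are examples):

```
[Sysname] msdp
[Sysname-msdp] peer 192.168.1.2 description CustomerA
```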

 

Configuring an MSDP mesh group

An AS might contain multiple MSDP peers. You can use the MSDP mesh group mechanism to avoid SA message flooding among these MSDP peers and to optimize the multicast traffic.

In an MSDP mesh group, when a member receives an SA message from outside the mesh group, it performs an RPF check on the message. If the message passes the check, the member forwards it to all other members in the mesh group. If a mesh group member receives an SA message from another member, it neither performs an RPF check on the message nor forwards the message to the other members.

This mechanism not only avoids SA message flooding but also simplifies the RPF check mechanism because you do not need to run BGP between these MSDP peers.

To organize multiple MSDP peers in a mesh group, assign these MSDP peers to the same mesh group. Before doing this, make sure these routers have MSDP peering connections with one another to form a full mesh.

To create an MSDP mesh group:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Assign an MSDP peer to the mesh group.

peer peer-address mesh-group name

By default, an MSDP peer does not belong to any mesh group.

If you assign an MSDP peer to multiple mesh groups, the most recent configuration takes effect.
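For example, the following sketch assigns two peers to a mesh group named group1 on the local device (the addresses and group name are examples). Repeat equivalent configuration on every member so that all members peer with one another.

```
[Sysname] msdp
[Sysname-msdp] peer 10.1.1.2 mesh-group group1
[Sysname-msdp] peer 10.1.1.3 mesh-group group1
```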

 

Controlling MSDP peering connections

MSDP peers are interconnected over TCP (port number 639). You can tear down or re-establish MSDP peering connections to control SA message exchange between the MSDP peers. When the connection between two MSDP peers is torn down, SA messages are no longer delivered between them. The MSDP peers will not attempt to re-establish the connection. The configuration information, however, remains unchanged.

A TCP connection is required when one of the following conditions exists:

·          A new MSDP peer is created.

·          A previously deactivated MSDP peering connection is reactivated.

·          A previously failed MSDP peer attempts to resume operation.

You can adjust the interval between MSDP peering connection attempts.

To enhance MSDP security, configure a password for MD5 authentication used by both MSDP peers to establish a TCP connection. If the MD5 authentication fails, the TCP connection cannot be established.

 

IMPORTANT:

The MSDP peers involved in MD5 authentication must be configured with the same authentication method and password. Otherwise, the authentication fails and the TCP connection cannot be established.

 

To control MSDP peering connections:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Tear down an MSDP peering connection.

shutdown peer-address

By default, an MSDP peering connection is active.

4.       Configure the interval between MSDP peering connection attempts.

timer retry interval

The default setting is 30 seconds.

5.       Configure MD5 authentication for both MSDP peers to establish a TCP connection.

peer peer-address password { cipher | simple } password

By default, MD5 authentication is not performed before a TCP connection is established.
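For example, the following sketch tears down the connection to one peer, sets the connection retry interval to 60 seconds, and enables MD5 authentication with a plaintext key. The address and key are examples; the same key must also be configured on the remote peer.

```
[Sysname] msdp
[Sysname-msdp] shutdown 192.168.1.2
[Sysname-msdp] timer retry 60
[Sysname-msdp] peer 192.168.1.2 password simple msdpkey123
```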

 

Configuring SA message-related parameters

This section describes how to configure SA message-related parameters.

Configuration prerequisites

Before you configure SA message delivery, complete the following tasks:

·          Configure a unicast routing protocol so that all devices in the domain can interoperate at the network layer.

·          Configure basic MSDP functions.

Configuring SA message contents

Some multicast sources send multicast data at an interval longer than the aging time of (S, G) entries. In this case, the source-side DR must encapsulate multicast data packet-by-packet in register messages and send them to the source-side RP. The source-side RP transmits the (S, G) information to the remote RP through SA messages. Then, the remote RP sends join messages to the source-side DR and builds an SPT. Because the (S, G) entries have timed out, remote receivers can never receive the multicast data from the multicast source.

To avoid this problem, you can configure the source-side RP to encapsulate multicast data in SA messages. As a result, the source-side RP can forward the multicast data in SA messages to its remote MSDP peers. After receiving the SA messages, the remote RP decapsulates the SA messages and forwards the multicast data to the receivers in the local domain along the RPT.

The MSDP peers deliver SA messages to one another. After receiving an SA message, a router performs an RPF check on the message. If the router finds that the remote RP address is the same as the local RP address, it discards the SA message.

However, in the Anycast RP application, you must configure the same IP address for the RPs in the same PIM-SM domain and configure these RPs as MSDP peers. To make sure SA messages can pass the RPF check, you must assign SA messages a logical RP address that is different from the actual RP address. A logical RP address is the address of a logical interface on the router where the RP resides.

To configure the SA message contents:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Enable multicast data encapsulation in SA messages.

encap-data-enable

By default, an SA message contains only (S, G) entries, but not the multicast data.

4.       Configure the interface address as the RP address in SA messages.

originating-rp interface-type interface-number

By default, the PIM RP address is used as the RP address in SA messages.
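For example, the following sketch enables multicast data encapsulation in SA messages and uses the address of Loopback 0 (an example interface) as the RP address in SA messages:

```
[Sysname] msdp
[Sysname-msdp] encap-data-enable
[Sysname-msdp] originating-rp loopback 0
```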

 

Configuring SA request messages

By default, after receiving a new join message, a router does not send an SA request message to any MSDP peer. Instead, it waits for the next SA message from its MSDP peer, which delays the receiver in obtaining multicast source information. To reduce this join latency, enable the router to send SA request messages immediately after it receives a new join message.

An SA request policy enables the router to filter the SA request messages received from an MSDP peer by using an ACL that specifies the multicast groups.

 

IMPORTANT:

Before you enable the router to send SA requests, make sure you disable the SA message cache mechanism.

 

To configure SA request transmission and filtering:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Enable the device to send SA request messages.

peer peer-address request-sa-enable

By default, after receiving a new join message, a device does not send an SA request message to any MSDP peer. Instead, it waits for the next SA message from its MSDP peer.

4.       Configure an SA request policy.

peer peer-address sa-request-policy [ acl acl-number ]

By default, no SA request policy exists.
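For example, the following sketch disables the SA message cache mechanism (as required before enabling SA requests), enables SA request transmission to peer 192.168.1.2, and uses basic ACL 2000 to accept only SA requests for groups in 225.1.1.0/24. All values are examples.

```
[Sysname] acl number 2000
[Sysname-acl-basic-2000] rule permit source 225.1.1.0 0.0.0.255
[Sysname-acl-basic-2000] quit
[Sysname] msdp
[Sysname-msdp] undo cache-sa-enable
[Sysname-msdp] peer 192.168.1.2 request-sa-enable
[Sysname-msdp] peer 192.168.1.2 sa-request-policy acl 2000
```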

 

Configuring SA message policies

To control the propagation of multicast source information, you can configure the following policies:

·          SA creation policy—Limits the multicast source information advertised in SA messages. This policy enables the router to advertise (S, G) entries by using an ACL that specifies the multicast sources and groups.

·          SA incoming or outgoing policy—Limits the receipt or forwarding of SA messages. This policy enables the router to receive or forward SA messages by using an ACL that specifies the multicast sources and groups.

By default, multicast data packets are encapsulated in SA messages and forwarded to MSDP peers only if their TTL values are larger than zero. You can set a lower TTL threshold for multicast data packets encapsulated in SA messages sent to an MSDP peer. Then, only multicast data packets whose TTL values are larger than or equal to the threshold are encapsulated in SA messages sent to that peer. This controls multicast data encapsulation and limits the propagation range of the SA messages.

To configure SA message policies:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Configure an SA creation policy.

import-source [ acl acl-number ]

By default, no SA creation policy exists.

4.       Configure an SA incoming or outgoing policy.

peer peer-address sa-policy { export | import } [ acl acl-number ]

By default, no SA incoming or outgoing policy exists.

5.       Set the lower TTL threshold for multicast data packets encapsulated in SA messages.

peer peer-address minimum-ttl ttl-value

The default setting is 0.
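For example, the following sketch uses advanced ACL 3000 both as the SA creation policy and as the SA outgoing policy for peer 192.168.1.2, and raises the TTL threshold for that peer to 10. The ACL number, addresses, and threshold are examples.

```
[Sysname] acl number 3000
[Sysname-acl-adv-3000] rule permit ip source 10.110.3.0 0.0.0.255 destination 225.1.1.0 0.0.0.255
[Sysname-acl-adv-3000] quit
[Sysname] msdp
[Sysname-msdp] import-source acl 3000
[Sysname-msdp] peer 192.168.1.2 sa-policy export acl 3000
[Sysname-msdp] peer 192.168.1.2 minimum-ttl 10
```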

 

Configuring the SA cache mechanism

The SA cache mechanism enables the router to locally cache the (S, G) entries contained in SA messages. It reduces the time for obtaining multicast source information but increases memory use.

With the SA message cache mechanism enabled, when the router receives a new (*, G) join message, it searches its SA message cache first.

·          If no matching (S, G) entry is found, the router waits for the SA message that its MSDP peer sends in the next cycle.

·          If a matching (S, G) entry is found in the cache, the router joins the SPT rooted at S.

To protect the router against DoS attacks, you can set a limit on the number of (S, G) entries that the router can cache.

To configure the SA message cache:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter MSDP view.

msdp [ vpn-instance vpn-instance-name ]

N/A

3.       Enable the SA message cache mechanism.

cache-sa-enable

By default, the SA message cache mechanism is enabled. The device caches the (S, G) entries contained in the received SA messages.

4.       Configure the maximum number of (S, G) entries learned from the specified MSDP peer that the router can cache.

peer peer-address sa-cache-maximum sa-limit

The default setting is 4294967295.
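For example, the following sketch keeps the SA message cache enabled and limits the cache to 4096 (S, G) entries learned from peer 192.168.1.2 (the address and limit are examples):

```
[Sysname] msdp
[Sysname-msdp] cache-sa-enable
[Sysname-msdp] peer 192.168.1.2 sa-cache-maximum 4096
```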

 

Displaying and maintaining MSDP

Execute display commands in any view and reset commands in user view.

 

Task

Command

Display brief information about MSDP peers.

display msdp [ vpn-instance vpn-instance-name ] brief [ state { connect | disabled | established | listen | shutdown } ]

Display detailed status of MSDP peers.

display msdp [ vpn-instance vpn-instance-name ] peer-status [ peer-address ]

Display (S, G) entries in the SA message cache.

display msdp [ vpn-instance vpn-instance-name ] sa-cache [ group-address | source-address | as-number ] *

Display the number of (S, G) entries in the SA message cache.

display msdp [ vpn-instance vpn-instance-name ] sa-count [ as-number ]

Reset the TCP connection with an MSDP peer and clear statistics for the MSDP peer.

reset msdp [ vpn-instance vpn-instance-name ] peer [ peer-address ]

Clear (S, G) entries in the SA message cache.

reset msdp [ vpn-instance vpn-instance-name ] sa-cache [ group-address ]

Clear statistics for an MSDP peer without resetting the TCP connection with the MSDP peer.

reset msdp [ vpn-instance vpn-instance-name ] statistics [ peer-address ]

 

MSDP configuration examples

This section provides examples of configuring MSDP on switches.

PIM-SM inter-domain multicast configuration

Network requirements

As shown in Figure 39, OSPF runs within AS 100 and AS 200 and BGP runs between them. Each PIM-SM domain has at least one multicast source or receiver.

Set up MSDP peering relationships between the RPs in the PIM-SM domains to share multicast source information among the PIM-SM domains.

Figure 39 Network diagram

 

Table 10 Interface and IP address assignment

Device

Interface

IP address

Device

Interface

IP address

Switch A

Vlan-int103

10.110.1.2/24

Switch D

Vlan-int104

10.110.4.2/24

Switch A

Vlan-int100

10.110.2.1/24

Switch D

Vlan-int300

10.110.5.1/24

Switch A

Vlan-int200

10.110.3.1/24

Switch E

Vlan-int105

10.110.6.1/24

Switch B

Vlan-int103

10.110.1.1/24

Switch E

Vlan-int102

192.168.3.2/24

Switch B

Vlan-int101

192.168.1.1/24

Switch E

Loop0

3.3.3.3/32

Switch B

Loop0

1.1.1.1/32

Switch F

Vlan-int105

10.110.6.2/24

Switch C

Vlan-int104

10.110.4.1/24

Switch F

Vlan-int400

10.110.7.1/24

Switch C

Vlan-int102

192.168.3.1/24

Source 1

10.110.2.100/24

Switch C

Vlan-int101

192.168.1.2/24

Source 2

10.110.5.100/24

Switch C

Loop0

2.2.2.2/32

 

 

 

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 39. (Details not shown.)

2.        Configure OSPF on all the switches in the ASs. (Details not shown.)

3.        Enable IP multicast routing, enable PIM-SM on each interface, and configure a PIM-SM domain border:

# On Switch A, enable IP multicast routing.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable PIM-SM on VLAN-interface 103 and VLAN-interface 100.

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim sm

[SwitchA-Vlan-interface103] quit

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] pim sm

[SwitchA-Vlan-interface100] quit

# Enable IGMP on the receiver-side interface VLAN-interface 200.

[SwitchA] interface vlan-interface 200

[SwitchA-Vlan-interface200] igmp enable

[SwitchA-Vlan-interface200] quit

# Enable IP multicast routing and PIM-SM on Switch B, Switch C, Switch D, Switch E, and Switch F in the same way Switch A is configured. (Details not shown.)

# Configure a PIM domain border on Switch B.

[SwitchB] interface vlan-interface 101

[SwitchB-Vlan-interface101] pim bsr-boundary

[SwitchB-Vlan-interface101] quit

# Configure a PIM domain border on Switch C and Switch E in the same way Switch B is configured. (Details not shown.)

4.        Configure C-BSRs and C-RPs:

# Configure Loopback 0 as a C-BSR and a C-RP on Switch B.

[SwitchB] pim

[SwitchB-pim] c-bsr 1.1.1.1

[SwitchB-pim] c-rp 1.1.1.1

[SwitchB-pim] quit

# Configure C-BSRs and C-RPs on Switch C and Switch E in the same way Switch B is configured. (Details not shown.)

5.        Configure BGP for mutual route redistribution between BGP and OSPF:

# On Switch B, configure an eBGP peer, and redistribute OSPF routes.

[SwitchB] bgp 100

[SwitchB-bgp] router-id 1.1.1.1

[SwitchB-bgp] peer 192.168.1.2 as-number 200

[SwitchB-bgp] address-family ipv4 unicast

[SwitchB-bgp-ipv4] import-route ospf 1

[SwitchB-bgp-ipv4] peer 192.168.1.2 enable

[SwitchB-bgp-ipv4] quit

# On Switch C, configure an eBGP peer, and redistribute OSPF routes.

[SwitchC] bgp 200

[SwitchC-bgp] router-id 2.2.2.2

[SwitchC-bgp] peer 192.168.1.1 as-number 100

[SwitchC-bgp] address-family ipv4 unicast

[SwitchC-bgp-ipv4] import-route ospf 1

[SwitchC-bgp-ipv4] peer 192.168.1.1 enable

[SwitchC-bgp-ipv4] quit

# Redistribute BGP routes into OSPF on Switch B.

[SwitchB] ospf 1

[SwitchB-ospf-1] import-route bgp

[SwitchB-ospf-1] quit

# Redistribute BGP routes into OSPF on Switch C.

[SwitchC] ospf 1

[SwitchC-ospf-1] import-route bgp

[SwitchC-ospf-1] quit

6.        Configure MSDP peers:

# Configure an MSDP peer on Switch B.

[SwitchB] msdp

[SwitchB-msdp] peer 192.168.1.2 connect-interface vlan-interface 101

[SwitchB-msdp] quit

# Configure an MSDP peer on Switch C.

[SwitchC] msdp

[SwitchC-msdp] peer 192.168.1.1 connect-interface vlan-interface 101

[SwitchC-msdp] peer 192.168.3.2 connect-interface vlan-interface 102

[SwitchC-msdp] quit

# Configure MSDP peers on Switch E.

[SwitchE] msdp

[SwitchE-msdp] peer 192.168.3.1 connect-interface vlan-interface 102

[SwitchE-msdp] quit

Verifying the configuration

# Display information about BGP peer groups on Switch B.

[SwitchB] display bgp peer ipv4

 

 BGP local router ID: 1.1.1.1

 Local AS number: 100

 Total number of peers: 1                  Peers in established state: 1

 

  Peer                    AS  MsgRcvd  MsgSent OutQ PrefRcv Up/Down  State

 

  192.168.1.2             200 24       21      0    6       00:20:07 Established

# Display information about BGP peer groups on Switch C.

[SwitchC] display bgp peer ipv4

 

 BGP local router ID: 2.2.2.2

 Local AS number: 200

 Total number of peers: 1                  Peers in established state: 1

 

  Peer                    AS  MsgRcvd  MsgSent OutQ PrefRcv Up/Down  State

 

  192.168.1.1             100 18       16      0    1       00:20:07 Established

# Display the BGP routing table on Switch C.

[SwitchC] display bgp routing-table ipv4

 

 Total number of routes: 5

 

 BGP local router ID is 2.2.2.2

 Status codes: * - valid, > - best, d - dampened, h - history,

               s - suppressed, S - stale, i - internal, e - external

 

     Network            NextHop         MED        LocPrf     PrefVal Path/Ogn

 

* >  1.1.1.1/32         192.168.1.1     0                     0       100?

* >i 2.2.2.2/32         0.0.0.0         0                     0       ?

* >  192.168.1.0        0.0.0.0         0                     0       ?

* >  192.168.1.1/32     0.0.0.0         0                     0       ?

* >  192.168.1.2/32     0.0.0.0         0                     0       ?

When Source 1 in PIM-SM 1 and Source 2 in PIM-SM 2 send multicast information, receivers in PIM-SM 1 and PIM-SM 3 can receive the multicast data.

# Display brief information about MSDP peer groups on Switch B.

[SwitchB] display msdp brief

Configured   Established  Listen       Connect      Shutdown     Disabled

1            1            0            0            0            0

 

Peer address    State       Up/Down time    AS         SA count   Reset count

192.168.1.2     Established 00:12:57        200        13         0

# Display brief information about MSDP peer groups on Switch C.

[SwitchC] display msdp brief

Configured   Established  Listen       Connect      Shutdown     Disabled

2            2            0            0            0            0

 

Peer address    State       Up/Down time    AS         SA count   Reset count

192.168.3.2     Established 01:43:57        ?          8          0

192.168.1.1     Established 01:43:57        ?          13         0

# Display brief information about MSDP peer groups on Switch E.

[SwitchE] display msdp brief

Configured   Established  Listen       Connect      Shutdown     Disabled

1            1            0            0            0            0

 

Peer address    State       Up/Down time    AS         SA count   Reset count

192.168.3.1     Established 01:07:57        200        8          0

# Display detailed MSDP peer information on Switch B.

[SwitchB] display msdp peer-status

MSDP Peer 192.168.1.2; AS 200

 Description:

 Information about connection status:

   State: Established

   Up/down time: 00:15:47

   Resets: 0

   Connection interface: Vlan-interface101 (192.168.1.1)

   Received/sent messages: 16/16

   Discarded input messages: 0

   Discarded output messages: 0

   Elapsed time since last connection or counters clear: 00:17:40

   Mesh group peer joined: momo

   Last disconnect reason: Hold timer expired with truncated message

   Truncated packet: 5 bytes in buffer, type: 1, length: 20, without packet time: 75s

 Information about (Source, Group)-based SA filtering policy:

   Import policy: None

   Export policy: None

 Information about SA-Requests:

   Policy to accept SA-Requests: None

   Sending SA-Requests status: Disable

 Minimum TTL to forward SA with encapsulated data: 0

 SAs learned from this peer: 0, SA cache maximum for the peer: 4294967295

 Input queue size: 0, Output queue size: 0

 Counters for MSDP messages:

   RPF check failure: 0

   Incoming/outgoing SA: 0/0

   Incoming/outgoing SA-Request: 0/0

   Incoming/outgoing SA-Response: 0/0

   Incoming/outgoing Keepalive: 867/867

   Incoming/outgoing Notification: 0/0

   Incoming/outgoing Traceroutes in progress: 0/0

   Incoming/outgoing Traceroute reply: 0/0

   Incoming/outgoing Unknown: 0/0

   Incoming/outgoing data packet: 0/0

Anycast RP configuration

Network requirements

As shown in Figure 40, OSPF runs within the domain to provide unicast routes.

Configure the Anycast RP application so that the receiver-side DRs and the source-side DRs initiate join processes toward their topologically nearest RPs.

The router ID of Switch B is 1.1.1.1, and the router ID of Switch D is 2.2.2.2. Set up an MSDP peering relationship between Switch B and Switch D.

Figure 40 Network diagram

 

Table 11 Interface and IP address assignment

Device

Interface

IP address

Device

Interface

IP address

Source 1

10.110.5.100/24

Switch C

Vlan-int101

192.168.1.2/24

Source 2

10.110.6.100/24

Switch C

Vlan-int102

192.168.2.2/24

Switch A

Vlan-int300

10.110.5.1/24

Switch D

Vlan-int200

10.110.3.1/24

Switch A

Vlan-int103

10.110.2.2/24

Switch D

Vlan-int104

10.110.4.1/24

Switch B

Vlan-int100

10.110.1.1/24

Switch D

Vlan-int102

192.168.2.1/24

Switch B

Vlan-int103

10.110.2.1/24

Switch D

Loop0

2.2.2.2/32

Switch B

Vlan-int101

192.168.1.1/24

Switch D

Loop10

4.4.4.4/32

Switch B

Loop0

1.1.1.1/32

Switch D

Loop20

10.1.1.1/32

Switch B

Loop10

3.3.3.3/32

Switch E

Vlan-int400

10.110.6.1/24

Switch B

Loop20

10.1.1.1/32

Switch E

Vlan-int104

10.110.4.2/24

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 40. (Details not shown.)

2.        Configure OSPF on the switches in the PIM-SM domain. (Details not shown.)

3.        Enable IP multicast routing, IGMP, and PIM-SM:

# On Switch B, enable IP multicast routing.

<SwitchB> system-view

[SwitchB] multicast routing

[SwitchB-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 100.

[SwitchB] interface vlan-interface 100

[SwitchB-Vlan-interface100] igmp enable

[SwitchB-Vlan-interface100] quit

# Enable PIM-SM on the other interfaces.

[SwitchB] interface vlan-interface 103

[SwitchB-Vlan-interface103] pim sm

[SwitchB-Vlan-interface103] quit

[SwitchB] interface Vlan-interface 101

[SwitchB-Vlan-interface101] pim sm

[SwitchB-Vlan-interface101] quit

[SwitchB] interface loopback 0

[SwitchB-LoopBack0] pim sm

[SwitchB-LoopBack0] quit

[SwitchB] interface loopback 10

[SwitchB-LoopBack10] pim sm

[SwitchB-LoopBack10] quit

[SwitchB] interface loopback 20

[SwitchB-LoopBack20] pim sm

[SwitchB-LoopBack20] quit

# Enable IP multicast routing, IGMP, and PIM-SM on Switch A, Switch C, Switch D, and Switch E in the same way Switch B is configured. (Details not shown.)

4.        Configure C-BSRs and C-RPs:

# Configure Loopback 10 as a C-BSR and Loopback 20 as a C-RP on Switch B.

[SwitchB] pim

[SwitchB-pim] c-bsr 3.3.3.3

[SwitchB-pim] c-rp 10.1.1.1

[SwitchB-pim] quit

# Configure a C-BSR and a C-RP on Switch D in the same way Switch B is configured. (Details not shown.)

5.        Configure MSDP peers:

# Configure an MSDP peer on Loopback 0 of Switch B.

[SwitchB] msdp

[SwitchB-msdp] originating-rp loopback 0

[SwitchB-msdp] peer 2.2.2.2 connect-interface loopback 0

[SwitchB-msdp] quit

# Configure an MSDP peer on Loopback 0 of Switch D.

[SwitchD] msdp

[SwitchD-msdp] originating-rp loopback 0

[SwitchD-msdp] peer 1.1.1.1 connect-interface loopback 0

[SwitchD-msdp] quit

Verifying the configuration

1.        Verify that the MSDP peer configurations are correct.

# Display brief information about MSDP peers on Switch B.

[SwitchB] display msdp brief

Configured   Established  Listen       Connect      Shutdown     Disabled

1            1            0            0            0            0

 

Peer address    State       Up/Down time    AS         SA count   Reset count

2.2.2.2         Established 00:10:57        ?          0          0

# Display brief information about MSDP peers on Switch D.

[SwitchD] display msdp brief

Configured   Established  Listen       Connect      Shutdown     Disabled

1            1            0            0            0            0

 

Peer address    State       Up/Down time    AS         SA count   Reset count

1.1.1.1         Established 00:10:57        ?          0          0

2.        Verify that Switch B acts as the RP for Source 1 and Host A.

# Send an IGMP report from Host A to join the multicast group 225.1.1.1. (Details not shown.)

# Send multicast data from Source 1 to the multicast group. (Details not shown.)

# Display the PIM routing table on Switch B.

[SwitchB] display pim routing-table

 Total 1 (*, G) entry; 1 (S, G) entry

 

 (*, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: WC

     UpTime: 00:15:04

     Upstream interface: Register

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: igmp, UpTime: 00:15:04, Expires: -

 

 (10.110.5.100, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: SPT 2MSDP ACT

     UpTime: 00:46:28

     Upstream interface: Vlan-interface103

         Upstream neighbor: 10.110.2.2

         RPF prime neighbor: 10.110.2.2

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: pim-sm, UpTime:  - , Expires:  -

# Display the PIM routing table on Switch D.

[SwitchD] display pim routing-table

No information is output on Switch D.

3.        Verify that Switch D acts as the RP for Source 2 and Host B.

# Send an IGMP leave message and an IGMP report to the multicast group 225.1.1.1 from Host A and Host B, respectively. (Details not shown.)

# Send multicast data from Source 2 to the multicast group. (Details not shown.)

# Display the PIM routing table on Switch B.

[SwitchB] display pim routing-table

No information is output on Switch B.

# Display the PIM routing table on Switch D.

[SwitchD] display pim routing-table

 Total 1 (*, G) entry; 1 (S, G) entry

 

 (*, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: WC

     UpTime: 00:12:07

     Upstream interface: Register

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface200

             Protocol: igmp, UpTime: 00:12:07, Expires: -

 

 (10.110.6.100, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: SPT 2MSDP ACT

     UpTime: 00:40:22

     Upstream interface: Vlan-interface104

         Upstream neighbor: 10.110.4.2

         RPF prime neighbor: 10.110.4.2

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface200

             Protocol: pim-sm, UpTime:  - , Expires:  -

SA message filtering configuration

Network requirements

As shown in Figure 41, OSPF runs within and among the PIM-SM domains to provide unicast routing.

Set up an MSDP peering relationship between Switch A and Switch C and between Switch C and Switch D.

Source 1 sends multicast data to multicast groups 225.1.1.0/30 and 226.1.1.0/30, and Source 2 sends multicast data to the multicast group 227.1.1.0/30.

Configure SA message policies to meet the following requirements:

·          Host A and Host B receive the multicast data only addressed to multicast groups 225.1.1.0/30 and 226.1.1.0/30.

·          Host C receives the multicast data only addressed to multicast groups 226.1.1.0/30 and 227.1.1.0/30.

Figure 41 Network diagram

 

Table 12 Interface and IP address assignment

Device

Interface

IP address

Device

Interface

IP address

Source 1

10.110.3.100/24

Switch C

Vlan-int300

10.110.4.1/24

Source 2

10.110.6.100/24

Switch C

Vlan-int104

10.110.5.1/24

Switch A

Vlan-int100

10.110.1.1/24

Switch C

Vlan-int101

192.168.1.2/24

Switch A

Vlan-int102

10.110.2.1/24

Switch C

Vlan-int103

192.168.2.2/24

Switch A

Vlan-int101

192.168.1.1/24

Switch C

Loop0

2.2.2.2/32

Switch A

Loop0

1.1.1.1/32

Switch D

Vlan-int400

10.110.6.1/24

Switch B

Vlan-int200

10.110.3.1/24

Switch D

Vlan-int500

10.110.7.1/24

Switch B

Vlan-int102

10.110.2.2/24

Switch D

Vlan-int104

10.110.5.2/24

Switch B

Vlan-int103

192.168.2.1/24

Switch D

Loop0

3.3.3.3/32

 

Configuration procedure

1.        Assign an IP address and subnet mask to each interface according to Figure 41. (Details not shown.)

2.        Configure OSPF on the switches in the PIM-SM domains. (Details not shown.)

3.        Enable IP multicast routing, IGMP, and PIM-SM, and configure a PIM domain border:

# On Switch A, enable IP multicast routing.

<SwitchA> system-view

[SwitchA] multicast routing

[SwitchA-mrib] quit

# Enable IGMP on the receiver-side interface VLAN-interface 100.

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] quit

# Enable PIM-SM on the other interfaces.

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim sm

[SwitchA-Vlan-interface101] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim sm

[SwitchA-Vlan-interface102] quit

[SwitchA] interface loopback 0

[SwitchA-LoopBack0] pim sm

[SwitchA-LoopBack0] quit

# Enable IP multicast routing, IGMP, and PIM-SM on Switch B, Switch C, and Switch D in the same way Switch A is configured. (Details not shown.)

# Configure a PIM domain border on Switch C.

[SwitchC] interface vlan-interface 101

[SwitchC-Vlan-interface101] pim bsr-boundary

[SwitchC-Vlan-interface101] quit

[SwitchC] interface vlan-interface 103

[SwitchC-Vlan-interface103] pim bsr-boundary

[SwitchC-Vlan-interface103] quit

[SwitchC] interface vlan-interface 104

[SwitchC-Vlan-interface104] pim bsr-boundary

[SwitchC-Vlan-interface104] quit

# Configure PIM domain borders on Switch A, Switch B, and Switch D in the same way Switch C is configured. (Details not shown.)

4.        Configure C-BSRs and C-RPs:

# Configure Loopback 0 as a C-BSR and a C-RP on Switch A.

[SwitchA] pim

[SwitchA-pim] c-bsr 1.1.1.1

[SwitchA-pim] c-rp 1.1.1.1

[SwitchA-pim] quit

# Configure C-BSRs and C-RPs on Switch C and Switch D in the same way Switch A is configured. (Details not shown.)

5.        Configure MSDP peers:

# Configure an MSDP peer on Switch A.

[SwitchA] msdp

[SwitchA-msdp] peer 192.168.1.2 connect-interface vlan-interface 101

[SwitchA-msdp] quit

# Configure MSDP peers on Switch C.

[SwitchC] msdp

[SwitchC-msdp] peer 192.168.1.1 connect-interface vlan-interface 101

[SwitchC-msdp] peer 10.110.5.2 connect-interface vlan-interface 104

[SwitchC-msdp] quit

# Configure an MSDP peer on Switch D.

[SwitchD] msdp

[SwitchD-msdp] peer 10.110.5.1 connect-interface vlan-interface 104

[SwitchD-msdp] quit

6.        Configure SA message policies:

# Configure an SA outgoing policy on Switch C so that Switch C does not forward SA messages for (Source 1, 225.1.1.0/30) to Switch D.

[SwitchC] acl number 3001

[SwitchC-acl-adv-3001] rule deny ip source 10.110.3.100 0 destination 225.1.1.0 0.0.0.3

[SwitchC-acl-adv-3001] rule permit ip source any destination any

[SwitchC-acl-adv-3001] quit

[SwitchC] msdp

[SwitchC-msdp] peer 10.110.5.2 sa-policy export acl 3001

[SwitchC-msdp] quit

# Configure an SA creation policy on Switch D so that Switch D will not create SA messages for Source 2.

[SwitchD] acl number 2001

[SwitchD-acl-basic-2001] rule deny source 10.110.6.100 0

[SwitchD-acl-basic-2001] quit

[SwitchD] msdp

[SwitchD-msdp] import-source acl 2001

[SwitchD-msdp] quit

Verifying the configuration

# Display the (S, G) entries in the SA message cache on Switch C.

[SwitchC] display msdp sa-cache

 MSDP Total Source-Active Cache - 8 entries

 Matched 8 entries

 

Source        Group          Origin RP       Pro  AS     Uptime   Expires

10.110.3.100  225.1.1.0      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  225.1.1.1      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  225.1.1.2      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  225.1.1.3      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  226.1.1.0      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  226.1.1.1      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  226.1.1.2      1.1.1.1         ?    ?      02:03:30 00:05:31

10.110.3.100  226.1.1.3      1.1.1.1         ?    ?      02:03:30 00:05:31

# Display the (S, G) entries in the SA message cache on Switch D.

[SwitchD] display msdp sa-cache

 MSDP Total Source-Active Cache - 4 entries

 Matched  4 entries

 

Source        Group          Origin RP       Pro  AS     Uptime   Expires

10.110.3.100  226.1.1.0      1.1.1.1         ?    ?      00:32:53 00:05:07

10.110.3.100  226.1.1.1      1.1.1.1         ?    ?      00:32:53 00:05:07

10.110.3.100  226.1.1.2      1.1.1.1         ?    ?      00:32:53 00:05:07

10.110.3.100  226.1.1.3      1.1.1.1         ?    ?      00:32:53 00:05:07

The output shows that the SA cache on Switch D does not contain entries for (10.110.3.100, 225.1.1.0/30), because the SA accepting and forwarding policy on Switch C filters out SA messages for these groups.

Troubleshooting MSDP

This section describes common MSDP problems and how to troubleshoot them.

MSDP peers stay in disabled state

Symptom

The configured MSDP peers stay in disabled state.

Solution

To resolve the problem:

1.        Use the display ip routing-table command to verify that the unicast route between the routers is reachable.

2.        Verify that a unicast route is available between the two routers that will become MSDP peers to each other.

3.        Use the display current-configuration command to verify that the address of the local connect interface matches the MSDP peer address configured on the remote router.

4.        If the problem persists, contact H3C Support.
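
For example, you can check the MSDP peer state and compare the locally configured connect-interface address with the peer address configured on the remote router. These commands are illustrative; the exact output depends on the software version:

[SwitchA] display msdp brief

[SwitchA] display current-configuration | include peer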

No SA entries exist in the router's SA message cache

Symptom

MSDP fails to send (S, G) entries through SA messages.

Solution

To resolve the problem:

1.        Use the display ip routing-table command to verify that the unicast route between the routers is reachable.

2.        Verify that a unicast route is available between the two routers that will become MSDP peers to each other.

3.        Verify the configuration of the import-source command and its acl-number argument, and make sure the ACL does not filter out the desired (S, G) entries.

4.        If the problem persists, contact H3C Support.
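
For example, on Switch D in the preceding configuration, you can verify that ACL 2001 contains the intended rules and then re-examine the SA cache (commands shown for illustration):

[SwitchD] display acl 2001

[SwitchD] display msdp sa-cache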

No exchange of locally registered (S, G) entries between RPs

Symptom

RPs fail to exchange their locally registered (S, G) entries with one another in the Anycast RP application.

Solution

To resolve the problem:

1.        Use the display ip routing-table command to verify that the unicast route between the routers is reachable.

2.        Verify that a unicast route is available between the two routers that will establish an MSDP peering relationship.

3.        Verify the configuration of the originating-rp command. In the Anycast RP application environment, use the originating-rp command to set the RP address in SA messages to the address of a unique local interface rather than the Anycast RP address.

4.        Verify that the C-BSR address is different from the Anycast RP address.

5.        If the problem persists, contact H3C Support.
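
A minimal sketch of step 3, assuming Loopback 0 is a unique local interface on Switch A (the interface choice is illustrative):

[SwitchA] msdp

[SwitchA-msdp] originating-rp loopback 0

[SwitchA-msdp] quit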



A

abnormal multicast data termination

IP multicast PIM, 99

ACL

IP multicast IGMP snooping policy configuration, 25

address

Ethernet multicast MAC, 8

IP multicast, 5

adjusting

IP multicast IGMP performance adjustment, 56

administrative scoping

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM domain divisions, 67

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SM zone relationships, 67

IP multicast PIM-SM zones, 67

Anycast

MSDP Anycast RP configuration, 116

MSDP RP, 103

troubleshooting MSDP RP entry exchange, 124

application

IP multicast data distribution, 4

architecture

IP multicast network, 5

ASM

IP multicast model, 5

assert

IP multicast PIM-DM, 62

IP multicast PIM-SM, 67

B

BFD

IP multicast PIM enable, 83

bootstrap router. See BSR

border

IP multicast PIM domain border configuration, 75

IP multicast PIM-SM zone border router, 67

boundary

multicast forwarding, 42

broadcast

transmission technique, 2

BSM semantic fragmentation

IP multicast PIM-SM, 76

BSR

IP multicast PIM-SM administrative scoping zones, 67

IP multicast PIM-SM BSR configuration, 75

IP multicast PIM-SM C-BSR configuration, 75

IP multicast PIM-SM RP discovery, 64

C

cache

MSDP SA message, 110

troubleshooting MSDP SA message cache entries, 124

candidate

bootstrap router. See C-BSR

RP. See C-RP

C-BSR

IP multicast PIM-SM configuration, 75

IP multicast PIM-SM RP discovery, 64

changing

multicast routing RPF route, 44

check

multicast RPF mechanism, 37

configuring

IGMP snooping general query/response parameters, 22, 23

IGMP snooping general query/response parameters (global), 23

IGMP snooping message parameters, 23

IGMP snooping message source IP address, 24

IGMP snooping querier, 22, 33

IGMP snooping simulated member host, 21

IP multicast IGMP, 50, 54, 57

IP multicast IGMP basic features, 54

IP multicast IGMP multicast group policy, 55

IP multicast IGMP performance adjustment, 56

IP multicast IGMP snooping, 12, 16, 29

IP multicast IGMP snooping basic features, 16

IP multicast IGMP snooping fast leave processing, 21

IP multicast IGMP snooping group policy, 29

IP multicast IGMP snooping max number multicast groups on port, 27

IP multicast IGMP snooping multicast group policy, 25

IP multicast IGMP snooping multicast group policy globally, 25

IP multicast IGMP snooping multicast group policy on port, 25

IP multicast IGMP snooping multicast group replacement (port), 27

IP multicast IGMP snooping multicast group replacement globally, 27

IP multicast IGMP snooping multicast source port filtering, 25

IP multicast IGMP snooping policy, 25

IP multicast IGMP snooping port feature, 19

IP multicast IGMP snooping static port, 20, 31

IP multicast IGMP static member interface, 55

IP multicast PIM, 60, 84

IP multicast PIM common features, 78

IP multicast PIM common timer globally, 82

IP multicast PIM common timer on interface, 82

IP multicast PIM common timers, 81

IP multicast PIM domain border, 75

IP multicast PIM hello message option globally, 80

IP multicast PIM hello message option on interface, 81

IP multicast PIM hello message options, 80

IP multicast PIM hello policy, 79

IP multicast PIM multicast source policy, 79

IP multicast PIM-DM, 70, 84

IP multicast PIM-DM graft retry timer, 72

IP multicast PIM-DM state-refresh parameter, 71

IP multicast PIM-SM, 72

IP multicast PIM-SM admin-scoped zone, 90

IP multicast PIM-SM BSR, 75

IP multicast PIM-SM C-BSR, 75

IP multicast PIM-SM C-RP, 74

IP multicast PIM-SM multicast source registration, 76

IP multicast PIM-SM non-scoped zone, 87

IP multicast PIM-SM RP, 73

IP multicast PIM-SM SPT switchover, 77

IP multicast PIM-SM static RP, 73

IP multicast PIM-SSM, 77, 95

IP multicast PIM-SSM group range, 78

MSDP, 101, 105, 111

MSDP Anycast RP, 116

MSDP basics, 105

MSDP mesh group, 107

MSDP peer description, 106

MSDP peering connection, 106

MSDP PIM-SM inter-domain multicast configuration, 111

MSDP RPF static peer, 106

MSDP SA message cache, 110

MSDP SA message content, 108

MSDP SA message filtering, 120

MSDP SA message policy, 109

MSDP SA message-related parameters, 108

MSDP SA request message, 109

multicast forwarding, 37, 44

multicast forwarding boundary, 42

multicast routing, 37, 40, 41, 44

multicast routing load splitting, 42

multicast routing longest prefix match principle, 41

multicast routing MAC address static entry, 42

multicast source port filtering globally, 26

multicast source port filtering on port, 26

multicast static route, 41

connecting

MSDP peering connection, 106

MSDP peering connection control, 107

controlling

IP multicast IGMPv3 host control capability, 52

MSDP peering connection, 107

creating

MSDP peering connection, 106

multicast routing RPF route, 46

C-RP

IP multicast PIM-SM configuration, 74

IP multicast PIM-SM RP discovery, 64

D

describing

MSDP peer description, 106

device

IP multicast IGMP configuration, 57

IP multicast IGMPv3 host control, 52

IP multicast PIM configuration, 84

IP multicast PIM-DM configuration, 84

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM configuration, 95

multicast forwarding configuration, 44

multicast routing configuration, 44

multicast routing RPF route change, 44

multicast routing RPF route creation, 46

disabling

IP multicast PIM-SM BSM semantic fragmentation, 76

discovering

IP multicast PIM-DM neighbor discovery, 60

IP multicast PIM-SM neighbor discovery, 63

IP multicast PIM-SM RP discovery, 64

IP multicast PIM-SSM neighbor discovery, 69

displaying

IP multicast IGMP, 56

IP multicast IGMP snooping, 28

IP multicast PIM, 84

MSDP, 111

multicast forwarding, 43

multicast routing, 43

domain

IP multicast PIM domain border, 75

IP multicast PIM-SM administrative scoping, 67

MSDP configuration, 101

MSDP inter-domain multicast delivery, 102

MSDP PIM-SM inter-domain multicast configuration, 111

DR

IP multicast PIM hello message DR_Priority, 80

IP multicast PIM/BFD enable, 83

IP multicast PIM-SM DR election, 63

IP multicast PIM-SM RPT building, 65

IP multicast PIM-SM SPT switchover configuration, 77

IP multicast PIM-SSM election, 69

PIM passive mode enable, 83

dropping

IP multicast IGMP snooping unknown multicast data, 26

dynamic

IP multicast IGMP snooping dynamic port, 13

IP multicast IGMP snooping dynamic port aging timer, 19

E

electing

IP multicast IGMPv2 querier election, 51

enabling

IGMP snooping (IGMP-snooping view), 17

IGMP snooping (VLAN view), 17

IGMP snooping querier, 22

IP multicast IGMP, 54

IP multicast IGMP fast leave processing, 56

IP multicast IGMP snooping, 16

IP multicast IGMP snooping drop unknown multicast data, 26

IP multicast IGMP snooping fast-leave processing globally, 21

IP multicast IGMP snooping fast-leave processing on port, 21

IP multicast IGMP snooping multicast group replacement, 27

IP multicast IGMP snooping report suppression, 26

IP multicast PIM/BFD, 83

IP multicast PIM-DM, 70

IP multicast PIM-DM state-refresh feature, 71

IP multicast PIM-SM, 73

IP multicast PIM-SSM, 77

IP multicast routing, 41

MSDP, 105

PIM passive mode, 83

PIM-SM Auto-RP listening, 74

Ethernet

IP multicast MAC address, 8

IP multicast overview, 1

IP multicast PIM configuration, 60, 84

IP multicast PIM-DM configuration, 84

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM configuration, 95

F

fast leave processing

IP multicast IGMP, 56

IP multicast IGMP snooping, 21

filtering

IP multicast IGMP snooping drop unknown multicast data, 26

IP multicast IGMP snooping multicast source port filtering, 25

IP multicast PIM hello policy, 79

IP multicast PIM multicast source policy, 79

MSDP SA message filtering configuration, 120

flooding

IP multicast PIM-DM SPT building, 60

forwarding

IGMP snooping last member query interval, 18

IP multicast IGMP snooping max number forwarding entries, 18

IP multicast packets, 11

IP multicast PIM configuration, 60, 84

IP multicast PIM VPN support, 70

IP multicast PIM-DM, 60

IP multicast PIM-DM configuration, 84

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM configuration, 95

multicast forwarding. See multicast forwarding

G

general query

IP multicast IGMP snooping, 14

global-scoped zone

IP multicast PIM-SM admin-scoped/global-scoped zone relationship, 67

IP multicast PIM-SM zone border router, 67

graft

IP multicast PIM-DM, 61

IP multicast PIM-DM graft retry timer, 72

group

IP multicast IGMP snooping multicast group policy, 25

IP multicast PIM-SSM group range configuration, 78

MSDP mesh group, 107

H

hello

IP multicast PIM common timer configuration (global), 82

IP multicast PIM common timer configuration (on interface), 82

IP multicast PIM hello message option configuration (global), 80

IP multicast PIM hello message option configuration (on interface), 81

IP multicast PIM hello message options, 80

IP multicast PIM hello policy, 79

holdtime

IP multicast PIM hello message option, 80

host

IGMP snooping simulated member host, 21

IP multicast IGMPv3 host control capability, 52

I

ID

IP multicast PIM hello message Generation ID option, 80

IGMP

basic configuration, 54

configuration, 50, 54, 57

displaying, 56

enable, 54

fast leave processing, 56

IP multicast IGMPv1. See IGMPv1

IP multicast IGMPv2. See IGMPv2

IP multicast IGMPv3. See IGMPv3

IP multicast PIM-SSM IGMPv3 relationship, 68

maintaining, 56

multicast group policy configuration, 55

performance adjustment, 56

protocols and standards, 53

snooping. See IGMP snooping

static member interface configuration, 55

troubleshooting, 59

troubleshooting inconsistent membership information, 59

troubleshooting no membership information on router, 59

version specification, 54

versions, 50

VPN support, 53

IGMP snooping

aging timer for dynamic port, 13

basic concepts, 12

basic configuration, 16

configuration, 12, 16, 29

displaying, 28

drop unknown multicast data enable, 26

dynamic port aging timer, 19

enable, 16

fast leave processing enable, 21

forwarding max number entries, 18

general query, 14

general query/response parameter configuration, 22

group policy configuration, 29

how it works, 14

last member query interval, 18

leave message, 14

maintaining, 28

membership report, 14

message parameter configuration, 23

message source IP address, 24

multicast group policy configuration, 25

multicast group replacement, 27

multicast groups max number on port, 27

multicast source port filtering, 25

policy configuration, 25

port feature configuration, 19

protocols and standards, 15

querier configuration, 22, 33

querier enable, 22

related ports, 12

report suppression, 26

simulated member host configuration, 21

static port configuration, 20, 31

troubleshooting, 36

troubleshooting Layer 2 multicast forwarding, 36

troubleshooting multicast group policy, 36

version specification, 17

IGMPv1

implementation, 50

IP multicast IGMP versions, 50

version specification, 54

IGMPv1 snooping version specification, 17

IGMPv2

features, 51

IP multicast IGMP versions, 50

leave group mechanism, 52

querier election, 51

version specification, 54

IGMPv2 snooping version specification, 17

IGMPv3

features, 52

host control capability, 52

IP multicast IGMP versions, 50

query capability, 53

report capability, 53

version specification, 54

IGMPv3 snooping version specification, 17

inconsistent membership information (IGMP), 59

Internet

Group Management Protocol. See IGMP

IP

multicast. See IP multicast

IP addressing

IGMP snooping message source IP address, 24

IP multicast address, 5, 5

IP multicast packet forwarding, 11

IP multicast

address, 5

architecture, 5

ASM model, 5

broadcast transmission technique, 2

common notation, 4

data distribution, 4

displaying IGMP snooping, 28

displaying PIM, 84

Ethernet multicast MAC address, 8

features, 3

forwarding. See multicast forwarding

IGMP basic configuration, 54

IGMP configuration, 50, 54, 57

IGMP multicast group policy, 55

IGMP performance adjustment, 56

IGMP snooping basic configuration, 16

IGMP snooping configuration, 12, 16, 29

IGMP snooping drop unknown multicast data, 26

IGMP snooping dynamic port aging timer, 19

IGMP snooping fast leave processing, 21

IGMP snooping general query/response parameters, 22

IGMP snooping group policy, 25

IGMP snooping group policy configuration, 29

IGMP snooping last member query interval, 18

IGMP snooping max number groups on port, 27

IGMP snooping message parameters, 23

IGMP snooping message source IP address, 24

IGMP snooping multicast group replacement, 27

IGMP snooping policy configuration, 25

IGMP snooping port feature configuration, 19

IGMP snooping querier, 22

IGMP snooping querier configuration, 33

IGMP snooping report suppression, 26

IGMP snooping source port filtering, 25

IGMP snooping static port, 20

IGMP snooping static port configuration, 31

IGMP static member interface configuration, 55

IGMP version specification, 54

IGMP versions, 50

IGMP VPN support, 53

IGMPv1, 50

IGMPv2, 51

IGMPv3, 52

IP multicast address, 5

Layer 2 protocols and standards, 10

Layer 3 protocols and standards, 9

maintaining IGMP snooping, 28

models, 4

MSDP Anycast RP, 103

MSDP Anycast RP configuration, 116

MSDP basics configuration, 105

MSDP configuration, 101, 105, 111

MSDP display, 111

MSDP inter-domain multicast delivery, 102

MSDP maintain, 111

MSDP mesh group, 107

MSDP peer, 101

MSDP peer description, 106

MSDP peering connection, 106

MSDP peering connection control, 107

MSDP protocols and standards, 105

MSDP RPF static peer, 106

MSDP SA message cache, 110

MSDP SA message content, 108

MSDP SA message filtering configuration, 120

MSDP SA message policy, 109

MSDP SA message-related parameters, 108

MSDP SA request message, 109

MSDP VPN support, 104

notation rendezvous point tree (RPT), 4

notation shortest path tree (SPT), 4

overview, 1

packet forwarding, 11

PIM configuration, 60

PIM-DM enable, 70

PIM-DM graft retry timer, 72

PIM-DM state-refresh configuration, 71

PIM-DM state-refresh feature, 71

PIM-SM enable, 73

PIM-SM RP configuration, 73

PIM-SSM configuration, 95

protocols and standards, 8

protocols and standards (IGMP snooping), 15

protocols and standards (IGMP), 53

routing. See multicast routing

routing enable, 41

RPF check process, 37

SFM model, 5

SSM model, 5

transmission technique, 2

transmission techniques, 1

troubleshooting IGMP, 59

troubleshooting IGMP inconsistent membership information, 59

troubleshooting IGMP no membership information on router, 59

troubleshooting IGMP snooping, 36

troubleshooting IGMP snooping Layer 2 multicast forwarding, 36

troubleshooting IGMP snooping multicast group policy, 36

troubleshooting MSDP, 123

troubleshooting multicast forwarding, 48

troubleshooting multicast routing, 48

troubleshooting PIM, 98

troubleshooting static route failure, 48

unicast transmission technique, 1

IPv4

Ethernet multicast MAC address, 8

IP multicast address, 5

IP multicast IGMP snooping multicast source port filtering, 25

IPv6

Ethernet multicast MAC address, 8

IP multicast address, 5

IP multicast PIM-DM graft retry timer, 72

IP multicast PIM-SM DR election, 63

PIM-DM configuration, 70

J

join/prune message

IP multicast PIM, 82

L

Layer 2

IP multicast protocols and standards, 10

multicast routing MAC address static entry, 42

Layer 3

IP multicast PIM configuration, 60, 84

IP multicast PIM-DM configuration, 84

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM configuration, 95

IP multicast protocols and standards, 9

IP multicast routing enable, 41

leave message

IP multicast IGMP snooping, 14

leaving

IP multicast IGMPv2 leave group mechanism, 52

load splitting

multicast routing configuration, 42

M

MAC address

multicast routing static multicast MAC address entry, 42

maintaining

IP multicast IGMP, 56

IP multicast IGMP snooping, 28

MSDP, 111

multicast forwarding, 43

multicast routing, 43

member

IP multicast IGMP snooping member port, 12

membership report

IP multicast IGMP snooping, 14

mesh

MSDP mesh group, 107

message

IP multicast IGMP snooping leave, 14

IP multicast PIM hello message options, 80

IP multicast PIM hello policy, 79

IP multicast PIM join/prune message size, 82

MSDP SA message, 110

MSDP SA message content, 108

MSDP SA message filtering configuration, 120

MSDP SA message policy, 109

MSDP SA message-related parameters, 108

MSDP SA request message, 109

mode

PIM passive mode enable, 83

model

IP multicast, 4

IP multicast ASM, 5

IP multicast SFM, 5

IP multicast SSM, 5

MSDP

Anycast RP, 103

Anycast RP configuration, 116

basics configuration, 105

configuration, 101, 105, 111

display, 111

enable, 105

how it works, 101

inter-domain multicast delivery, 102

maintain, 111

mesh group, 107

peer, 101

peer description, 106

peering connection, 106

peering connection control, 107

PIM-SM inter-domain multicast configuration, 111

protocols and standards, 105

RPF static peer, 106

SA message cache, 110

SA message content, 108

SA message filtering configuration, 120

SA message policy, 109

SA message-related parameters, 108

SA request message, 109

troubleshooting, 123

troubleshooting peers stay in disabled state, 124

troubleshooting RP entry exchange, 124

troubleshooting SA message cache, 124

VPN support, 104

multicast

PIM common features configuration, 78

PIM common timer configuration, 81

PIM configuration, 84

PIM hello message options, 80

PIM hello policy, 79

PIM join/prune message size, 82

PIM multicast source policy, 79

PIM VPN support, 70

PIM-DM, 60

PIM-DM assert, 62

PIM-DM configuration, 70, 84

PIM-DM graft, 61

PIM-DM neighbor discovery, 60

PIM-DM SPT building, 60

PIM-SM, 62

PIM-SM administrative scoping, 67

PIM-SM admin-scoped zone configuration, 90

PIM-SM assert, 67

PIM-SM configuration, 72

PIM-SM DR election, 63

PIM-SM multicast source registration, 76

PIM-SM neighbor discovery, 63

PIM-SM non-scoped zone configuration, 87

PIM-SM RP discovery, 64

PIM-SM RPT building, 65

PIM-SM source registration, 65

PIM-SM SPT switchover, 66

PIM-SM SPT switchover configuration, 77

PIM-SSM, 68

PIM-SSM configuration, 77, 95

PIM-SSM DR election, 69

PIM-SSM group range configuration, 78

PIM-SSM neighbor discovery, 69

PIM-SSM SPT building, 69

troubleshooting PIM abnormal multicast data termination, 99

troubleshooting PIM multicast distribution tree, 98

troubleshooting PIM-SM multicast source registration failure, 99

multicast forwarding

boundary configuration, 42

configuration, 37, 40, 41, 44

displaying, 43

forwarding table, 37

MAC address static entry configuration, 42

multicast routing

configuration, 37, 40, 41, 44

displaying, 43

forwarding boundary configuration, 42

IP multicast routing enable, 41

load splitting configuration, 42

longest prefix match principle, 41

MAC address static entry configuration, 42

multicast static route, 39

protocol-specific routing tables, 37

RPF check implementation, 38

RPF check mechanism, 37

RPF route change, 39, 44

RPF route creation, 39, 46

static multicast routing table, 37

static route, 39

static route configuration, 41

N

neighbor discovery

IP multicast PIM-DM, 60

IP multicast PIM-SM, 63

IP multicast PIM-SSM, 69

network

Ethernet multicast MAC address, 8

IP multicast address, 5

IP multicast architecture, 5

IP multicast IGMP fast leave processing, 56

IP multicast IGMP multicast group policy, 55

IP multicast IGMP snooping multicast source port filtering, 25

IP multicast IGMP static member interface configuration, 55

IP multicast IGMP version specification, 54

IP multicast packet forwarding, 11

IP multicast PIM common features configuration, 78

IP multicast PIM common timer configuration, 81

IP multicast PIM domain border configuration, 75

IP multicast PIM hello message options, 80

IP multicast PIM hello policy, 79

IP multicast PIM join/prune message size, 82

IP multicast PIM multicast source policy, 79

IP multicast PIM/BFD enable, 83

IP multicast PIM-DM assert, 62

IP multicast PIM-DM graft, 61

IP multicast PIM-DM graft retry timer, 72

IP multicast PIM-DM neighbor discovery, 60

IP multicast PIM-DM SPT building, 60

IP multicast PIM-DM state-refresh feature, 71

IP multicast PIM-DM state-refresh parameters, 71

IP multicast PIM-SM administrative scoping, 67

IP multicast PIM-SM assert, 67

IP multicast PIM-SM BSM semantic fragmentation, 76

IP multicast PIM-SM BSR configuration, 75

IP multicast PIM-SM C-BSR configuration, 75

IP multicast PIM-SM C-RP configuration, 74

IP multicast PIM-SM DR election, 63

IP multicast PIM-SM multicast source registration, 65, 76

IP multicast PIM-SM neighbor discovery, 63

IP multicast PIM-SM RP configuration, 73

IP multicast PIM-SM RP discovery, 64

IP multicast PIM-SM RPT building, 65

IP multicast PIM-SM SPT switchover, 66

IP multicast PIM-SM SPT switchover configuration, 77

IP multicast PIM-SM static RP configuration, 73

IP multicast PIM-SM zone relationships, 67

IP multicast PIM-SSM DR election, 69

IP multicast PIM-SSM group range configuration, 78

IP multicast PIM-SSM neighbor discovery, 69

IP multicast PIM-SSM SPT building, 69

IP multicast routing enable, 41

MSDP basics configuration, 105

MSDP mesh group, 107

MSDP peer description, 106

MSDP peering connection, 106

MSDP peering connection control, 107

MSDP RPF static peer, 106

MSDP SA message-related parameters, 108

multicast static route, 39

PIM passive mode enable, 83

PIM-SM Auto-RP, 74

network management

IGMP snooping querier configuration, 33

IP multicast IGMP basic configuration, 54

IP multicast IGMP configuration, 54, 57

IP multicast IGMP performance adjustment, 56

IP multicast IGMP snooping basic configuration, 16

IP multicast IGMP snooping configuration, 12, 16, 29

IP multicast IGMP snooping group policy configuration, 29

IP multicast IGMP snooping static port configuration, 31

IP multicast overview, 1

IP multicast PIM configuration, 60, 84

IP multicast PIM VPN support, 70

IP multicast PIM-DM, 60

IP multicast PIM-DM configuration, 70, 84

IP multicast PIM-SM, 62

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM configuration, 72

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM, 68

IP multicast PIM-SSM configuration, 77, 95

IP multicast IGMP configuration, 50

MSDP Anycast RP configuration, 116

MSDP configuration, 101, 105, 111

MSDP PIM-SM inter-domain multicast configuration, 111

MSDP SA message filtering configuration, 120

multicast forwarding configuration, 37, 40, 41, 44

multicast routing configuration, 37, 40, 41, 44

multicast routing RPF route change, 44

multicast routing RPF route creation, 46

transmission techniques, 1

O

option

IP multicast PIM hello message DR_Priority, 80

IP multicast PIM hello message Generation ID option, 80

IP multicast PIM hello message holdtime option, 80

IP multicast PIM hello message LAN_Prune_Delay option, 80

IP multicast PIM hello message options, 80

P

packet

IP multicast forwarding, 11

multicast RPF check, 38

parameter

IGMP snooping general query/response parameters, 22

IGMP snooping message parameters, 23

IP multicast PIM-DM state-refresh configuration, 71

MSDP SA message-related parameters, 108

peer

MSDP Anycast RP, 103

MSDP inter-domain multicast delivery, 102

MSDP intermediate, 101

MSDP peer description, 106

MSDP peering connection, 106

MSDP peering connection control, 107

MSDP receiver-side, 101

MSDP RPF static peer, 106

MSDP source-side, 101

troubleshooting MSDP peer stays in disabled state, 124

PIM

BFD enable, 83

common feature configuration, 78

common timer configuration, 81

common timer configuration (global), 82

common timer configuration (on interface), 82

configuration, 60, 84

displaying, 84

DM. See PIM-DM

hello message option configuration (global), 80

hello message option configuration (on interface), 81

hello message options, 80

hello policy configuration, 79

join/prune message size, 82

multicast source policy, 79

passive mode enable, 83

protocols and standards, 70

SM. See PIM-SM

SSM. See PIM-SSM

troubleshooting, 98

troubleshooting abnormal multicast data termination, 99

troubleshooting multicast distribution tree, 98

VPN support, 70

PIM-DM

assert, 62

configuration, 70, 84

enable, 70

graft, 61

graft retry timer configuration, 72

introduction, 60

IP multicast PIM configuration, 60

IP multicast PIM/BFD enable, 83

neighbor discovery, 60

PIM passive mode enable, 83

protocols and standards, 70

SPT building, 60

state-refresh feature enable, 71

state-refresh parameter configuration, 71

PIM-SM

administrative scoping, 67

administrative scoping zones, 67

admin-scoped zone configuration, 90

assert, 67

Auto-RP listening configuration, 74

BSM semantic fragmentation, 76

BSR configuration, 75

C-BSR configuration, 75

configuration, 72

C-RP configuration, 74

DR election, 63

enable, 73

introduction, 62

IP multicast PIM configuration, 60

IP multicast PIM domain border configuration, 75

IP multicast PIM/BFD enable, 83

MSDP Anycast RP, 103

MSDP Anycast RP configuration, 116

MSDP basics configuration, 105

MSDP configuration, 101, 105, 111

MSDP enable, 105

MSDP inter-domain multicast configuration, 111

MSDP inter-domain multicast delivery, 102

MSDP mesh group, 107

MSDP peer, 101

MSDP peer description, 106

MSDP peering connection, 106

MSDP peering connection control, 107

MSDP RPF static peer, 106

MSDP SA message cache, 110

MSDP SA message content, 108

MSDP SA message filtering configuration, 120

MSDP SA message policy, 109

MSDP SA message-related parameters, 108

MSDP SA request message, 109

multicast source registration, 65, 76

neighbor discovery, 63

non-scoped zone configuration, 87

PIM passive mode enable, 83

protocols and standards, 70

RP configuration, 73

RP discovery, 64

RPT building, 65

SPT switchover, 66

SPT switchover configuration, 77

static RP configuration, 73

troubleshooting multicast source registration failure, 99

troubleshooting RP cannot be built, 99

troubleshooting RP cannot join SPT, 99

zone relationships, 67

PIM-SSM

configuration, 77, 95

DR election, 69

enable, 77

group range configuration, 78

IP multicast PIM configuration, 60

model implementation, 68

neighbor discovery, 69

protocols and standards, 70

SPT building, 69

policy

IP multicast IGMP multicast group policy, 55

IP multicast IGMP snooping configuration, 25

IP multicast IGMP snooping group policy configuration, 29

IP multicast IGMP snooping multicast group policy, 25

MSDP SA message policy, 109

port

IGMP snooping simulated member host, 21

IP multicast IGMP snooping aging timer for dynamic port, 13

IP multicast IGMP snooping basic configuration, 16

IP multicast IGMP snooping configuration, 12, 16, 29

IP multicast IGMP snooping dynamic port aging timer, 19

IP multicast IGMP snooping fast leave processing, 21

IP multicast IGMP snooping group policy configuration, 29

IP multicast IGMP snooping max number multicast groups on port, 27

IP multicast IGMP snooping member port, 12

IP multicast IGMP snooping multicast group replacement, 27

IP multicast IGMP snooping multicast source port filtering, 25

IP multicast IGMP snooping port feature configuration, 19

IP multicast IGMP snooping related ports, 12

IP multicast IGMP snooping router port, 12

IP multicast IGMP snooping static port configuration, 20, 31

principle

longest prefix match principle, 41

procedure

changing multicast routing RPF route, 44

configuring IGMP, 54

configuring IGMP snooping general query/response parameters, 22

configuring IGMP snooping general query/response parameters (global), 23

configuring IGMP snooping general query/response parameters (VLAN), 23

configuring IGMP snooping message parameters, 23

configuring IGMP snooping message source IP address, 24

configuring IGMP snooping querier, 22, 33

configuring IGMP snooping simulated member host, 21

configuring IP multicast IGMP, 57

configuring IP multicast IGMP basic features, 54

configuring IP multicast IGMP multicast group policy, 55

configuring IP multicast IGMP performance adjustment, 56

configuring IP multicast IGMP snooping, 16, 29

configuring IP multicast IGMP snooping basic features, 16

configuring IP multicast IGMP snooping fast-leave processing, 21

configuring IP multicast IGMP snooping group policy, 29

configuring IP multicast IGMP snooping max number multicast groups on port, 27

configuring IP multicast IGMP snooping multicast group policy, 25

configuring IP multicast IGMP snooping multicast group policy globally, 25

configuring IP multicast IGMP snooping multicast group policy on port, 25

configuring IP multicast IGMP snooping multicast group replacement (port), 27

configuring IP multicast IGMP snooping multicast group replacement globally, 27

configuring IP multicast IGMP snooping multicast source port filtering, 25

configuring IP multicast IGMP snooping multicast source port filtering globally, 26

configuring IP multicast IGMP snooping multicast source port filtering on port, 26

configuring IP multicast IGMP snooping policy, 25

configuring IP multicast IGMP snooping port feature, 19

configuring IP multicast IGMP snooping static port, 20, 31

configuring IP multicast IGMP static member interface, 55

configuring IP multicast PIM common features, 78

configuring IP multicast PIM common timer globally, 82

configuring IP multicast PIM common timer on interface, 82

configuring IP multicast PIM common timers, 81

configuring IP multicast PIM domain border, 75

configuring IP multicast PIM hello message option globally, 80

configuring IP multicast PIM hello message option on interface, 81

configuring IP multicast PIM hello message options, 80

configuring IP multicast PIM hello policy, 79

configuring IP multicast PIM multicast source policy, 79

configuring IP multicast PIM-DM, 70, 84

configuring IP multicast PIM-SM, 72

configuring IP multicast PIM-SM admin-scoped zone, 90

configuring IP multicast PIM-SM BSR, 75

configuring IP multicast PIM-SM C-BSR, 75

configuring IP multicast PIM-SM C-RP, 74

configuring IP multicast PIM-SM multicast source registration, 76

configuring IP multicast PIM-SM non-scoped zone, 87

configuring IP multicast PIM-SM RP, 73

configuring IP multicast PIM-SM SPT switchover, 77

configuring IP multicast PIM-SM static RP, 73

configuring IP multicast PIM-SSM, 77, 95

configuring IP multicast PIM-SSM group range, 78

configuring MSDP, 105, 111

configuring MSDP Anycast RP, 116

configuring MSDP basics, 105

configuring MSDP mesh group, 107

configuring MSDP peer description, 106

configuring MSDP peering connection, 106

configuring MSDP PIM-SM inter-domain multicast, 111

configuring MSDP RPF static peer, 106

configuring MSDP SA message cache, 110

configuring MSDP SA message content, 108

configuring MSDP SA message filtering, 120

configuring MSDP SA message policy, 109

configuring MSDP SA message-related parameters, 108

configuring MSDP SA request message, 109

configuring multicast forwarding, 40, 41, 44

configuring multicast forwarding boundary, 42

configuring multicast routing, 40, 41, 44

configuring multicast routing load splitting, 42

configuring multicast routing MAC address static entry, 42

configuring multicast static route, 41

configuring PIM, 84

configuring PIM-DM graft retry timer, 72

configuring PIM-DM state-refresh parameter, 71

controlling MSDP peering connection, 107

creating MSDP peering connection, 106

creating multicast routing RPF route, 46

disabling IP multicast PIM-SM BSM semantic fragmentation, 76

displaying IP multicast IGMP, 56

displaying IP multicast IGMP snooping, 28

displaying IP multicast PIM, 84

displaying MSDP, 111

displaying multicast forwarding, 43

displaying multicast routing, 43

enabling IGMP snooping, 16

enabling IGMP snooping (IGMP-snooping view), 17

enabling IGMP snooping (VLAN view), 17

enabling IGMP snooping querier, 22

enabling IP multicast IGMP, 54

enabling IP multicast IGMP fast leave processing, 56

enabling IP multicast IGMP snooping drop unknown multicast data, 26

enabling IP multicast IGMP snooping fast-leave processing globally, 21

enabling IP multicast IGMP snooping fast-leave processing on port, 21

enabling IP multicast IGMP snooping multicast group replacement, 27

enabling IP multicast IGMP snooping report suppression, 26

enabling IP multicast PIM-DM, 70

enabling IP multicast PIM-SSM, 77

enabling IP multicast routing, 41

enabling MSDP, 105

enabling PIM passive mode, 83

enabling PIM/BFD, 83

enabling PIM-DM state-refresh feature, 71

enabling PIM-SM, 73

enabling PIM-SM Auto-RP listening, 74

maintaining IP multicast IGMP, 56

maintaining IP multicast IGMP snooping, 28

maintaining MSDP, 111

maintaining multicast forwarding, 43

maintaining multicast routing, 43

setting IGMP last member query interval (global), 19

setting IGMP last member query interval (VLAN), 19

setting IGMP snooping last member query interval, 18

setting IP multicast IGMP snooping dynamic port aging timer, 19

setting IP multicast IGMP snooping dynamic port aging timer globally, 19

setting IP multicast IGMP snooping dynamic port aging timer in VLAN, 20

setting IP multicast IGMP snooping max number forwarding entries, 18

setting PIM join/prune message max size, 82

specifying IGMP snooping version (IGMP-snooping view), 17

specifying IGMP snooping version (VLAN view), 18

specifying IP multicast IGMP snooping version, 17

specifying IP multicast IGMP version, 54

specifying longest prefix match principle, 41

troubleshooting IP multicast IGMP snooping Layer 2 forwarding, 36

troubleshooting IP multicast IGMP snooping multicast group policy, 36

troubleshooting IP multicast PIM RP cannot join SPT, 99

troubleshooting IP multicast PIM-SM multicast source registration failure, 99

troubleshooting MSDP peers stay in disabled state, 124

troubleshooting MSDP RP entry exchange, 124

troubleshooting MSDP SA message cache, 124

troubleshooting multicast forwarding, 48

troubleshooting multicast routing, 48

troubleshooting multicast static route failure, 48

troubleshooting PIM abnormal multicast data termination, 99

troubleshooting PIM multicast distribution tree, 98

Protocol Independent Multicast. See PIM

protocols and standards

IP multicast, 8

IP multicast IGMP, 53

IP multicast IGMP snooping, 15

IP multicast PIM, 70

Layer 2 multicast, 10

Layer 3 multicast, 9

MSDP, 105

pruning

IP multicast PIM hello message LAN_Prune_Delay option, 80

IP multicast PIM join/prune message size, 82

IP multicast PIM-DM SPT building, 60

Q

querier

IGMP snooping querier, 22

IGMP snooping querier configuration, 33

querying

IGMP snooping general query/response parameters, 22

IGMP snooping message parameters, 23

IGMP snooping querier, 22

IGMP snooping querier configuration, 33

IP multicast IGMP snooping general query, 14

IP multicast IGMPv2 querier election, 51

IP multicast IGMPv3 query capability, 53

R

refresh

IP multicast PIM-DM state-refresh configuration, 71

IP multicast PIM-DM state-refresh feature, 71

rendezvous point tree. See RPT

reporting

IP multicast IGMP snooping membership, 14

IP multicast IGMP snooping report suppression, 26

IP multicast IGMPv3 report capability, 53

reverse path forwarding. See RPF

router

IP multicast IGMP snooping router port, 12

routing

Ethernet multicast MAC address, 8

IGMP snooping general query/response parameters, 22

IGMP snooping last member query interval, 18

IGMP snooping message parameters, 23

IGMP snooping message source IP address, 24

IGMP snooping querier configuration, 33

IGMP snooping simulated member host, 21

IP multicast address, 5

IP multicast IGMP basic configuration, 54

IP multicast IGMP configuration, 50, 54, 57

IP multicast IGMP fast leave processing, 56

IP multicast IGMP performance adjustment, 56

IP multicast IGMP snooping basic configuration, 16

IP multicast IGMP snooping configuration, 12, 16, 29

IP multicast IGMP snooping dynamic port aging timer, 19

IP multicast IGMP snooping fast leave processing, 21

IP multicast IGMP snooping group policy configuration, 29

IP multicast IGMP snooping max number forwarding entries, 18

IP multicast IGMP snooping multicast group policy, 25

IP multicast IGMP snooping multicast source port filtering, 25

IP multicast IGMP snooping policy configuration, 25

IP multicast IGMP snooping port feature configuration, 19

IP multicast IGMP snooping static port configuration, 20, 31

IP multicast IGMP snooping version specification, 17

IP multicast IGMP version specification, 54

IP multicast overview, 1

IP multicast packet forwarding, 11

IP multicast PIM common features configuration, 78

IP multicast PIM common timer configuration, 81

IP multicast PIM configuration, 60, 84

IP multicast PIM domain border configuration, 75

IP multicast PIM hello message options, 80

IP multicast PIM hello policy, 79

IP multicast PIM join/prune message size, 82

IP multicast PIM multicast source policy, 79

IP multicast PIM VPN support, 70

IP multicast PIM-DM, 60

IP multicast PIM-DM assert, 62

IP multicast PIM-DM configuration, 70, 84

IP multicast PIM-DM graft, 61

IP multicast PIM-DM neighbor discovery, 60

IP multicast PIM-DM SPT building, 60

IP multicast PIM-SM, 62

IP multicast PIM-SM administrative scoping, 67

IP multicast PIM-SM administrative zones, 67

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM assert, 67

IP multicast PIM-SM BSM semantic fragmentation, 76

IP multicast PIM-SM BSR configuration, 75

IP multicast PIM-SM C-BSR configuration, 75

IP multicast PIM-SM configuration, 72

IP multicast PIM-SM C-RP configuration, 74

IP multicast PIM-SM DR election, 63

IP multicast PIM-SM multicast source registration, 65, 76

IP multicast PIM-SM neighbor discovery, 63

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SM RP configuration, 73

IP multicast PIM-SM RP discovery, 64

IP multicast PIM-SM RPT building, 65

IP multicast PIM-SM SPT switchover, 66

IP multicast PIM-SM SPT switchover configuration, 77

IP multicast PIM-SM static RP configuration, 73

IP multicast PIM-SM zone relationships, 67

IP multicast PIM-SSM, 68

IP multicast PIM-SSM configuration, 77, 95

IP multicast PIM-SSM DR election, 69

IP multicast PIM-SSM group range configuration, 78

IP multicast PIM-SSM neighbor discovery, 69

IP multicast PIM-SSM SPT building, 69

MSDP Anycast RP, 103

MSDP Anycast RP configuration, 116

MSDP basics configuration, 105

MSDP configuration, 101, 105, 111

MSDP inter-domain multicast delivery, 102

MSDP mesh group, 107

MSDP peer, 101

MSDP peer description, 106

MSDP peering connection, 106

MSDP peering connection control, 107

MSDP PIM-SM inter-domain multicast configuration, 111

MSDP RPF static peer, 106

MSDP SA message filtering configuration, 120

MSDP SA message-related parameters, 108

multicast routing. See multicast routing

PIM-SM Auto-RP listening, 74

transmission techniques, 1

routing table

multicast protocol-specific tables, 37

multicast RPF check mechanism, 37

static multicast, 37

unicast, 37

RP

IP multicast PIM-SM configuration, 73

IP multicast PIM-SM C-RP configuration, 74

IP multicast PIM-SM discovery, 64

IP multicast PIM-SM RPT building, 65

IP multicast PIM-SM SPT switchover configuration, 77

IP multicast PIM-SM static RP configuration, 73

MSDP Anycast RP, 103

MSDP Anycast RP configuration, 116

MSDP RPF static peer, 106

PIM-SM Auto-RP listening, 74

troubleshooting IP multicast PIM-SM RPT cannot be built, 99

troubleshooting PIM-SM RP cannot join SPT, 99

RPF

longest prefix match principle, 41

multicast check mechanism, 37

multicast check process, 37

multicast routing RPF route change, 44

multicast routing RPF route creation, 46

multicast RPF check implementation, 38

multicast RPF route change, 39

multicast RPF route creation, 39

multicast static route, 39

multicast static route configuration, 41

RPF route

longest prefix match principle, 41

RPT

IP multicast notation, 4

IP multicast PIM-SM multicast source registration, 65

IP multicast PIM-SM RPT building, 65

rule

MSDP SA message policy, 109

S

SA

MSDP SA message, 110

MSDP SA message content, 108

MSDP SA message filtering configuration, 120

MSDP SA message policy, 109

MSDP SA message-related parameters, 108

MSDP SA request message, 109

semantic fragmentation

IP multicast PIM-SM BSM, 76

setting

IGMP last member query interval, 18

IGMP last member query interval (global), 19

IGMP last member query interval (VLAN), 19

IP multicast IGMP snooping dynamic port aging timer, 19

IP multicast IGMP snooping dynamic port aging timer globally, 19

IP multicast IGMP snooping dynamic port aging timer in VLAN, 20

IP multicast IGMP snooping max number forwarding entries, 18

IP multicast PIM join/prune message max size, 82

SFM

IP multicast model, 5

shortest path tree. See SPT

snooping

IGMP. See IGMP snooping

source registration

IP multicast PIM-SM, 65

IP multicast PIM-SM multicast, 76

troubleshooting IP multicast PIM-SM multicast source registration failure, 99

specifying

IGMP snooping version (IGMP-snooping view), 17

IGMP snooping version (VLAN view), 18

IP multicast IGMP snooping version, 17

IP multicast IGMP version, 54

SPT

IP multicast notation, 4

IP multicast PIM-DM SPT building, 60

IP multicast PIM-SM multicast source registration, 65

IP multicast PIM-SM SPT switchover configuration, 77

IP multicast PIM-SM switchover, 66

IP multicast PIM-SSM building, 69

troubleshooting PIM-SM RP cannot join SPT, 99

SSM

IP multicast model, 5

state

IP multicast PIM-DM state-refresh feature, 71

IP multicast PIM-DM state-refresh parameters, 71

static

IP multicast IGMP snooping static port, 20

IP multicast IGMP snooping static port configuration, 31

IP multicast IGMP static member interface, 55

IP multicast PIM-SM static RP configuration, 73

MSDP RPF static peer, 106

static route

multicast routing configuration, 41

multicast routing table, 37

RPF route change, 39

RPF route creation, 39

suppressing

IP multicast IGMP snooping report suppression, 26

switch

IP multicast IGMP configuration, 57

IP multicast PIM configuration, 84

IP multicast PIM-DM configuration, 84

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM configuration, 95

multicast forwarding configuration, 44

multicast routing configuration, 44

multicast routing RPF route change, 44

multicast routing RPF route creation, 46

switching

IP multicast overview, 1

MSDP PIM-SM inter-domain multicast configuration, 111

multicast forwarding configuration, 37, 40, 41, 44

multicast routing configuration, 37, 40, 41, 44

multicast routing RPF route change, 44

multicast routing RPF route creation, 46

transmission techniques, 1

switchover

IP multicast PIM-SM switchover to SPT, 77

T

table

multicast forwarding table, 37

multicast protocol-specific routing tables, 37

TCP/IP

IP multicast IGMP basic configuration, 54

IP multicast IGMP configuration, 50, 54, 57

MSDP peering connection control, 107

timer

IP multicast IGMP snooping dynamic port aging timer, 13, 19

IP multicast PIM common timers, 81

IP multicast PIM-DM graft retry timer, 72

topology

multicast RPF route change, 39

multicast RPF route creation, 39

traffic

multicast routing load splitting, 42

transmitting

broadcast, 2

IP multicast, 2

IP multicast overview, 1

techniques, 1

unicast, 1

troubleshooting

IP multicast IGMP, 59

IP multicast IGMP inconsistent membership information, 59

IP multicast IGMP no membership information on router, 59

IP multicast IGMP snooping, 36

IP multicast IGMP snooping Layer 2 multicast forwarding, 36

IP multicast IGMP snooping multicast group policy, 36

IP multicast PIM, 98

IP multicast PIM abnormal multicast data termination, 99

IP multicast PIM multicast distribution tree, 98

IP multicast PIM-SM multicast source registration failure, 99

IP multicast PIM-SM RPT cannot be built, 99

IP multicast PIM-SM RP cannot join SPT, 99

MSDP, 123

MSDP peers stay in disabled state, 124

MSDP RP entry exchange, 124

MSDP SA message cache, 124

multicast forwarding, 48

multicast routing, 48

multicast static route failure, 48

U

unicast

routing table, 37

transmission technique, 1

V

verifying

multicast RPF check mechanism, 37

multicast RPF check process, 37

version

IP multicast IGMP snooping specification, 17

IP multicast IGMP specification, 54

IP multicast IGMPv1, 50

IP multicast IGMPv1 snooping, 17

IP multicast IGMPv2, 50, 51

IP multicast IGMPv2 snooping, 17

IP multicast IGMPv3, 50, 52

IP multicast IGMPv3 snooping, 17

VLAN

IGMP snooping general query/response parameters, 22

IGMP snooping message parameters, 23

IGMP snooping message source IP address, 24

IGMP snooping querier, 22

IGMP snooping querier configuration, 33

IGMP snooping querier enable, 22

IGMP snooping simulated member host, 21

IP multicast IGMP snooping basic configuration, 16

IP multicast IGMP snooping configuration, 12, 16, 29

IP multicast IGMP snooping drop unknown multicast data, 26

IP multicast IGMP snooping dynamic port aging timer, 19, 20

IP multicast IGMP snooping fast leave processing, 21

IP multicast IGMP snooping group policy configuration, 29

IP multicast IGMP snooping max number multicast groups on port, 27

IP multicast IGMP snooping multicast group replacement, 27

IP multicast IGMP snooping policy configuration, 25

IP multicast IGMP snooping port feature configuration, 19

IP multicast IGMP snooping static port configuration, 20, 31

IP multicast PIM configuration, 60, 84

IP multicast PIM-DM configuration, 84

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM non-scoped zone configuration, 87

IP multicast PIM-SSM configuration, 95

VPN

IP multicast IGMP support, 53

IP multicast PIM support, 70

MSDP support, 104

Z

ZBR

IP multicast PIM-SM administrative scoping, 67

zone

border router. See ZBR

IP multicast PIM-SM admin-scoped zone configuration, 90

IP multicast PIM-SM admin-scoped/global-scoped zone relationship, 67

IP multicast PIM-SM non-scoped zone configuration, 87

 

 
