H3C S5500-EI Series Switches Operation Manual-Release 2102(V1.01)

15-Multicast Configuration

Table of Contents

Chapter 1 Multicast Overview
1.1 Introduction to Multicast
1.1.1 Comparison of Information Transmission Techniques
1.1.2 Roles in Multicast
1.1.3 Advantages and Applications of Multicast
1.2 Multicast Models
1.3 Multicast Architecture
1.3.1 Multicast Addresses
1.3.2 Multicast Protocols
1.4 Multicast Packet Forwarding Mechanism

Chapter 2 IGMP Snooping Configuration
2.1 IGMP Snooping Overview
2.1.1 Principle of IGMP Snooping
2.1.2 Basic Concepts in IGMP Snooping
2.1.3 Work Mechanism of IGMP Snooping
2.1.4 Processing of Multicast Protocol Messages
2.1.5 Protocols and Standards
2.2 IGMP Snooping Configuration Task List
2.3 Configuring Basic Functions of IGMP Snooping
2.3.1 Configuration Prerequisites
2.3.2 Enabling IGMP Snooping
2.3.3 Configuring the Version of IGMP Snooping
2.4 Configuring IGMP Snooping Port Functions
2.4.1 Configuration Prerequisites
2.4.2 Configuring Aging Timers for Dynamic Ports
2.4.3 Configuring Static Ports
2.4.4 Configuring Simulated Joining
2.4.5 Configuring Fast Leave Processing
2.5 Configuring IGMP Snooping Querier
2.5.1 Configuration Prerequisites
2.5.2 Enabling IGMP Snooping Querier
2.5.3 Configuring IGMP Queries and Responses
2.5.4 Configuring Source IP Address of IGMP Queries
2.6 Configuring an IGMP Snooping Policy
2.6.1 Configuration Prerequisites
2.6.2 Configuring a Multicast Group Filter
2.6.3 Configuring Multicast Source Port Filtering
2.6.4 Configuring the Function of Dropping Unknown Multicast Data
2.6.5 Configuring IGMP Report Suppression
2.6.6 Configuring Maximum Multicast Groups that Can Be Joined on a Port
2.6.7 Configuring Multicast Group Replacement
2.7 Displaying and Maintaining IGMP Snooping
2.8 IGMP Snooping Configuration Examples
2.8.1 Configuring Simulated Joining
2.8.2 Static Router Port Configuration
2.8.3 IGMP Snooping Querier Configuration
2.9 Troubleshooting IGMP Snooping Configuration
2.9.1 Switch Fails in Layer 2 Multicast Forwarding
2.9.2 Configured Multicast Group Policy Fails to Take Effect

Chapter 3 MLD Snooping Configuration
3.1 MLD Snooping Overview
3.1.1 Introduction to MLD Snooping
3.1.2 Basic Concepts in MLD Snooping
3.1.3 How MLD Snooping Works
3.1.4 Protocols and Standards
3.2 MLD Snooping Configuration Task List
3.3 Configuring Basic Functions of MLD Snooping
3.3.1 Configuration Prerequisites
3.3.2 Enabling MLD Snooping
3.3.3 Configuring the Version of MLD Snooping
3.4 Configuring MLD Snooping Port Functions
3.4.1 Configuration Prerequisites
3.4.2 Configuring Aging Timers for Dynamic Ports
3.4.3 Configuring Static Ports
3.4.4 Configuring Simulated Joining
3.4.5 Configuring Fast Leave Processing
3.5 Configuring MLD Snooping Querier
3.5.1 Configuration Prerequisites
3.5.2 Enabling MLD Snooping Querier
3.5.3 Configuring MLD Queries and Responses
3.5.4 Configuring Source IPv6 Addresses of MLD Queries
3.6 Configuring an MLD Snooping Policy
3.6.1 Configuration Prerequisites
3.6.2 Configuring an IPv6 Multicast Group Filter
3.6.3 Configuring IPv6 Multicast Source Port Filtering
3.6.4 Configuring Dropping Unknown IPv6 Multicast Data
3.6.5 Configuring MLD Report Suppression
3.6.6 Configuring Maximum Multicast Groups that Can Be Joined on a Port
3.6.7 Configuring IPv6 Multicast Group Replacement
3.7 Displaying and Maintaining MLD Snooping
3.8 MLD Snooping Configuration Examples
3.8.1 Simulated Joining
3.8.2 Static Router Port Configuration
3.8.3 MLD Snooping Querier Configuration
3.9 Troubleshooting MLD Snooping
3.9.1 Switch Fails in Layer 2 Multicast Forwarding
3.9.2 Configured IPv6 Multicast Group Policy Fails to Take Effect

Chapter 4 Multicast VLAN Configuration
4.1 Introduction to Multicast VLAN
4.2 Configuring Multicast VLAN
4.3 Displaying and Maintaining Multicast VLAN
4.4 Multicast VLAN Configuration Example

Chapter 5 IPv6 Multicast VLAN Configuration
5.1 Introduction to IPv6 Multicast VLAN
5.2 Configuring IPv6 Multicast VLAN
5.3 Displaying and Maintaining IPv6 Multicast VLAN
5.4 IPv6 Multicast VLAN Configuration Examples

Chapter 6 IGMP Configuration
6.1 IGMP Overview
6.1.1 IGMP Versions
6.1.2 Work Mechanism of IGMPv1
6.1.3 Enhancements Provided by IGMPv2
6.1.4 Enhancements in IGMPv3
6.1.5 Protocols and Standards
6.2 IGMP Configuration Task List
6.3 Configuring Basic Functions of IGMP
6.3.1 Configuration Prerequisites
6.3.2 Enabling IGMP
6.3.3 Configuring IGMP Versions
6.3.4 Configuring a Static Member of a Multicast Group
6.3.5 Configuring a Multicast Group Filter
6.4 Adjusting IGMP Performance
6.4.1 Configuration Prerequisites
6.4.2 Configuring IGMP Message Options
6.4.3 Configuring IGMP Query and Response Parameters
6.4.4 Configuring IGMP Fast Leave Processing
6.5 Displaying and Maintaining IGMP
6.6 IGMP Configuration Example
6.7 Troubleshooting IGMP
6.7.1 No Member Information on the Receiver-Side Router
6.7.2 Inconsistent Memberships on Routers on the Same Subnet

Chapter 7 PIM Configuration
7.1 PIM Overview
7.1.1 Introduction to PIM-DM
7.1.2 How PIM-DM Works
7.1.3 Introduction to PIM-SM
7.1.4 How PIM-SM Works
7.1.5 Introduction to BSR Admin-scope Regions in PIM-SM
7.1.6 SSM Model Implementation in PIM
7.1.7 Protocols and Standards
7.2 Configuring PIM-DM
7.2.1 PIM-DM Configuration Task List
7.2.2 Configuration Prerequisites
7.2.3 Enabling PIM-DM
7.2.4 Enabling State Refresh
7.2.5 Configuring State Refresh Parameters
7.2.6 Configuring PIM-DM Graft Retry Period
7.3 Configuring PIM-SM
7.3.1 PIM-SM Configuration Task List
7.3.2 Configuration Prerequisites
7.3.3 Enabling PIM-SM
7.3.4 Configuring a BSR
7.3.5 Configuring an RP
7.3.6 Configuring PIM-SM Register Messages
7.3.7 Disabling RPT-to-SPT Switchover
7.4 Configuring PIM-SSM
7.4.1 PIM-SSM Configuration Task List
7.4.2 Configuration Prerequisites
7.4.3 Enabling PIM-SM
7.4.4 Configuring the SSM Group Range
7.5 Configuring PIM Common Information
7.5.1 PIM Common Information Configuration Task List
7.5.2 Configuration Prerequisites
7.5.3 Configuring a PIM Filter
7.5.4 Configuring PIM Hello Options
7.5.5 Configuring PIM Common Timers
7.5.6 Configuring Join/Prune Message Limits
7.6 Displaying and Maintaining PIM
7.7 PIM Configuration Examples
7.7.1 PIM-DM Configuration Example
7.7.2 PIM-SM Configuration Example
7.7.3 PIM-SSM Configuration Example
7.8 Troubleshooting PIM Configuration
7.8.1 Failure of Building a Multicast Distribution Tree Correctly
7.8.2 Multicast Data Abnormally Terminated on an Intermediate Router
7.8.3 RPs Unable to Join SPT in PIM-SM
7.8.4 No Unicast Route Between BSR and C-RPs in PIM-SM

Chapter 8 MSDP Configuration
8.1 MSDP Overview
8.1.1 Introduction to MSDP
8.1.2 How MSDP Works
8.1.3 Protocols and Standards
8.2 MSDP Configuration Task List
8.3 Configuring Basic Functions of MSDP
8.3.1 Configuration Prerequisites
8.3.2 Enabling MSDP
8.3.3 Creating an MSDP Peer Connection
8.3.4 Configuring a Static RPF Peer
8.4 Configuring an MSDP Peer Connection
8.4.1 Configuration Prerequisites
8.4.2 Configuring MSDP Peer Description
8.4.3 Configuring an MSDP Mesh Group
8.4.4 Configuring MSDP Peer Connection Control
8.5 Configuring SA Message-Related Parameters
8.5.1 Configuration Prerequisites
8.5.2 Configuring SA Message Content
8.5.3 Configuring SA Request Messages
8.5.4 Configuring an SA Message Filtering Rule
8.5.5 Configuring SA Message Cache
8.6 Displaying and Maintaining MSDP
8.7 MSDP Configuration Examples
8.7.1 Inter-AS Multicast Configuration Leveraging BGP Routes
8.7.2 Inter-AS Multicast Configuration Leveraging Static RPF Peers
8.7.3 Anycast RP Configuration
8.8 Troubleshooting MSDP
8.8.1 MSDP Peers Stay in Down State
8.8.2 No SA Entries in the Router’s SA Cache
8.8.3 Inter-RP Communication Faults in Anycast RP Application

Chapter 9 Multicast Routing and Forwarding Configuration
9.1 Multicast Routing and Forwarding Overview
9.1.1 Introduction to Multicast Routing and Forwarding
9.1.2 RPF Mechanism
9.1.3 Multicast Static Routes
9.1.4 Multicast Traceroute
9.2 Configuration Task List
9.3 Configuring Multicast Routing and Forwarding
9.3.1 Configuration Prerequisites
9.3.2 Enabling IP Multicast Routing
9.3.3 Configuring Multicast Static Routes
9.3.4 Configuring a Multicast Route Match Rule
9.3.5 Configuring Multicast Load Splitting
9.3.6 Configuring a Multicast Forwarding Range
9.3.7 Configuring the Multicast Forwarding Table Size
9.3.8 Tracing a Multicast Path
9.4 Displaying and Maintaining Multicast Routing and Forwarding
9.5 Configuration Examples
9.5.1 Changing an RPF Route
9.5.2 Creating an RPF Route
9.6 Troubleshooting Multicast Routing and Forwarding
9.6.1 Multicast Static Route Failure
9.6.2 Multicast Data Fails to Reach Receivers

 


Chapter 1  Multicast Overview

 

&  Note:

This manual chiefly focuses on the IP multicast technology and device operations. Unless otherwise stated, the term “multicast” in this document refers to IP multicast.

 

1.1  Introduction to Multicast

As a technique coexisting with unicast and broadcast, the multicast technique effectively addresses the issue of point-to-multipoint data transmission. By allowing high-efficiency point-to-multipoint data transmission over a network, multicast greatly saves network bandwidth and reduces network load.

With the multicast technology, a network operator can easily provide new value-added services, such as live Webcasting, Web TV, distance learning, telemedicine, Web radio, real-time videoconferencing, and other bandwidth- and time-critical information services.

1.1.1  Comparison of Information Transmission Techniques

I. Unicast

In unicast, the information source sends a separate copy of information to each host that needs the information, as shown in Figure 1-1.

Figure 1-1 Unicast transmission

Assume that Hosts B, D and E need this information. The information source establishes a separate transmission channel for each of these hosts.

In unicast transmission, the traffic over the network is proportional to the number of hosts that need the information. If a large number of users need the information, the information source needs to send a copy of the same information to each of these users. This means a tremendous pressure on the information source and the network bandwidth.

As we can see from the information transmission process, unicast is not suitable for batch transmission of information.

II. Broadcast

In broadcast, the information source sends information to all hosts on the network, even if some hosts do not need the information, as shown in Figure 1-2.

Figure 1-2 Broadcast transmission

Assume that only Hosts B, D, and E need the information. If the information source broadcasts the information, Hosts A and C also receive it. In addition to information security issues, this also causes traffic flooding on the same network.

Therefore, broadcast is disadvantageous for transmitting data to specific hosts; moreover, broadcast transmission wastes a significant amount of network resources.

III. Multicast

As discussed above, the unicast and broadcast techniques are unable to provide point-to-multipoint data transmissions with the minimum network consumption.

The multicast technique has solved this problem. When some hosts on the network need multicast information, the multicast source (Source in the figure) sends only one copy of the information. Multicast distribution trees are built for the multicast packets through multicast routing protocols, and the packets are replicated only on nodes where the trees branch, as shown in Figure 1-3:

Figure 1-3 Multicast transmission

Assume that Hosts B, D and E need the information. To receive the information correctly, these hosts need to join a receiver set, which is known as a multicast group. The routers on the network duplicate and forward the information based on the distribution of the receivers in this set. Finally, the information is correctly delivered to Hosts B, D, and E.

To sum up, multicast has the following advantages:

l           Over unicast: Because multicast traffic travels as a single copy to the farthest possible node from the source before it is replicated and distributed, an increase in the number of hosts does not remarkably add to the network load.

l           Over broadcast: Because multicast data is sent only to the receivers that need it, multicast uses network bandwidth reasonably, avoids waste of network resources, and enhances network security.

1.1.2  Roles in Multicast

The following roles are involved in multicast transmission:

l           An information sender is referred to as a Multicast Source (“Source” in Figure 1-3).

l           Each receiver is a Multicast Group Member (“Receiver” in Figure 1-3).

l           All receivers interested in the same information form a Multicast Group. Multicast groups are not subject to geographic restrictions.

l           A router that supports Layer 3 multicast is called a multicast router or Layer 3 multicast device. In addition to providing the multicast routing function, a multicast router can also manage multicast group members.

For a better understanding of the multicast concept, you can compare multicast transmission to the transmission of TV programs, as shown in Table 1-1.

Table 1-1 An analogy between TV transmission and multicast transmission

Step 1
l           TV transmission: A TV station transmits a TV program through a channel.
l           Multicast transmission: A multicast source sends multicast data to a multicast group.

Step 2
l           TV transmission: A user tunes the TV set to the channel.
l           Multicast transmission: A receiver joins the multicast group.

Step 3
l           TV transmission: The user starts to watch the TV program transmitted by the TV station via the channel.
l           Multicast transmission: The receiver starts to receive the multicast data that the source sends to the multicast group.

Step 4
l           TV transmission: The user turns off the TV set or tunes to another channel.
l           Multicast transmission: The receiver leaves the multicast group or joins another group.

 

&  Note:

l      A multicast source does not necessarily belong to a multicast group. Namely, a multicast source is not necessarily a multicast data receiver.

l      A multicast source can send data to multiple multicast groups at the same time, and multiple multicast sources can send data to the same multicast group at the same time.

 

1.1.3  Advantages and Applications of Multicast

I. Advantages of multicast

Advantages of the multicast technique include:

l           Enhanced efficiency: Reduces the CPU load on information source servers and network devices.

l           Optimal performance: Reduces redundant traffic.

l           Distributed application: Enables point-to-multipoint applications at the price of minimum network resources.

II. Applications of multicast

Applications of the multicast technique include:

l           Multimedia and streaming applications, such as Web TV, Web radio, and real-time video/audio conferencing.

l           Communication for training and cooperative operations, such as distance learning and telemedicine.

l           Data warehouse and financial applications (stock quotes).

l           Any other point-to-multipoint data distribution application.

1.2  Multicast Models

Based on how the receivers treat the multicast sources, there are two multicast models:

I. ASM model

In the ASM model, any sender can send information to a multicast group as a multicast source, and any number of receivers can join a multicast group identified by a group address and obtain multicast information addressed to that multicast group. In this model, receivers are not aware of the position of multicast sources in advance. However, they can join or leave the multicast group at any time.

II. SSM model

In actual applications, users may be interested in the multicast data from only certain multicast sources. The SSM model provides a transmission service that allows users to specify, at the client side, the multicast sources they are interested in.

The radical difference between the SSM model and the ASM model is that in the SSM model, receivers already know the locations of the multicast sources by some other means. In addition, the SSM model uses a multicast address range that is different from that of the ASM model, and dedicated multicast forwarding paths are established between receivers and the specified multicast sources.

1.3  Multicast Architecture

IP multicast addresses the following questions:

l           Where should the multicast source transmit information to? (multicast addressing)

l           What receivers exist on the network? (host registration)

l           Where is the multicast source from which the receivers need to receive multicast data? (multicast source discovery)

l           How should information be transmitted to the receivers? (multicast routing)

IP multicast is an end-to-end service. The multicast architecture involves the following four parts:

1)         Addressing mechanism: Information is sent from a multicast source to a group of receivers through a multicast address.

2)         Host registration: Receiver hosts are allowed to join and leave multicast groups dynamically. This mechanism is the basis for group membership management.

3)         Multicast routing: A multicast distribution tree (namely a forwarding path tree for multicast data on the network) is constructed for delivering multicast data from a multicast source to receivers.

4)         Multicast applications: A software system that supports multicast applications, such as video conferencing, must be installed on multicast sources and receiver hosts, and the TCP/IP stack must support reception and transmission of multicast data.

1.3.1  Multicast Addresses

To allow communication between multicast sources and multicast group members, network-layer multicast addresses (namely, multicast IP addresses) must be provided. In addition, a technique must be available to map multicast IP addresses to link-layer multicast MAC addresses.

I. IPv4 multicast addresses

Internet Assigned Numbers Authority (IANA) assigned the Class D address space (224.0.0.0 to 239.255.255.255) for IPv4 multicast. The specific address blocks and usages are shown in Table 1-2.

Table 1-2 Class D IP address blocks and description

l           224.0.0.0 to 224.0.0.255: Reserved permanent group addresses. The IP address 224.0.0.0 is reserved, and other IP addresses can be used by routing protocols and for topology searching, protocol maintenance, and so on. Commonly used permanent group addresses are listed in Table 1-3. A packet destined for an address in this block will not be forwarded beyond the local subnet regardless of the Time to Live (TTL) value in the IP header.

l           224.0.1.0 to 238.255.255.255: Globally scoped group addresses. This block includes two types of designated group addresses: 232.0.0.0/8 (SSM group addresses) and 233.0.0.0/8 (GLOP group addresses; for details, see RFC 2770).

l           239.0.0.0 to 239.255.255.255: Administratively scoped multicast addresses. These addresses are considered to be locally rather than globally unique, and can be reused in domains administered by different organizations without causing conflicts. For details, refer to RFC 2365.
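For readers who script against these ranges, the block boundaries above can be expressed with Python's standard ipaddress module. This is an illustrative sketch; the function name and return strings are ours, not part of the switch software:

```python
import ipaddress

def classify_ipv4_multicast(addr: str) -> str:
    """Name the Table 1-2 block that an IPv4 multicast address falls in."""
    ip = ipaddress.IPv4Address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not in 224.0.0.0/4")
    if ip in ipaddress.ip_network("224.0.0.0/24"):
        return "reserved permanent group (never forwarded off-subnet)"
    if ip in ipaddress.ip_network("232.0.0.0/8"):
        return "SSM group"
    if ip in ipaddress.ip_network("233.0.0.0/8"):
        return "GLOP group"
    if ip in ipaddress.ip_network("239.0.0.0/8"):
        return "administratively scoped"
    return "globally scoped group"
```

For example, 232.1.1.1 classifies as an SSM group, while 224.0.0.5 (the OSPF routers address) falls in the reserved permanent block.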

 

&  Note:

l      The membership of a group is dynamic. Hosts can join or leave multicast groups at any time.

l      GLOP is a mechanism for assigning multicast addresses between different autonomous systems (ASs). By filling an AS number into the middle two bytes of 233.0.0.0, you get a /24 block of 256 multicast addresses for that AS.
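The GLOP derivation described in the note can be sketched in Python (glop_block is an illustrative name of ours, not a switch command):

```python
def glop_block(as_number: int) -> str:
    """Return the GLOP /24 block (RFC 2770): the 16-bit AS number fills
    the middle two octets of 233.0.0.0/8."""
    if not 0 <= as_number <= 0xFFFF:
        raise ValueError("GLOP addressing covers 16-bit AS numbers only")
    return f"233.{as_number >> 8}.{as_number & 0xFF}.0/24"
```

For example, AS 5662 maps to 233.22.30.0/24 (5662 = 22 × 256 + 30).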

 

Table 1-3 Some reserved multicast addresses

l           224.0.0.1: All systems on this subnet, including hosts and routers

l           224.0.0.2: All multicast routers on this subnet

l           224.0.0.3: Unassigned

l           224.0.0.4: Distance Vector Multicast Routing Protocol (DVMRP) routers

l           224.0.0.5: Open Shortest Path First (OSPF) routers

l           224.0.0.6: OSPF designated routers/backup designated routers

l           224.0.0.7: Shared Tree (ST) routers

l           224.0.0.8: ST hosts

l           224.0.0.9: Routing Information Protocol version 2 (RIPv2) routers

l           224.0.0.11: Mobile agents

l           224.0.0.12: Dynamic Host Configuration Protocol (DHCP) server/relay agent

l           224.0.0.13: All Protocol Independent Multicast (PIM) routers

l           224.0.0.14: Resource Reservation Protocol (RSVP) encapsulation

l           224.0.0.15: All Core-Based Tree (CBT) routers

l           224.0.0.16: Designated Subnetwork Bandwidth Management (SBM)

l           224.0.0.17: All SBMs

l           224.0.0.18: Virtual Router Redundancy Protocol (VRRP)

 

II. IPv6 Multicast Addresses

As defined in RFC 4291, the format of an IPv6 multicast address is as follows:

Figure 1-4 IPv6 multicast format

l           0xFF: 8 bits, indicating that this address is an IPv6 multicast address.

l           Flags: 4 bits. The high-order bit is reserved and set to 0. The definition and usage of the second bit can be found in RFC 3956, and those of the third bit in RFC 3306. The low-order bit is the Transient (T) flag: when set to 0, the T flag indicates a permanently assigned multicast address allocated by IANA; when set to 1, it indicates a transient, dynamically assigned multicast address.

l           Scope: 4 bits, indicating the scope of the IPv6 internetwork for which the multicast traffic is intended. Possible values of this field are given in Table 1-4.

l           Reserved: 80 bits, all set to 0 currently.

l           Group ID: 112 bits, identifying the multicast group. For details about this field, refer to RFC 3306.

Table 1-4 Values of the Scope field

l           0, 3, F: Reserved

l           1: Node-local scope

l           2: Link-local scope

l           4: Admin-local scope

l           5: Site-local scope

l           6, 7, 9 through D: Unassigned

l           8: Organization-local scope

l           E: Global scope
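The Flags and Scope fields described above can be extracted programmatically. The following Python sketch (function and table names are ours) uses the standard ipaddress module:

```python
import ipaddress

# Scope nibble values from Table 1-4.
SCOPE_NAMES = {0x1: "node-local", 0x2: "link-local", 0x4: "admin-local",
               0x5: "site-local", 0x8: "organization-local", 0xE: "global"}

def parse_ipv6_multicast(addr: str):
    """Return (transient, scope_name) for an IPv6 multicast address.
    Both fields live in the second byte: Flags in the high nibble,
    Scope in the low nibble."""
    ip = ipaddress.IPv6Address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not an FF00::/8 address")
    flags, scope = ip.packed[1] >> 4, ip.packed[1] & 0xF
    transient = bool(flags & 0x1)   # T flag: 1 = dynamically assigned
    return transient, SCOPE_NAMES.get(scope, "reserved/unassigned")
```

For FF1E::F30E:0101, the second byte is 0x1E, so the T flag is 1 (transient) and the scope is E (global); for the well-known FF02::1, the address is permanent with link-local scope.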

 

III. Ethernet multicast MAC addresses

When a unicast IP packet is transmitted over Ethernet, the destination MAC address is the MAC address of the receiver. When a multicast packet is transmitted over Ethernet, however, the destination address is a multicast MAC address because the packet is directed to a group formed by a number of receivers, rather than to one specific receiver.

1)         IPv4 multicast MAC addresses

As defined by IANA, the high-order 24 bits of an IPv4 multicast MAC address are 0x01005e, bit 25 is 0, and the low-order 23 bits are the low-order 23 bits of the multicast IPv4 address. The IPv4-to-MAC mapping relation is shown in Figure 1-5.

Figure 1-5 IPv4-to-MAC address mapping

The high-order four bits of a multicast IPv4 address are 1110, indicating a multicast address, and only 23 of the remaining 28 bits are mapped to the MAC address, so five bits of the multicast IPv4 address are lost. As a result, every 32 multicast IPv4 addresses map to the same MAC address. Therefore, in Layer 2 multicast forwarding, a device may receive multicast data addressed to other IPv4 multicast groups, and such redundant data must be filtered out by the upper layer.
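The IPv4-to-MAC mapping and the resulting 32-to-1 address overlap can be illustrated with a short Python sketch (the helper name is ours, not part of the switch software):

```python
import ipaddress

def ipv4_multicast_mac(addr: str) -> str:
    """Build the Ethernet MAC for an IPv4 multicast address:
    fixed 25 bits (0x01005e followed by a 0 bit), then the
    low-order 23 bits of the IP address."""
    low23 = int(ipaddress.IPv4Address(addr)) & 0x7FFFFF
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(
        low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)
```

Because 224.0.1.1 and 239.128.1.1 share the same low-order 23 bits, both map to 01:00:5e:00:01:01, which is exactly the ambiguity the upper layer must filter out.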

2)         IPv6 multicast MAC addresses

The high-order 16 bits of an IPv6 multicast MAC address are 0x3333, and the low-order 32 bits are the low-order 32 bits of a multicast IPv6 address. Figure 1-6 shows an example of mapping an IPv6 multicast address, FF1E::F30E:0101, to a MAC address.

Figure 1-6 An example of IPv6-to-MAC address mapping
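The IPv6-to-MAC mapping is simpler, since no address bits are lost. A minimal Python sketch (helper name ours):

```python
import ipaddress

def ipv6_multicast_mac(addr: str) -> str:
    """Build the Ethernet MAC for an IPv6 multicast address:
    the fixed prefix 0x3333 followed by the low-order 32 bits."""
    low32 = ipaddress.IPv6Address(addr).packed[-4:]
    return "33:33:" + ":".join(f"{b:02x}" for b in low32)
```

Applied to the example in Figure 1-6, FF1E::F30E:0101 yields 33:33:f3:0e:01:01.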

1.3.2  Multicast Protocols

 

&  Note:

l      Generally, we refer to IP multicast working at the network layer as Layer 3 multicast and the corresponding multicast protocols as Layer 3 multicast protocols, which include IGMP/MLD, PIM/IPv6 PIM, and MSDP; we refer to IP multicast working at the data link layer as Layer 2 multicast and the corresponding multicast protocols as Layer 2 multicast protocols, which include IGMP Snooping/MLD Snooping, and multicast VLAN/IPv6 multicast VLAN.

l      IGMP Snooping, IGMP, multicast VLAN, PIM, and MSDP are for IPv4; MLD Snooping, MLD, IPv6 multicast VLAN, and IPv6 PIM are for IPv6.

This section provides only general descriptions about applications and functions of the Layer 2 and Layer 3 multicast protocols in a network. For details of these protocols, refer to the respective chapters.

 

I. Layer 3 multicast protocols

Layer 3 multicast protocols include multicast group management protocols and multicast routing protocols. Figure 1-7 describes where these multicast protocols are in a network.

Figure 1-7 Positions of Layer 3 multicast protocols

1)         Multicast group management protocols

Typically, the Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD) protocol runs between hosts and the Layer 3 multicast devices directly connected to the hosts. These protocols define the mechanism for establishing and maintaining group memberships between hosts and Layer 3 multicast devices.

2)         Multicast routing protocols

A multicast routing protocol runs on Layer 3 multicast devices to establish and maintain multicast routes and forward multicast packets correctly and efficiently. Multicast routes constitute a loop-free data transmission path from a data source to multiple receivers, namely, a multicast distribution tree.

In the ASM model, multicast routes come in intra-domain routes and inter-domain routes.

l           An intra-domain multicast routing protocol is used to discover multicast sources and build multicast distribution trees within an AS so as to deliver multicast data to receivers. Among a variety of mature intra-domain multicast routing protocols, protocol independent multicast (PIM) is a popular one. Based on the forwarding mechanism, PIM comes in two modes: dense mode (often referred to as PIM-DM) and sparse mode (often referred to as PIM-SM).

l           An inter-domain multicast routing protocol is used for delivery of multicast information between two ASs. So far, mature solutions include multicast source discovery protocol (MSDP).

For the SSM model, multicast routes are not divided into inter-domain routes and intra-domain routes. Since receivers know the position of the multicast source, channels established through PIM-SM are sufficient for multicast information transport.

II. Layer 2 multicast protocols

Layer 2 multicast protocols include IGMP Snooping/MLD Snooping and multicast VLAN/IPv6 multicast VLAN. Figure 1-8 shows where these protocols are in the network.

Figure 1-8 Position of Layer 2 multicast protocols

1)         IGMP Snooping/MLD Snooping

Running on Layer 2 devices, Internet Group Management Protocol Snooping (IGMP Snooping) and Multicast Listener Discovery Snooping (MLD Snooping) are multicast constraining mechanisms that manage and control multicast groups by listening to and analyzing IGMP or MLD messages exchanged between the hosts and Layer 3 multicast devices, thus effectively controlling the flooding of multicast data in a Layer 2 network.

2)         Multicast VLAN/IPv6 multicast VLAN

In the traditional multicast-on-demand mode, when users in different VLANs on a Layer 2 device need multicast information, the upstream Layer 3 device needs to forward a separate copy of the multicast data to each VLAN of the Layer 2 device. With the multicast VLAN or IPv6 multicast VLAN feature enabled on the Layer 2 device, the Layer 3 multicast device needs to send only one copy of the multicast data to the multicast VLAN or IPv6 multicast VLAN on the Layer 2 device. This avoids waste of network bandwidth and extra burden on the Layer 3 device.

1.4  Multicast Packet Forwarding Mechanism

In a multicast model, a multicast source sends information to the host group identified by the multicast group address in the destination address field of IP multicast packets. Therefore, to deliver multicast packets to receivers located in different parts of the network, multicast routers on the forwarding path usually need to forward multicast packets received on one incoming interface to multiple outgoing interfaces. Compared with a unicast model, a multicast model is more complex in the following aspects.

l           To ensure multicast packet transmission in the network, unicast routing tables or multicast routing tables specially provided for multicast must be used as guidance for multicast forwarding.

l           To process the same multicast information from different peers received on different interfaces of the same device, every multicast packet is subject to a reverse path forwarding (RPF) check on the incoming interface. The result of the RPF check determines whether the packet will be forwarded or discarded. The RPF check mechanism is the basis for most multicast routing protocols to implement multicast forwarding.
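The RPF check described above reduces to a simple rule: accept a multicast packet only if it arrived on the interface that the router itself would use to reach the source. The following Python sketch illustrates that rule (a simplification of ours; a real router performs a longest-prefix lookup in its unicast or multicast routing table):

```python
def rpf_check(source: str, in_interface: str, unicast_routes: dict) -> bool:
    """Pass a multicast packet only if it arrived on the interface this
    router would use to send unicast traffic back toward the source.
    unicast_routes maps a source address to its outgoing interface
    (standing in for a real routing-table lookup)."""
    expected = unicast_routes.get(source)
    return expected is not None and expected == in_interface
```

A packet from 10.1.1.1 arriving on the interface the route table points back toward passes the check and is forwarded; the same packet arriving on any other interface fails the check and is discarded.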

 

&  Note:

For details about the RPF mechanism, refer to RPF Mechanism.

 


Chapter 2  IGMP Snooping Configuration

When configuring IGMP Snooping, go to the following sections for information you are interested in:

l           IGMP Snooping Overview

l           IGMP Snooping Configuration Task List

l           Displaying and Maintaining IGMP Snooping

l           IGMP Snooping Configuration Examples

l           Troubleshooting IGMP Snooping Configuration

2.1  IGMP Snooping Overview

Internet Group Management Protocol Snooping (IGMP Snooping) is a multicast constraining mechanism that runs on Layer 2 devices to manage and control multicast groups.

2.1.1  Principle of IGMP Snooping

By analyzing received IGMP messages, a Layer 2 device running IGMP Snooping establishes mappings between ports and multicast IP addresses and forwards multicast data based on these mappings.

As shown in Figure 2-1, when IGMP Snooping is not running on the switch, multicast packets are broadcast to all devices at Layer 2. When IGMP Snooping is running on the switch, multicast packets for known multicast groups are multicast to the receivers, rather than broadcast to all hosts, at Layer 2.

Figure 2-1 Before and after IGMP Snooping is enabled on the Layer 2 device
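The before/after behavior in Figure 2-1 amounts to a single table lookup: with a snooping entry, a frame goes only to registered member ports; without one, it is flooded. A toy Python sketch of that decision (names are ours, not the switch implementation):

```python
def egress_ports(group: str, all_ports: set, snooping_table: dict) -> set:
    """Choose egress ports for a multicast frame on a Layer 2 switch.
    A known group (a snooping-table hit) reaches only its member ports;
    an unknown group is flooded to every port, as without snooping."""
    return snooping_table.get(group, all_ports)
```

With an entry mapping 224.1.1.1 to two member ports, only those ports receive the stream; a group with no entry is flooded to all ports, which is the pre-snooping broadcast behavior.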

2.1.2  Basic Concepts in IGMP Snooping

I. IGMP Snooping related ports

As shown in Figure 2-2, Router A connects to the multicast source; IGMP Snooping runs on Switch A and Switch B; and Host A and Host C are receiver hosts (namely, multicast group members).

Figure 2-2 IGMP Snooping related ports

Ports involved in IGMP Snooping, as shown in Figure 2-2, are described as follows:

l           Router port: A router port is a port on the Ethernet switch that leads switch towards the Layer 3 multicast device (DR or IGMP querier). In the figure, Ethernet 1/0/1 of Switch A and Ethernet 1/0/1 of Switch B are router ports. The switch registers all its local router ports (including static and dynamic router ports) in its router port list.

l           Member port: A member port is a port on the Ethernet switch that leads the switch toward multicast group members. In the figure, Ethernet 1/0/2 and Ethernet 1/0/3 of Switch A and Ethernet 1/0/2 of Switch B are member ports. The switch registers all the member ports (including static and dynamic member ports) on the local device in its IGMP Snooping forwarding table.

 

&  Note:

l      Whenever mentioned in this document, a router port is a port on the switch that leads the switch to a Layer 3 multicast device, rather than a port on a router.

l      An IGMP-Snooping-enabled switch deems all the ports on which IGMP general queries with a source address other than 0.0.0.0 or PIM hello messages are received to be router ports.

 

II. Aging timers for dynamic ports in IGMP Snooping and related messages and actions

Table 2-1 Aging timers for dynamic ports in IGMP Snooping and related messages and actions

Router port aging timer

l           Description: For each dynamic router port, the switch sets a timer initialized to the router port aging time.

l           Message before expiry: IGMP general query whose source address is not 0.0.0.0, or PIM hello message.

l           Action after expiry: The switch removes this port from its router port list.

Member port aging timer

l           Description: When a port dynamically joins a multicast group, the switch sets a timer for the port, initialized to the member port aging time.

l           Message before expiry: IGMP membership report.

l           Action after expiry: The switch removes this port from the multicast group forwarding table.

 

&  Note:

The port aging mechanism of IGMP Snooping works only for dynamic ports; a static port will never age out.

 

2.1.3  Work Mechanism of IGMP Snooping

A switch running IGMP Snooping performs different actions when it receives different IGMP messages, as follows:

I. When receiving a general query

The IGMP querier periodically sends IGMP general queries to all hosts and routers (224.0.0.1) on the local subnet to find out whether active multicast group members exist on the subnet.

Upon receiving an IGMP general query, the switch forwards it through all ports in the VLAN except the receiving port and performs the following to the receiving port:

l           If the receiving port is a router port existing in its router port list, the switch resets the aging timer of this router port.

l           If the receiving port is not a router port existing in its router port list, the switch adds it into its router port list and sets an aging timer for this router port.

II. When receiving a membership report

A host sends an IGMP report to the multicast router in the following circumstances:

l           Upon receiving an IGMP query, a multicast group member host responds with an IGMP report.

l           When intended to join a multicast group, a host sends an IGMP report to the multicast router to announce that it is interested in the multicast information addressed to that group.

Upon receiving an IGMP report, the switch forwards it through all the router ports in the VLAN, resolves the address of the reported multicast group, and performs the following:

l           If no forwarding table entry exists for the reported group, the switch creates an entry, adds the port as member port to the outgoing port list, and starts a member port aging timer for that port.

l           If a forwarding table entry exists for the reported group, but the port is not included in the outgoing port list for that group, the switch adds the port as a member port to the outgoing port list, and starts a member port aging timer for that port.

l           If a forwarding table entry exists for the reported group and the port is included in the outgoing port list, which means that this port is already a member port, the switch resets the member port aging timer for that port.

 

&  Note:

A switch does not forward an IGMP report through a non-router port. The reason is as follows: Due to the IGMP report suppression mechanism, if the switch forwards a report message through a member port, all the attached hosts listening to the reported multicast address will suppress their own reports upon hearing this report, and this will prevent the switch from knowing whether any hosts attached to that port are still active members of the reported multicast group.

For the description of IGMP report suppression mechanism, refer to Work Mechanism of IGMPv1.

 

III. When receiving a leave group message

When an IGMPv1 host leaves a multicast group, the host does not send an IGMP leave group message, so the switch cannot know immediately that the host has left the multicast group. However, as the host stops sending IGMP reports as soon as it leaves a multicast group, the switch deletes the forwarding entry for the member port corresponding to the host from the forwarding table when its aging timer expires.

When an IGMPv2 or IGMPv3 host leaves a multicast group, the host sends an IGMP leave group message to the multicast router.

When the switch hears a group-specific IGMP leave group message on a member port, it first checks whether a forwarding table entry for that group exists, and, if one exists, whether its outgoing port list contains that port.

l           If the forwarding table entry does not exist or if its outgoing port list does not contain the port, the switch discards the IGMP leave group message instead of forwarding it to any port.

l           If the forwarding table entry exists and its outgoing port list contains the port, the switch forwards the leave group message to all router ports in the VLAN. Because the switch does not know whether any other hosts attached to the port are still listening to that group address, the switch does not immediately remove the port from the outgoing port list of the forwarding table entry for that group; instead, it resets the member port aging timer for the port.

Upon receiving the IGMP leave group message from a host, the IGMP querier resolves from the message the address of the multicast group that the host just left and sends an IGMP group-specific query to that multicast group through the port that received the leave group message. Upon hearing the IGMP group-specific query, the switch forwards it through all its router ports in the VLAN and all member ports for that multicast group, and performs the following:

l           If any IGMP report in response to the group-specific query is heard on a member port before its aging timer expires, this means that some host attached to the port is receiving or expecting to receive multicast data for that multicast group. The switch resets the aging timer of the member port.

l           If no IGMP report in response to the group-specific query is heard on a member port before its aging timer expires, this means that no hosts attached to the port are still listening to that group address: the switch removes the port from the outgoing port list of the forwarding table entry for that multicast group when the aging timer expires.

2.1.4  Processing of Multicast Protocol Messages

With Layer 3 multicast routing enabled, an IGMP Snooping switch processes multicast protocol messages differently under different conditions, specifically as follows:

1)         If only IGMP is enabled, or both IGMP and PIM are enabled on the switch, the switch handles multicast protocol messages in the normal way.

2)         If only PIM is enabled on the switch:

l           The switch broadcasts IGMP messages as unknown messages in the VLAN.

l           Upon receiving a PIM hello message, the switch will maintain the corresponding router port.

3)         When IGMP is disabled on the switch, or when IGMP forwarding entries are cleared (by using the reset igmp group command):

l           If PIM is disabled, the switch clears all its Layer 2 multicast entries and router ports.

l           If PIM is enabled, the switch clears only its Layer 2 multicast entries without deleting its router ports.

4)         When PIM is disabled on the switch:

l           If IGMP is disabled, the switch clears all its router ports.

l           If IGMP is enabled, the switch maintains all its Layer 2 multicast entries and router ports.

2.1.5  Protocols and Standards

IGMP Snooping is documented in:

l           RFC 4541: Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches

2.2  IGMP Snooping Configuration Task List

Complete these tasks to configure IGMP Snooping:

Task

Remarks

Configuring Basic Functions of IGMP Snooping

Enabling IGMP Snooping

Required

Configuring the Version of IGMP Snooping

Optional

Configuring IGMP Snooping Port Functions

Configuring Aging Timers for Dynamic Ports

Optional

Configuring Static Ports

Optional

Configuring Simulated Joining

Optional

Configuring Fast Leave Processing

Optional

Configuring IGMP Snooping Querier

Enabling IGMP Snooping Querier

Optional

Configuring IGMP Queries and Responses

Optional

Configuring Source IP Address of IGMP Queries

Optional

Configuring an IGMP Snooping Policy

Configuring a Multicast Group Filter

Optional

Configuring Multicast Source Port Filtering

Optional

Configuring the Function of Dropping Unknown Multicast Data

Optional

Configuring IGMP Report Suppression

Optional

Configuring Maximum Multicast Groups that Can Be Joined on a Port

Optional

Configuring Multicast Group Replacement

Optional

 

&  Note:

l      Configurations made in IGMP Snooping view are effective for all VLANs, while configurations made in VLAN view are effective only for ports belonging to the current VLAN. For a given VLAN, a configuration made in IGMP Snooping view is effective only if the same configuration is not made in VLAN view.

l      Configurations made in IGMP Snooping view are effective for all ports; configurations made in Ethernet port view are effective only for the current port; configurations made in manual port group view are effective only for all the ports in the current port group; configurations made in aggregation group view are effective only for the master port. For a given port, a configuration made in IGMP Snooping view is effective only if the same configuration is not made in Ethernet port view or port group view.

 

2.3  Configuring Basic Functions of IGMP Snooping

2.3.1  Configuration Prerequisites

Before configuring the basic functions of IGMP Snooping, complete the following task:

l           Configure the corresponding VLANs.

Before configuring the basic functions of IGMP Snooping, prepare the following data:

l           Version of IGMP Snooping.

2.3.2  Enabling IGMP Snooping

Follow these steps to enable IGMP Snooping:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable IGMP Snooping globally and enter IGMP-Snooping view

igmp-snooping

Required

Disabled by default

Return to system view

quit

Enter VLAN view

vlan vlan-id

Enable IGMP Snooping in the VLAN

igmp-snooping enable

Required

Disabled by default

 

&  Note:

l      IGMP Snooping must be enabled globally before it can be enabled in a VLAN.

l      After enabling IGMP Snooping in a VLAN, you cannot enable IGMP and/or PIM on the corresponding VLAN interface, and vice versa.

l      When you enable IGMP Snooping in a specified VLAN, this function takes effect for Ethernet ports in this VLAN only.
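For example, the following commands enable IGMP Snooping globally and then in a VLAN (the device name Sysname and VLAN 100 are placeholders assumed for illustration):

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] quit
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping enable
```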

 

2.3.3  Configuring the Version of IGMP Snooping

By configuring an IGMP Snooping version, you actually configure the version of IGMP messages that IGMP Snooping can process.

l           IGMP Snooping version 2 can process IGMPv1 and IGMPv2 messages, but not IGMPv3 messages, which will be flooded in the VLAN.

l           IGMP Snooping version 3 can process IGMPv1, IGMPv2 and IGMPv3 messages.

Follow these steps to configure the version of IGMP Snooping:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure the version of IGMP Snooping

igmp-snooping version version-number

Optional

Version 2 by default.
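For example, to configure IGMP Snooping version 3 in a VLAN (VLAN 100 and the device name Sysname are placeholders for illustration):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping version 3
```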

 

  Caution:

If you switch IGMP Snooping from version 3 to version 2, the system will clear all IGMP Snooping forwarding entries from dynamic joins, and will:

l      Keep forwarding entries for version 3 static (*, G) joins;

l      Clear forwarding entries from version 3 static (S, G) joins, which will be restored when IGMP Snooping is switched back to version 3.

For details about static joins, refer to Configuring Static Ports.

 

2.4  Configuring IGMP Snooping Port Functions

2.4.1  Configuration Prerequisites

Before configuring IGMP Snooping port functions, complete the following tasks:

l           Enable IGMP Snooping in the VLAN or enable IGMP on the desired VLAN interface

l           Configure the corresponding port groups.

Before configuring IGMP Snooping port functions, prepare the following data:

l           Aging time of router ports,

l           Aging time of member ports, and

l           Multicast group and multicast source addresses.

2.4.2  Configuring Aging Timers for Dynamic Ports

If the switch receives no IGMP general queries or PIM hello messages on a dynamic router port, the switch removes the port from the router port list when the aging timer of the port expires.

If the switch receives no IGMP reports for a multicast group on a dynamic member port, the switch removes the port from the outgoing port list of the forwarding table entry for that multicast group when the aging timer of the port for that group expires.

If multicast group memberships change frequently, you can set a relatively small value for the member port aging timer, and vice versa.

I. Configuring aging timers for dynamic ports globally

Follow these steps to configure aging timers for dynamic ports globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Configure router port aging time

router-aging-time interval

Optional

105 seconds by default

Configure member port aging time

host-aging-time interval

Optional

260 seconds by default

 

II. Configuring aging timers for dynamic ports in a VLAN

Follow these steps to configure aging timers for dynamic ports in a VLAN:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure router port aging time

igmp-snooping router-aging-time interval

Optional

105 seconds by default

Configure member port aging time

igmp-snooping host-aging-time interval

Optional

260 seconds by default
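As a sketch of both approaches, the following commands set global aging times in IGMP Snooping view and then override the router port aging time in one VLAN (the timer values, VLAN 100, and the device name Sysname are assumptions for illustration):

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] router-aging-time 300
[Sysname-igmp-snooping] host-aging-time 300
[Sysname-igmp-snooping] quit
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping router-aging-time 500
```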

 

2.4.3  Configuring Static Ports

If all the hosts attached to a port are interested in the multicast data addressed to a particular multicast group or the multicast data that a particular multicast source sends to a particular group, you can configure static (*, G) or (S, G) joining on that port, namely configure the port as a group-specific or source-and-group-specific static member port.

You can configure a port of a switch as a static router port, through which the switch forwards all the multicast traffic it receives.

Follow these steps to configure static ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command.

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure the port(s) as static member port(s)

igmp-snooping static-group group-address [ source-ip source_address ] vlan vlan-id

Required

Disabled by default

Configure the port(s) as static router port(s)

igmp-snooping static-router-port vlan vlan-id

Required

Disabled by default

 

&  Note:

l      The static (S, G) joining function is available only if a valid multicast source address is specified and IGMP Snooping version 3 is currently running on the switch.

l      A static member port does not respond to queries from the IGMP querier; when static (*, G) or (S, G) joining is enabled or disabled on a port, the port does not send an unsolicited IGMP report or an IGMP leave group message.

l      Static member ports and static router ports never age out. To remove such a port, you need to use the corresponding command.
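For example, the following commands configure a static member port for a group and a static router port (the port numbers, group address 224.1.1.1, VLAN 100, and the device name Sysname are placeholders for illustration):

```
<Sysname> system-view
[Sysname] interface ethernet 1/0/3
[Sysname-Ethernet1/0/3] igmp-snooping static-group 224.1.1.1 vlan 100
[Sysname-Ethernet1/0/3] quit
[Sysname] interface ethernet 1/0/1
[Sysname-Ethernet1/0/1] igmp-snooping static-router-port vlan 100
```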

 

2.4.4  Configuring Simulated Joining

Generally, a host running IGMP responds to IGMP queries from the IGMP querier. If a host fails to respond due to some reasons, the multicast router may deem that no member of this multicast group exists on the network segment, and therefore will remove the corresponding forwarding path.

To prevent this situation, you can enable simulated joining on a port of the switch, namely, configure the port as a simulated member host for a multicast group. When an IGMP query is heard, the simulated host gives a response, so the switch can continue receiving multicast data.

A simulated host acts like a real host, as follows:

l           When a port is configured as a simulated member host, the switch sends an unsolicited IGMP report through that port.

l           After a port is configured as a simulated member host, the switch responds to IGMP general queries by sending IGMP reports through that port.

l           When the simulated joining function is disabled on a port, the switch sends an IGMP leave group message through that port.

Follow these steps to configure simulated joining:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure simulated (*, G) or (S, G) joining

igmp-snooping host-join group-address [ source-ip source-address ] vlan vlan-id

Required

Disabled by default

 

&  Note:

l      Each simulated host is equivalent to an independent host. For example, upon receiving an IGMP query, each configured simulated host responds separately.

l      Unlike a static member port, a port configured as a simulated member host will age out like a dynamic member port.
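For example, to configure a port as a simulated member host of a multicast group (the port number, group address 224.1.1.1, VLAN 100, and the device name Sysname are assumptions for illustration):

```
<Sysname> system-view
[Sysname] interface ethernet 1/0/2
[Sysname-Ethernet1/0/2] igmp-snooping host-join 224.1.1.1 vlan 100
```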

 

2.4.5  Configuring Fast Leave Processing

The fast leave processing feature allows the switch to process IGMP leave group messages in a fast way. With the fast leave processing feature enabled, when receiving an IGMP leave group message on a port, the switch immediately removes that port from the outgoing port list of the forwarding table entry for the indicated group. Then, when receiving IGMP group-specific queries for that multicast group, the switch will not forward them to that port.

In VLANs where only one host is attached to each port, fast leave processing helps improve bandwidth and resource usage.

I. Configuring fast leave processing globally

Follow these steps to configure fast leave processing globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Enable fast leave processing

fast-leave [ vlan vlan-list ]

Required

Disabled by default

 

II. Configuring fast leave processing on a port or a group of ports

Follow these steps to configure fast leave processing on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Enable fast leave processing

igmp-snooping fast-leave [ vlan vlan-list ]

Required

Disabled by default

 

  Caution:

If fast leave processing is enabled on a port to which more than one host is attached, when one host leaves a multicast group, the other hosts attached to the port and interested in the same multicast group will fail to receive multicast data for that group.
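As a sketch, assuming that only one host is attached to port Ethernet 1/0/2 in VLAN 100 (port, VLAN, and device name Sysname are placeholders for illustration), fast leave processing can be enabled on that port as follows:

```
<Sysname> system-view
[Sysname] interface ethernet 1/0/2
[Sysname-Ethernet1/0/2] igmp-snooping fast-leave vlan 100
```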

 

2.5  Configuring IGMP Snooping Querier

2.5.1  Configuration Prerequisites

Before configuring IGMP Snooping querier, complete the following task:

l           Enable IGMP Snooping in the VLAN.

Before configuring IGMP Snooping querier, prepare the following data:

l           IGMP general query interval,

l           IGMP last-member query interval,

l           Maximum response time to IGMP general queries,

l           Source address of IGMP general queries, and

l           Source address of IGMP group-specific queries.

2.5.2  Enabling IGMP Snooping Querier

In an IP multicast network running IGMP, a multicast router or Layer 3 multicast switch is responsible for sending IGMP general queries, so that all Layer 3 multicast devices can establish and maintain multicast forwarding entries and multicast traffic can be forwarded correctly at the network layer. This router or Layer 3 switch is called the IGMP querier.

However, a Layer 2 multicast switch does not support IGMP, and therefore cannot send general queries by default. By enabling IGMP Snooping on a Layer 2 switch in a VLAN where multicast traffic needs to be Layer-2 switched only and no multicast routers are present, the Layer 2 switch will act as the IGMP Snooping querier to send IGMP queries, thus allowing multicast forwarding entries to be established and maintained at the data link layer.

Follow these steps to enable IGMP Snooping querier:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Enable IGMP Snooping querier

igmp-snooping querier

Required

Disabled by default
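For example, to enable the IGMP Snooping querier in a VLAN (VLAN 100 and the device name Sysname are placeholders for illustration):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping querier
```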

 

  Caution:

It is meaningless to configure an IGMP Snooping querier in a multicast network running IGMP. Although an IGMP Snooping querier does not take part in IGMP querier elections, it may affect IGMP querier elections because it sends IGMP general queries with a low source IP address.

 

2.5.3  Configuring IGMP Queries and Responses

You can tune the IGMP general query interval based on actual condition of the network.

Upon receiving an IGMP query (general query or group-specific query), a host starts a timer for each multicast group it has joined. This timer is initialized to a random value in the range of 0 to the maximum response time (the host obtains the value of the maximum response time from the Max Response Time field in the IGMP query it received). When the timer value comes down to 0, the host sends an IGMP report to the corresponding multicast group.

An appropriate setting of the maximum response time for IGMP queries allows hosts to respond to queries quickly and avoids bursts of IGMP traffic on the network caused by reports simultaneously sent by a large number of hosts when the corresponding timers expire simultaneously.

l           For IGMP general queries, you can configure the maximum response time to fill their Max Response time field.

l           For IGMP group-specific queries, you can configure the IGMP last-member query interval to fill their Max Response time field. Namely, for IGMP group-specific queries, the maximum response time equals the IGMP last-member query interval.

I. Configuring IGMP queries and responses globally

Follow these steps to configure IGMP queries and responses globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Configure the maximum response time to IGMP general queries

max-response-time interval

Optional

10 seconds by default

Configure the IGMP last-member query interval

last-member-query-interval interval

Optional

1 second by default

 

II. Configuring IGMP queries and responses in a VLAN

Follow these steps to configure IGMP queries and responses in a VLAN:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure IGMP general query interval

igmp-snooping query-interval interval

Optional

60 seconds by default

Configure the maximum response time to IGMP general queries

igmp-snooping max-response-time interval

Optional

10 seconds by default

Configure the IGMP last-member query interval

igmp-snooping last-member-query-interval interval

Optional

1 second by default
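For example, the following commands tune the query parameters in a VLAN; note that the general query interval (120 seconds here) stays larger than the maximum response time (15 seconds), as the caution below requires. The values, VLAN 100, and the device name Sysname are assumptions for illustration:

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping query-interval 120
[Sysname-vlan100] igmp-snooping max-response-time 15
[Sysname-vlan100] igmp-snooping last-member-query-interval 2
```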

 

  Caution:

In the configuration, make sure that the IGMP general query interval is larger than the maximum response time for IGMP general queries. Otherwise, multicast group members may be deleted by mistake.

 

2.5.4  Configuring Source IP Address of IGMP Queries

Upon receiving an IGMP query whose source IP address is 0.0.0.0 on a port, the switch will not set that port as a router port. This may prevent multicast forwarding entries from being correctly created at the data link layer and eventually cause multicast traffic forwarding failures. To avoid this problem when a Layer 2 device acts as the IGMP Snooping querier, you are recommended to configure a non-all-zero IP address as the source IP address of IGMP queries.

Follow these steps to configure source IP address of IGMP queries:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure the source address of IGMP general queries

igmp-snooping general-query source-ip { current-interface | ip-address }

Optional

0.0.0.0 by default

Configure the source IP address of IGMP group-specific queries

igmp-snooping special-query source-ip { current-interface | ip-address }

Optional

0.0.0.0 by default
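For example, to configure a non-all-zero source IP address for both query types in a VLAN (the address 192.168.1.1, VLAN 100, and the device name Sysname are placeholders for illustration):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping general-query source-ip 192.168.1.1
[Sysname-vlan100] igmp-snooping special-query source-ip 192.168.1.1
```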

 

  Caution:

The source address of IGMP query messages may affect IGMP querier selection within the segment.

 

2.6  Configuring an IGMP Snooping Policy

2.6.1  Configuration Prerequisites

Before configuring an IGMP Snooping policy, complete the following task:

l           Enable IGMP Snooping in the VLAN or enable IGMP on the desired VLAN interface

Before configuring an IGMP Snooping policy, prepare the following data:

l           ACL rule for multicast group filtering

l           The maximum number of multicast groups that can pass the ports

2.6.2  Configuring a Multicast Group Filter

On an IGMP Snooping–enabled switch, configuring a multicast group filter allows the service provider to define restrictions on the multicast programs available to different users.

In an actual application, when a user requests a multicast program, the user’s host initiates an IGMP report. Upon receiving this report message, the switch checks the report against the configured ACL rule. If the port on which the report was heard can join this multicast group, the switch adds an entry for this port in the IGMP Snooping forwarding table; otherwise the switch drops this report message. Any multicast data that has failed the ACL check will not be sent to this port. In this way, the service provider can control the VOD programs provided for multicast users.

I. Configuring a multicast group filter globally

Follow these steps to configure a multicast group filter globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Configure a multicast group filter

group-policy acl-number [ vlan vlan-list ]

Required

No group filter is configured by default, namely hosts can join any multicast group.

 

II. Configuring a multicast group filter on a port or a group of ports

Follow these steps to configure a multicast group filter on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure a multicast group filter

igmp-snooping group-policy acl-number [ vlan vlan-list ]

Required

No filter is configured by default, namely hosts can join any multicast group.
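As a sketch, the following commands define a basic ACL that permits one multicast group address and apply it as a group filter on a port (the ACL number 2000, group address 224.1.1.1, port, VLAN 100, and device name Sysname are assumptions for illustration):

```
<Sysname> system-view
[Sysname] acl number 2000
[Sysname-acl-basic-2000] rule permit source 224.1.1.1 0
[Sysname-acl-basic-2000] quit
[Sysname] interface ethernet 1/0/2
[Sysname-Ethernet1/0/2] igmp-snooping group-policy 2000 vlan 100
```

With this filter, hosts on the port can join only the group 224.1.1.1; reports for other groups are dropped.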

 

2.6.3  Configuring Multicast Source Port Filtering

With the multicast source port filtering feature enabled on a port, the port can be connected to multicast receivers only, not to multicast sources: the port blocks all multicast data packets while permitting multicast protocol packets to pass.

If this feature is disabled on a port, the port can be connected with both multicast sources and multicast receivers.

I. Configuring multicast source port filtering globally

Follow these steps to configure multicast source port filtering globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Enable multicast source port filtering

source-deny port interface-list

Required

Disabled by default

 

II. Configuring multicast source port filtering on a port or a group of ports

Follow these steps to configure multicast source port filtering on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Enable multicast source port filtering

igmp-snooping source-deny

Required

Disabled by default
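For example, to enable multicast source port filtering on a single port (the port number and device name Sysname are placeholders for illustration):

```
<Sysname> system-view
[Sysname] interface ethernet 1/0/2
[Sysname-Ethernet1/0/2] igmp-snooping source-deny
```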

 

&  Note:

When enabled to filter IPv4 multicast data based on the source ports, the device is automatically enabled to filter IPv6 multicast data based on the source ports.

 

2.6.4  Configuring the Function of Dropping Unknown Multicast Data

Unknown multicast data refers to multicast data for which no entries exist in the IGMP Snooping forwarding table. When the switch receives such multicast traffic:

l           With the function of dropping unknown multicast data enabled, the switch drops all the unknown multicast data received.

l           With the function of dropping unknown multicast data disabled, the switch floods unknown multicast data in the VLAN to which the unknown multicast data belongs.

Follow these steps to configure the function of dropping unknown multicast data in a VLAN:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Enable the function of dropping unknown multicast data

igmp-snooping drop-unknown

Required

Disabled by default
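For example, to enable dropping of unknown multicast data in a VLAN (VLAN 100 and the device name Sysname are placeholders for illustration):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] igmp-snooping drop-unknown
```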

 

&  Note:

When enabled to drop unknown IPv4 multicast data, the device is automatically enabled to drop unknown IPv6 multicast data.

 

2.6.5  Configuring IGMP Report Suppression

When a Layer 2 device receives an IGMP report from a multicast group member, the device forwards the message to the Layer 3 device directly connected with it. Thus, when multiple members of a multicast group are attached to the Layer 2 device, the Layer 3 device directly connected with it will receive duplicate IGMP reports from these members.

With the IGMP report suppression function enabled, within each query cycle the Layer 2 device forwards only the first IGMP report for each multicast group to the Layer 3 device and does not forward subsequent IGMP reports from other members of the same multicast group. This helps reduce the number of packets being transmitted over the network.

Follow these steps to configure IGMP report suppression:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Enable IGMP report suppression

report-aggregation

Optional

Enabled by default
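IGMP report suppression is enabled by default; as a sketch, it can be re-enabled (for example, after being undone) in IGMP Snooping view as follows (the device name Sysname is a placeholder):

```
<Sysname> system-view
[Sysname] igmp-snooping
[Sysname-igmp-snooping] report-aggregation
```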

 

2.6.6  Configuring Maximum Multicast Groups that Can Be Joined on a Port

By configuring the maximum number of multicast groups that can be joined on a port, you can limit the number of on-demand multicast programs available to users, thus regulating traffic on the port.

Follow these steps to configure the maximum number of multicast groups that can be joined on a port or ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure the maximum number of multicast groups that can be joined on the port(s)

igmp-snooping group-limit limit [ vlan vlan-list ]

Optional

The default is 1024.

 

&  Note:

l      When the number of multicast groups a port has joined reaches the configured maximum, the system deletes all the forwarding entries pertinent to that port from the IGMP Snooping forwarding table, and the hosts on this port must join the multicast groups again.

l      If you have configured static or simulated joins on a port, however, when the number of multicast groups on the port exceeds the configured threshold, the system deletes all the forwarding entries pertinent to that port from the IGMP Snooping forwarding table and applies the static or simulated joins again, until the number of multicast groups joined on the port falls back within the configured threshold.
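As a sketch of the procedure above, the following limits a port to 256 multicast groups in VLAN 100 (the port number, limit, and VLAN ID are illustrative):

```
<SwitchA> system-view
[SwitchA] interface GigabitEthernet 1/0/1
[SwitchA-GigabitEthernet1/0/1] igmp-snooping group-limit 256 vlan 100
```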

 

2.6.7  Configuring Multicast Group Replacement

In some cases, the number of multicast groups joined on the switch or on a port may exceed the configured limit. In addition, in some specific applications, a multicast group newly joined on the switch needs to replace an existing multicast group automatically. A typical example is “channel switching”, namely, by joining a new multicast group, a user automatically switches from the current multicast group to the new one.

To address such situations, you can enable the multicast group replacement function on the switch or on certain ports. When the number of multicast groups the switch or a port has joined reaches the limit:

l           If the multicast group replacement feature is enabled, the newly joined multicast group automatically replaces an existing multicast group with the lowest address.

l           If the multicast group replacement feature is not enabled, new IGMP reports will be automatically discarded.

I. Configuring multicast group replacement globally

Follow these steps to configure multicast group replacement globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter IGMP Snooping view

igmp-snooping

Configure multicast group replacement

overflow-replace [ vlan vlan-list ]

Required

Disabled by default

 

II. Configuring multicast group replacement on a port or a group of ports

Follow these steps to configure multicast group replacement on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure multicast group replacement

igmp-snooping overflow-replace [ vlan vlan-list ]

Required

Disabled by default

 

  Caution:

Be sure to configure the maximum number of multicast groups allowed on a port (refer to Configuring Maximum Multicast Groups that Can Be Joined on a Port) before configuring multicast group replacement. Otherwise, the multicast group replacement functionality will not take effect.
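Following the caution above, a minimal sketch that first configures the port limit and then enables replacement on the same port (the device name, port, limit, and VLAN ID are illustrative):

```
<SwitchA> system-view
[SwitchA] interface GigabitEthernet 1/0/1
[SwitchA-GigabitEthernet1/0/1] igmp-snooping group-limit 256 vlan 100
[SwitchA-GigabitEthernet1/0/1] igmp-snooping overflow-replace vlan 100
```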

 

2.7  Displaying and Maintaining IGMP Snooping

To do...

Use the command...

Remarks

View the information of IGMP Snooping multicast groups

display igmp-snooping group [ vlan vlan-id ] [ verbose ]

Available in any view

View the statistics information of IGMP messages learned by IGMP Snooping

display igmp-snooping statistics

Available in any view

Clear IGMP Snooping multicast group information

reset igmp-snooping group { group-address | all } [ vlan vlan-id ]

Available in user view

Clear the statistics information of all kinds of IGMP messages learned by IGMP Snooping

reset igmp-snooping statistics

Available in user view

 

&  Note:

l      The reset igmp-snooping group command works only on an IGMP Snooping–enabled VLAN, but not on a VLAN with IGMP enabled on its VLAN interface.

l      The reset igmp-snooping group command cannot clear IGMP Snooping forwarding table entries for static joins.

 

2.8  IGMP Snooping Configuration Examples

2.8.1  Configuring Simulated Joining

I. Network requirements

l           As shown in Figure 2-3, Router A connects to the multicast source through GigabitEthernet 1/0/2 and to Switch A through GigabitEthernet 1/0/1.

l           IGMP is required on Router A, IGMP Snooping is required on Switch A, and Router A will act as the IGMP querier on the subnet.

l           Perform the following configuration so that multicast data can be forwarded through GigabitEthernet 1/0/3 and GigabitEthernet 1/0/4 even if Host A and Host B temporarily stop receiving multicast data for some unexpected reasons.

II. Network diagram

Figure 2-3 Network diagram for simulated joining configuration

III. Configuration procedure

1)         Configure the IP address of each interface

Configure an IP address and subnet mask for each interface as per Figure 2-3. The detailed configuration steps are omitted.

2)         Configure Router A

# Enable IP multicast routing, enable PIM-DM on each interface, and enable IGMPv2 on GigabitEthernet 1/0/1.

<RouterA> system-view

[RouterA] multicast routing-enable

[RouterA] interface GigabitEthernet 1/0/1

[RouterA-GigabitEthernet1/0/1] igmp enable

[RouterA-GigabitEthernet1/0/1] pim dm

[RouterA-GigabitEthernet1/0/1] quit

[RouterA] interface GigabitEthernet 1/0/2

[RouterA-GigabitEthernet1/0/2] pim dm

[RouterA-GigabitEthernet1/0/2] quit

3)         Configure Switch A

# Enable IGMP Snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 through GigabitEthernet 1/0/4 to this VLAN, and enable IGMP Snooping in the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/4

[SwitchA-vlan100] igmp-snooping enable

[SwitchA-vlan100] quit

# Enable simulated host joining on GigabitEthernet 1/0/3 and GigabitEthernet 1/0/4 respectively.

[SwitchA] interface GigabitEthernet 1/0/3

[SwitchA-GigabitEthernet1/0/3] igmp-snooping host-join 224.1.1.1 vlan 100

[SwitchA-GigabitEthernet1/0/3] quit

[SwitchA] interface GigabitEthernet 1/0/4

[SwitchA-GigabitEthernet1/0/4] igmp-snooping host-join 224.1.1.1 vlan 100

[SwitchA-GigabitEthernet1/0/4] quit

4)         Verify the configuration

# View the detailed information about IGMP Snooping multicast groups in VLAN 100 on Switch A.

[SwitchA] display igmp-snooping group vlan 100 verbose

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

 

  Port flags: D-Dynamic port, S-Static port, A-Aggregation port, C-Copy port

  Subvlan flags: R-Real VLAN, C-Copy VLAN

  Vlan(id):100.

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

    Router port(s):total 1 port.

            GE1/0/1               (D) ( 00:01:30 )

    IP group(s):the following ip group(s) match to one mac group.

      IP group address:224.1.1.1

        (0.0.0.0, 224.1.1.1):

          Attribute:    Host Port

          Host port(s):total 2 port.

            GE1/0/3               (D) ( 00:03:23 )

            GE1/0/4               (D) ( 00:03:23 )

    MAC group(s):

      MAC group address:0100-5e01-0101

          Host port(s):total 2 port.

            GE1/0/3

            GE1/0/4

As shown above, GigabitEthernet 1/0/3 and GigabitEthernet 1/0/4 of Switch A have joined multicast group 224.1.1.1.

2.8.2  Static Router Port Configuration

I. Network requirements

l           As shown in Figure 2-4, Router A connects to a multicast source (Source) through GigabitEthernet 1/0/2, and to Switch A through GigabitEthernet 1/0/1.

l           IGMP is to run between Router A and Switch A, and IGMP Snooping is to run on Switch A, Switch B and Switch C, with Router A acting as the IGMP querier.

l           Suppose STP runs on the network. To avoid data loops, the forwarding path from Switch A to Switch C is blocked under normal conditions, and multicast traffic flows to the receivers, Host A and Host C, attached to Switch C only along the path of Switch A—Switch B—Switch C.

l           Now it is required to configure GigabitEthernet 1/0/3, which connects Switch A to Switch C, as a static router port, so that multicast traffic can flow to the receivers nearly uninterrupted along the path of Switch A—Switch C in case the path of Switch A—Switch B—Switch C gets blocked.

 

&  Note:

If no static router port is configured, when the path of Switch A—Switch B—Switch C gets blocked, at least one IGMP query-response cycle must be completed before multicast data can flow to the receivers along the new path of Switch A—Switch C; that is, multicast delivery will be interrupted during this process.

 

II. Network diagram

Figure 2-4 Network diagram for static router port configuration

III. Configuration procedure

1)         Configure the IP address of each interface

Configure an IP address and subnet mask for each interface as per Figure 2-4. The detailed configuration steps are omitted.

2)         Configure Router A

# Enable IP multicast routing, enable PIM-DM on each interface, and enable IGMP on GigabitEthernet 1/0/1.

<RouterA> system-view

[RouterA] multicast routing-enable

[RouterA] interface GigabitEthernet 1/0/1

[RouterA-GigabitEthernet1/0/1] igmp enable

[RouterA-GigabitEthernet1/0/1] pim dm

[RouterA-GigabitEthernet1/0/1] quit

[RouterA] interface GigabitEthernet 1/0/2

[RouterA-GigabitEthernet1/0/2] pim dm

[RouterA-GigabitEthernet1/0/2] quit

3)         Configure Switch A

# Enable IGMP Snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 to this VLAN, and enable IGMP Snooping in the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/3

[SwitchA-vlan100] igmp-snooping enable

[SwitchA-vlan100] quit

# Configure GigabitEthernet 1/0/3 to be a static router port.

[SwitchA] interface GigabitEthernet 1/0/3

[SwitchA-GigabitEthernet1/0/3] igmp-snooping static-router-port vlan 100

[SwitchA-GigabitEthernet1/0/3] quit

4)         Configure Switch B

# Enable IGMP Snooping globally.

<SwitchB> system-view

[SwitchB] igmp-snooping

[SwitchB-igmp-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 to this VLAN, and enable IGMP Snooping in the VLAN.

[SwitchB] vlan 100

[SwitchB-vlan100] port GigabitEthernet 1/0/1 GigabitEthernet 1/0/2

[SwitchB-vlan100] igmp-snooping enable

[SwitchB-vlan100] quit

5)         Configure Switch C

# Enable IGMP Snooping globally.

<SwitchC> system-view

[SwitchC] igmp-snooping

[SwitchC-igmp-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 through GigabitEthernet 1/0/5 to this VLAN, and enable IGMP Snooping in the VLAN.

[SwitchC] vlan 100

[SwitchC-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/5

[SwitchC-vlan100] igmp-snooping enable

[SwitchC-vlan100] quit

6)         Verify the configuration

# View the detailed information about IGMP Snooping multicast groups in VLAN 100 on Switch A.

[SwitchA] display igmp-snooping group vlan 100 verbose

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

 

  Port flags: D-Dynamic port, S-Static port, A-Aggregation port, C-Copy port

  Subvlan flags: R-Real VLAN, C-Copy VLAN

  Vlan(id):100.

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

    Router port(s):total 2 port.

            GE1/0/1               (D) ( 00:01:30 )

            GE1/0/3               (S)

    IP group(s):the following ip group(s) match to one mac group.

      IP group address:224.1.1.1

        (0.0.0.0, 224.1.1.1):

          Attribute:    Host Port

          Host port(s):total 1 port.

            GE1/0/2               (D) ( 00:03:23 )

    MAC group(s):

      MAC group address:0100-5e01-0101

          Host port(s):total 1 port.

            GE1/0/2

As shown above, GigabitEthernet 1/0/3 of Switch A has become a static router port.

2.8.3  IGMP Snooping Querier Configuration

I. Network requirements

l           As shown in Figure 2-5, in a Layer-2-only network environment, Switch C is connected to the multicast source (Source) through GigabitEthernet 1/0/3. At least one receiver is attached to Switch B and Switch C respectively.

l           IGMPv2 is enabled on all the receivers. Switch A, Switch B, and Switch C run IGMP Snooping. Switch A acts as the IGMP-Snooping querier.

l           Configure a non-all-zero IP address as the source IP address of IGMP queries to ensure normal creation of multicast forwarding entries.

II. Network diagram

Figure 2-5 Network diagram for IGMP Snooping querier configuration

III. Configuration procedure

1)         Configure Switch A

# Enable IGMP Snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 100 and add GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 to VLAN 100.

[SwitchA] vlan 100

[SwitchA-vlan100] port GigabitEthernet 1/0/1 GigabitEthernet 1/0/2

# Enable IGMP Snooping in VLAN 100 and configure the IGMP-Snooping querier feature.

[SwitchA-vlan100] igmp-snooping enable

[SwitchA-vlan100] igmp-snooping querier

# Set the source IP address of IGMP general queries and group-specific queries to 192.168.1.1.

[SwitchA-vlan100] igmp-snooping general-query source-ip 192.168.1.1

[SwitchA-vlan100] igmp-snooping special-query source-ip 192.168.1.1

2)         Configure Switch B

# Enable IGMP Snooping globally.

<SwitchB> system-view

[SwitchB] igmp-snooping

[SwitchB-igmp-snooping] quit

# Create VLAN 100, add GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 to VLAN 100, and enable IGMP Snooping in this VLAN.

[SwitchB] vlan 100

[SwitchB-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/3

[SwitchB-vlan100] igmp-snooping enable

3)         Configure Switch C

# Enable IGMP Snooping globally.

<SwitchC> system-view

[SwitchC] igmp-snooping

[SwitchC-igmp-snooping] quit

# Create VLAN 100, add GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 to VLAN 100, and enable IGMP Snooping in this VLAN.

[SwitchC] vlan 100

[SwitchC-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/3

[SwitchC-vlan100] igmp-snooping enable

4)         Verify the configuration

# View the IGMP message statistics on Switch C.

[SwitchC-vlan100] display igmp-snooping statistics

  Received IGMP general queries:3.

  Received IGMPv1 reports:0.

  Received IGMPv2 reports:4.

  Received IGMP leaves:0.

  Received IGMPv2 specific queries:0.

  Sent     IGMPv2 specific queries:0.

  Received IGMPv3 reports:0.

  Received IGMPv3 reports with right and wrong records:0.

  Received IGMPv3 specific queries:0.

  Received IGMPv3 specific sg queries:0.

  Sent     IGMPv3 specific queries:0.

  Sent     IGMPv3 specific sg queries:0.

  Received error IGMP messages:0.

Switch C received IGMP general queries. This means that Switch A works as an IGMP-Snooping querier.

2.9  Troubleshooting IGMP Snooping Configuration

2.9.1  Switch Fails in Layer 2 Multicast Forwarding

I. Symptom

A switch fails to implement Layer 2 multicast forwarding.

II. Analysis

IGMP Snooping is not enabled.

III. Solution

1)         Enter the display current-configuration command to view the running status of IGMP Snooping.

2)         If IGMP Snooping is not enabled, use the igmp-snooping command to enable IGMP Snooping globally, and then use the igmp-snooping enable command in VLAN view to enable IGMP Snooping in the corresponding VLAN.

3)         If IGMP Snooping is disabled only for the corresponding VLAN, just use the igmp-snooping enable command in VLAN view to enable IGMP Snooping in the corresponding VLAN.
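A minimal sketch of the fix described above, assuming the affected VLAN is VLAN 100 (the device name and VLAN ID are illustrative):

```
<SwitchA> system-view
[SwitchA] igmp-snooping
[SwitchA-igmp-snooping] quit
[SwitchA] vlan 100
[SwitchA-vlan100] igmp-snooping enable
```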

2.9.2  Configured Multicast Group Policy Fails to Take Effect

I. Symptom

Although a multicast group policy has been configured to allow hosts to join specific multicast groups, the hosts can still receive multicast data addressed to other multicast groups.

II. Analysis

l           The ACL rule is incorrectly configured.

l           The multicast group policy is not correctly applied.

l           The function of dropping unknown multicast data is not enabled, so unknown multicast data is flooded.

l           Certain ports have been configured as static member ports of multicast groups, and this configuration conflicts with the configured multicast group policy.

III. Solution

1)         Use the display acl command to check the configured ACL rule. Make sure that the ACL rule conforms to the multicast group policy to be implemented.

2)         Use the display this command in IGMP Snooping view or in the corresponding interface view to check whether the correct multicast group policy has been applied. If not, use the group-policy or igmp-snooping group-policy command to apply the correct multicast group policy.

3)         Use the display current-configuration command to check whether the function of dropping unknown multicast data is enabled. If not, use the igmp-snooping drop-unknown command to enable the function of dropping unknown multicast data.

4)         Use the display igmp-snooping group command to check whether any port has been configured as a static member port of any multicast group. If so, check whether this configuration conflicts with the configured multicast group policy. If any conflict exists, remove the port as a static member of the multicast group.

 


Chapter 3  MLD Snooping Configuration

When configuring MLD Snooping, go to these sections for information you are interested in:

l           MLD Snooping Overview

l           MLD Snooping Configuration Task List

l           Displaying and Maintaining MLD Snooping

l           MLD Snooping Configuration Examples

l           Troubleshooting MLD Snooping

3.1  MLD Snooping Overview

Multicast Listener Discovery Snooping (MLD Snooping) is an IPv6 multicast constraining mechanism that runs on Layer 2 devices to manage and control IPv6 multicast groups.

3.1.1  Introduction to MLD Snooping

By analyzing received MLD messages, a Layer 2 device running MLD Snooping establishes mappings between ports and multicast MAC addresses and forwards IPv6 multicast data based on these mappings.

As shown in Figure 3-1, when MLD Snooping is not running, IPv6 multicast packets are broadcast to all devices at Layer 2. When MLD Snooping runs, multicast packets for known IPv6 multicast groups are multicast to the receivers at Layer 2.

Figure 3-1 Before and after MLD Snooping is enabled on the Layer 2 device

3.1.2  Basic Concepts in MLD Snooping

I. MLD Snooping related ports

As shown in Figure 3-2, Router A connects to the multicast source, MLD Snooping runs on Switch A and Switch B, and Host A and Host C are receiver hosts (namely, IPv6 multicast group members).

Figure 3-2 MLD Snooping related ports

Ports involved in MLD Snooping, as shown in Figure 3-2, are described as follows:

l           Router port: A router port is a port on the Ethernet switch that leads the switch towards the Layer-3 multicast device (DR or MLD querier). In the figure, Ethernet 1/0/1 of Switch A and Ethernet 1/0/1 of Switch B are router ports. The switch registers all its local router ports (including static and dynamic router ports) in its router port list.

l           Member port: A member port (also known as IPv6 multicast group member port) is a port on the Ethernet switch that leads the switch towards multicast group members. In the figure, Ethernet 1/0/2 and Ethernet 1/0/3 of Switch A and Ethernet 1/0/2 of Switch B are member ports. The switch registers all the member ports (including static and dynamic member ports) on the local device in its MLD Snooping forwarding table.

 

&  Note:

l      Whenever mentioned in this document, a router port is a router-connecting port on the switch, rather than a port on a router.

l      On an MLD-Snooping-enabled switch, the ports that receive MLD general queries with a source address other than 0::0, or that receive IPv6 PIM hello messages, are considered router ports.

 

II. Aging timers for dynamic ports in MLD Snooping

Table 3-1 Aging timers for dynamic ports in MLD Snooping and related messages and actions

Timer

Description

Message before expiry

Action after expiry

Router port aging timer

For each dynamic router port, the switch sets a timer initialized to the aging time of the router port.

MLD general query of which the source address is not 0::0 or IPv6 PIM hello.

The switch removes this port from its router port list.

Member port aging timer

When a port joins an IPv6 multicast group, the switch sets a timer for the port, which is initialized to the member port aging time.

MLD report message.

The switch removes this port from the IPv6 multicast group forwarding table.

 

&  Note:

The port aging mechanism of MLD Snooping works only for dynamic ports; a static port will never age out.

 

3.1.3  How MLD Snooping Works

A switch running MLD Snooping performs different actions when it receives different MLD messages, as follows:

I. General queries

The MLD querier periodically sends MLD general queries to all hosts and routers (FF02::1) on the local subnet to find out whether IPv6 multicast group members exist on the subnet.

Upon receiving an MLD general query, the switch forwards it through all ports in the VLAN except the receiving port and performs the following to the receiving port:

l           If the receiving port is a router port existing in its router port list, the switch resets the aging timer of this router port.

l           If the receiving port is not a router port existing in its router port list, the switch adds it into its router port list and sets an aging timer for this router port.

II. Membership reports

A host sends an MLD report to the multicast router in the following circumstances:

l           Upon receiving an MLD query, an IPv6 multicast group member host responds with an MLD report.

l           When intended to join an IPv6 multicast group, a host sends an MLD report to the multicast router to announce that it is interested in the multicast information addressed to that IPv6 multicast group.

Upon receiving an MLD report, the switch forwards it through all the router ports in the VLAN, resolves the address of the reported IPv6 multicast group, and performs the following to the receiving port:

l           If no forwarding table entry exists for the reported IPv6 multicast group, the switch creates an entry, adds the port as member port to the outgoing port list, and starts a member port aging timer for that port.

l           If a forwarding table entry exists for the reported IPv6 multicast group, but the port is not included in the outgoing port list for that group, the switch adds the port as a member port to the outgoing port list, and starts a member port aging timer for that port.

l           If a forwarding table entry exists for the reported IPv6 multicast group and the port is included in the outgoing port list, which means that this port is already a member port, the switch resets the member port aging timer for that port.

 

&  Note:

A switch does not forward an MLD report through a non-router port. The reason is as follows: Due to the MLD report suppression mechanism, if the switch forwards a report message through a member port, all the attached hosts listening to the reported IPv6 multicast address will suppress their own reports upon hearing this report, and this will prevent the switch from knowing whether any hosts attached to that port are still active members of the reported IPv6 multicast group.

 

III. Done messages

When a host leaves an IPv6 multicast group, the host sends an MLD done message to the multicast router.

When the switch receives a group-specific MLD done message on a member port, it first checks whether a forwarding table entry for that IPv6 multicast group exists, and, if one exists, whether its outgoing port list contains that port.

l           If the forwarding table entry does not exist or if its outgoing port list does not contain the port, the switch discards the MLD done message instead of forwarding it to any port.

l           If the forwarding table entry exists and its outgoing port list contains the port, the switch forwards the done message to all router ports in the VLAN. Because the switch does not know whether any other hosts attached to the port are still listening to that IPv6 multicast group address, the switch does not immediately remove the port from the outgoing port list of the forwarding table entry for that group; instead, it resets the member port aging timer for the port.

Upon receiving an MLD done message from a host, the MLD querier resolves from the message the address of the IPv6 multicast group that the host just left and sends an MLD multicast-address-specific query to that IPv6 multicast group through the port that received the done message. Upon hearing the MLD multicast-address-specific query, the switch forwards it through all its router ports in the VLAN and all member ports for that IPv6 multicast group, and performs the following to the receiving port:

l           If any MLD report in response to the MLD multicast-address-specific query is heard on a member port before its aging timer expires, this means that some host attached to the port is receiving or expecting to receive IPv6 multicast data for that IPv6 multicast group. The switch resets the aging timer of the member port.

l           If no MLD report in response to the MLD multicast-address-specific query is heard on a member port before its aging timer expires, this means that no hosts attached to the port are still listening to that IPv6 multicast group address. The switch removes the port from the outgoing port list of the forwarding table entry for that IPv6 multicast group when the aging timer expires.

3.1.4  Protocols and Standards

MLD Snooping is documented in:

l           RFC 4541: Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches

3.2  MLD Snooping Configuration Task List

Complete these tasks to configure MLD Snooping:

Task

Remarks

Configuring Basic Functions of MLD Snooping

Enabling MLD Snooping

Required

Configuring the Version of MLD Snooping

Optional

Configuring MLD Snooping Port Functions

Configuring Aging Timers for Dynamic Ports

Optional

Configuring Static Ports

Optional

Configuring Simulated Joining

Optional

Configuring Fast Leave Processing

Optional

Configuring MLD Snooping Querier

Enabling MLD Snooping Querier

Optional

Configuring MLD Queries and Responses

Optional

Configuring Source IPv6 Addresses of MLD Queries

Optional

Configuring an MLD Snooping Policy

Configuring an IPv6 Multicast Group Filter

Optional

Configuring IPv6 Multicast Source Port Filtering

Optional

Configuring Dropping Unknown IPv6 Multicast Data

Optional

Configuring MLD Report Suppression

Optional

Configuring Maximum Multicast Groups that Can Be Joined on a Port

Optional

Configuring IPv6 Multicast Group Replacement

Optional

 

&  Note:

l      Configurations made in MLD Snooping view are effective for all VLANs, while configurations made in VLAN view are effective only for ports belonging to the current VLAN. For a given VLAN, a configuration made in MLD Snooping view is effective only if the same configuration is not made in VLAN view.

l      Configurations made in MLD Snooping view are effective for all ports; configurations made in Ethernet port view are effective only for the current port; configurations made in manual port group view are effective only for all the ports in the current port group; configurations made in aggregation group view are effective only for the master port. For a given port, a configuration made in MLD Snooping view is effective only if the same configuration is not made in Ethernet port view or port group view.

 

3.3  Configuring Basic Functions of MLD Snooping

3.3.1  Configuration Prerequisites

Before configuring the basic functions of MLD Snooping, complete the following tasks:

l           Configure the corresponding VLANs

Before configuring the basic functions of MLD Snooping, prepare the following data:

l           The version of MLD Snooping

3.3.2  Enabling MLD Snooping

Follow these steps to enable MLD Snooping:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable MLD Snooping globally and enter MLD-Snooping view

mld-snooping

Required

Disabled by default

Return to system view

quit

Enter VLAN view

vlan vlan-id

Enable MLD Snooping in the VLAN

mld-snooping enable

Required

Disabled by default

 

&  Note:

l      MLD Snooping must be enabled globally before it can be enabled in a VLAN.

l      After enabling MLD Snooping in a VLAN, you cannot enable MLD and/or IPv6 PIM on the corresponding VLAN interface, and vice versa.

l      When you enable MLD Snooping in a specified VLAN, this function takes effect for Ethernet ports in this VLAN only.
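The steps in the table above can be sketched as follows; the device name and VLAN 100 are illustrative:

```
<SwitchA> system-view
[SwitchA] mld-snooping
[SwitchA-mld-snooping] quit
[SwitchA] vlan 100
[SwitchA-vlan100] mld-snooping enable
```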

 

3.3.3  Configuring the Version of MLD Snooping

By configuring the MLD Snooping version, you actually configure the version of MLD messages that MLD Snooping can process.

l           MLD Snooping version 1 can process MLDv1 messages, but cannot analyze and process MLDv2 messages, which will be flooded in the VLAN.

l           MLD Snooping version 2 can process MLDv1 and MLDv2 messages.

Follow these steps to configure the version of MLD Snooping:

To do…

Use the command…

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure the version of MLD Snooping

mld-snooping version version-number

Optional

Version 1 by default

 

  Caution:

If you switch MLD Snooping from version 2 to version 1, the system will clear all MLD Snooping forwarding entries from dynamic joins, and will:

l      Keep forwarding entries from version 2 static (*, G) joins;

l      Clear forwarding entries from version 2 static (S, G) joins, which will be restored when MLD Snooping is switched back to version 2.

For details about static joins, refer to Configuring Static Ports.
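A minimal sketch of switching a VLAN to MLD Snooping version 2 so that MLDv2 messages can be processed (the device name and VLAN ID are illustrative):

```
<SwitchA> system-view
[SwitchA] vlan 100
[SwitchA-vlan100] mld-snooping version 2
```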

 

3.4  Configuring MLD Snooping Port Functions

3.4.1  Configuration Prerequisites

Before configuring MLD Snooping port functions, complete the following tasks:

l           Enable MLD Snooping in the VLAN

l           Configure the corresponding port groups

Before configuring MLD Snooping port functions, prepare the following data:

l           Aging time of router ports

l           Aging time of member ports

l           IPv6 multicast group and IPv6 multicast source addresses

3.4.2  Configuring Aging Timers for Dynamic Ports

If the switch receives no MLD general queries or IPv6 PIM hello messages on a dynamic router port, the switch removes the port from the router port list when the aging timer of the port expires.

If the switch receives no MLD reports for an IPv6 multicast group on a dynamic member port, the switch removes the port from the outgoing port list of the forwarding table entry for that IPv6 multicast group when the aging timer of the port for that group expires.

If IPv6 multicast group memberships change frequently, you can set a relatively small value for the member port aging timer, and vice versa.

I. Configuring aging timers for dynamic ports globally

Follow these steps to configure aging timers for dynamic ports globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Configure router port aging time

router-aging-time interval

Optional

260 seconds by default

Configure member port aging time

host-aging-time interval

Optional

260 seconds by default

 

II. Configuring aging timers for dynamic ports in a VLAN

Follow these steps to configure aging timers for dynamic ports in a VLAN:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure router port aging time

mld-snooping router-aging-time interval

Optional

260 seconds by default

Configure member port aging time

mld-snooping host-aging-time interval

Optional

260 seconds by default
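
For example, the following sketch sets both aging timers to 300 seconds in VLAN 100 (the device name Sysname, VLAN 100, and the value 300 are illustrative only):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] mld-snooping router-aging-time 300
[Sysname-vlan100] mld-snooping host-aging-time 300
```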

 

3.4.3  Configuring Static Ports

If all the hosts attached to a port are interested in the IPv6 multicast data addressed to a particular IPv6 multicast group, you can configure that port as a static member port for that IPv6 multicast group.

You can configure a port of a switch as a static router port, through which the switch forwards all the IPv6 multicast data it receives.

Follow these steps to configure static ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure the port(s) as static member port(s)

mld-snooping static-group ipv6-group-address [ source-ip ipv6-source-address ] vlan vlan-id

Required

Disabled by default

Configure the port(s) as static router port(s)

mld-snooping static-router-port vlan vlan-id

Required

Disabled by default
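
For example, the following sketch configures a port both as a static member port for an IPv6 multicast group and as a static router port (the device name Sysname, the port, IPv6 multicast group FF1E::101, and VLAN 100 are illustrative only):

```
<Sysname> system-view
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] mld-snooping static-group ff1e::101 vlan 100
[Sysname-GigabitEthernet1/0/1] mld-snooping static-router-port vlan 100
```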

 

l      The IPv6 static (S, G) joining function is available only if a valid IPv6 multicast source address is specified and MLD Snooping version 2 is currently running on the switch.

l      A static member port does not respond to queries from the MLD querier; when static (*, G) or (S, G) joining is enabled or disabled on a port, the port does not send an unsolicited MLD report or an MLD done message.

l      Static member ports and static router ports never age out. To remove such a port, you need to use the corresponding command.

 

3.4.4  Configuring Simulated Joining

Generally, a host running MLD responds to MLD queries from the MLD querier. If a host fails to respond for some reason, the multicast router will deem that no member of this IPv6 multicast group exists on the network segment, and will therefore remove the corresponding forwarding path.

To avoid this situation, you can enable simulated joining on a port of the switch, namely configure the port as a simulated member host for an IPv6 multicast group. When an MLD query arrives, the simulated host responds to it. Thus, the switch can continue receiving IPv6 multicast data.

A simulated host acts like a real host, as follows:

l           When a port is configured as a simulated member host, the switch sends an unsolicited MLD report through that port.

l           After a port is configured as a simulated member host, the switch responds to MLD general queries by sending MLD reports through that port.

l           When the simulated joining function is disabled on a port, the switch sends an MLD done message through that port.

Follow these steps to configure simulated joining:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure simulated joining

mld-snooping host-join ipv6-group-address [ source-ip ipv6-source-address ] vlan vlan-id

Required

Disabled by default

 

&  Note:

l      Each simulated host is equivalent to an independent host. For example, when an MLD query arrives, each configured simulated host responds separately.

l      Unlike a static member port, a port configured as a simulated member host will age out like a dynamic member port.

 

3.4.5  Configuring Fast Leave Processing

The fast leave processing feature allows the switch to process MLD done messages quickly. With fast leave processing enabled, when the switch receives an MLD done message on a port, it immediately removes that port from the outgoing port list of the forwarding table entry for the indicated IPv6 multicast group. Then, when the switch receives MLD multicast-address-specific queries for that IPv6 multicast group, it will not forward them to that port.

In VLANs where only one host is attached to each port, fast leave processing helps improve bandwidth and resource usage.

I. Configuring fast leave processing globally

Follow these steps to configure fast leave processing globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Enable fast leave processing

fast-leave [ vlan vlan-list ]

Required

Disabled by default

 

II. Configuring fast leave processing on a port or a group of ports

Follow these steps to configure fast leave processing on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Enable fast leave processing

mld-snooping fast-leave [ vlan vlan-list ]

Required

Disabled by default
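
For example, the following sketch enables fast leave processing on a single port for VLAN 100 (the device name Sysname, the port, and the VLAN are illustrative only):

```
<Sysname> system-view
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] mld-snooping fast-leave vlan 100
```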

 

  Caution:

If fast leave processing is enabled on a port to which more than one host is connected, when one host leaves an IPv6 multicast group, the other hosts connected to the port and interested in the same IPv6 multicast group will fail to receive IPv6 multicast data addressed to that group.

 

3.5  Configuring MLD Snooping Querier

3.5.1  Configuration Prerequisites

Before configuring MLD Snooping querier, complete the following task:

l           Enable MLD Snooping in the VLAN.

Before configuring MLD Snooping querier, prepare the following data:

l           MLD general query interval,

l           MLD last-member query interval,

l           Maximum response time for MLD general queries,

l           Source IPv6 address of MLD general queries, and

l           Source IPv6 address of MLD multicast-address-specific queries.

3.5.2  Enabling MLD Snooping Querier

In an IPv6 multicast network running MLD, a multicast router or Layer 3 multicast switch is responsible for sending periodic MLD general queries, so that all Layer 3 multicast devices can establish and maintain multicast forwarding entries and thus forward multicast traffic correctly at the network layer. This router or Layer 3 switch is called the MLD querier.

However, a Layer 2 multicast switch does not support MLD, and therefore cannot send MLD general queries by default. By enabling MLD Snooping querier on a Layer 2 switch in a VLAN where multicast traffic needs to be Layer-2 switched only and no Layer 3 multicast devices are present, the Layer 2 switch will act as the MLD querier to send periodic MLD queries, thus allowing multicast forwarding entries to be established and maintained at the data link layer.

Follow these steps to enable the MLD Snooping querier:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Enable the MLD Snooping querier

mld-snooping querier

Required

Disabled by default

 

  Caution:

It is meaningless to configure an MLD Snooping querier in an IPv6 multicast network running MLD. Although an MLD Snooping querier does not take part in MLD querier elections, it may affect MLD querier elections because it sends MLD general queries with a low source IPv6 address.

 

3.5.3  Configuring MLD Queries and Responses

You can tune the MLD general query interval based on the actual conditions of the network.

Upon receiving an MLD query (general query or multicast-address-specific query), a host starts a timer for each IPv6 multicast group it has joined. This timer is initialized to a random value in the range of 0 to the maximum response time (the host obtains the maximum response time from the Max Response Time field in the MLD query it received). When the timer decreases to 0, the host sends an MLD report to the corresponding IPv6 multicast group.

An appropriate setting of the maximum response time for MLD queries allows hosts to respond to queries quickly while avoiding bursts of MLD traffic on the network caused by a large number of hosts sending reports at the same time when their timers expire simultaneously.

l           For MLD general queries, you can configure the maximum response time to fill their Max Response Time field.

l           For MLD multicast-address-specific queries, you can configure the MLD last-member query interval to fill their Max Response Time field. Namely, for MLD multicast-address-specific queries, the maximum response time equals the MLD last-member query interval.

I. Configuring MLD queries and responses globally

Follow these steps to configure MLD queries and responses globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Configure the maximum response time for MLD general queries

max-response-time interval

Optional

10 seconds by default

Configure the MLD last-member query interval

last-listener-query-interval interval

Optional

1 second by default

 

II. Configuring MLD queries and responses in a VLAN

Follow these steps to configure MLD queries and responses in a VLAN

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure MLD query interval

mld-snooping query-interval interval

Optional

125 seconds by default

Configure the maximum response time for MLD general queries

mld-snooping max-response-time interval

Optional

10 seconds by default

Configure the MLD last-member query interval

mld-snooping last-listener-query-interval interval

Optional

1 second by default
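
For example, the following sketch tunes the three timers in VLAN 100 (the device name Sysname, the VLAN, and the timer values are illustrative only; note that the query interval of 60 seconds stays greater than the 5-second maximum response time):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] mld-snooping query-interval 60
[Sysname-vlan100] mld-snooping max-response-time 5
[Sysname-vlan100] mld-snooping last-listener-query-interval 2
```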

 

  Caution:

Make sure that the MLD query interval is greater than the maximum response time for MLD general queries; otherwise undesired deletion of IPv6 multicast members may occur.

 

3.5.4  Configuring Source IPv6 Addresses of MLD Queries

This configuration allows you to change the source IPv6 address of MLD queries.

Follow these steps to configure source IPv6 addresses of MLD queries:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Configure the source IPv6 address of MLD general queries

mld-snooping general-query source-ip { current-interface | ipv6-address }

Optional

FE80::02FF:FFFF:FE00:0001 by default

Configure the source IPv6 address of MLD multicast-address-specific queries

mld-snooping special-query source-ip { current-interface | ipv6-address }

Optional

FE80::02FF:FFFF:FE00:0001 by default
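
For example, the following sketch sets both source IPv6 addresses in VLAN 100 (the device name Sysname, the VLAN, and the link-local address FE80::1 are illustrative only):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] mld-snooping general-query source-ip fe80::1
[Sysname-vlan100] mld-snooping special-query source-ip fe80::1
```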

 

  Caution:

The source IPv6 address of MLD query messages may affect MLD querier election within the segment.

 

3.6  Configuring an MLD Snooping Policy

3.6.1  Configuration Prerequisites

Before configuring an MLD Snooping policy, complete the following tasks:

l           Enable MLD Snooping in the VLAN

Before configuring an MLD Snooping policy, prepare the following data:

l           IPv6 ACL rule for IPv6 multicast group filtering

l           The maximum number of IPv6 multicast groups that can pass the ports

3.6.2  Configuring an IPv6 Multicast Group Filter

On an MLD Snooping-enabled switch, configuring an IPv6 multicast group filter allows the service provider to limit the multicast programs available to different users.

In an actual application, when a user requests a multicast program, the user’s host initiates an MLD report. Upon receiving this report, the switch checks it against the configured ACL rule. If the port on which the report was received is allowed to join this IPv6 multicast group, the switch adds an entry for this port in the MLD Snooping forwarding table; otherwise, the switch drops the report. Any IPv6 multicast data that fails the ACL check will not be sent to this port. In this way, the service provider can control the VOD programs provided for multicast users.

I. Configuring an IPv6 multicast group filter globally

Follow these steps to configure an IPv6 multicast group filter globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Configure an IPv6 multicast group filter

group-policy acl6-number [ vlan vlan-list ]

Required

No IPv6 filter configured by default, namely hosts can join any IPv6 multicast group.

 

II. Configuring an IPv6 multicast group filter on a port or a group of ports

Follow these steps to configure an IPv6 multicast group filter on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure an IPv6 multicast group filter

mld-snooping group-policy acl6-number [ vlan vlan-list ]

Required

No IPv6 filter configured by default, namely hosts can join any IPv6 multicast group.
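
For example, the following sketch allows hosts on a port to join only IPv6 multicast group FF1E::101 (the device name Sysname, the port, the group address, VLAN 100, and ACL number 2001 are illustrative only; the IPv6 basic ACL commands shown are an assumption based on common Comware syntax):

```
<Sysname> system-view
[Sysname] acl ipv6 number 2001
[Sysname-acl6-basic-2001] rule permit source ff1e::101 128
[Sysname-acl6-basic-2001] quit
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] mld-snooping group-policy 2001 vlan 100
```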

 

3.6.3  Configuring IPv6 Multicast Source Port Filtering

With the IPv6 multicast source port filtering feature enabled on a port, the port can be connected to IPv6 multicast receivers only, not to multicast sources, because the port blocks all IPv6 multicast data packets while permitting multicast protocol packets to pass.

If this feature is disabled on a port, the port can be connected with both multicast sources and IPv6 multicast receivers.

I. Configuring IPv6 multicast source port filtering globally

Follow these steps to configure IPv6 multicast source port filtering globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Enable IPv6 multicast source port filtering

source-deny port interface-list

Required

Disabled by default

 

II. Configuring IPv6 multicast source port filtering on a port or a group of ports

Follow these steps to configure IPv6 multicast source port filtering on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Enable IPv6 multicast source port filtering

mld-snooping source-deny

Required

Disabled by default
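
For example, the following sketch enables IPv6 multicast source port filtering on a single port (the device name Sysname and the port are illustrative only):

```
<Sysname> system-view
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] mld-snooping source-deny
```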

 

&  Note:

When enabled to filter IPv6 multicast data based on the source ports, the device is automatically enabled to filter IPv4 multicast data based on the source ports.

 

3.6.4  Configuring Dropping Unknown IPv6 Multicast Data

Unknown IPv6 multicast data refers to IPv6 multicast data for which no forwarding entries exist in the MLD Snooping forwarding table. When the switch receives such IPv6 multicast traffic:

l           With the function of dropping unknown IPv6 multicast data enabled, the switch drops all unknown IPv6 multicast data received.

l           With the function of dropping unknown IPv6 multicast data disabled, the switch floods unknown IPv6 multicast data in the VLAN to which the unknown IPv6 multicast data belongs.

Follow these steps to enable dropping unknown IPv6 multicast data in a VLAN:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter VLAN view

vlan vlan-id

Enable dropping unknown IPv6 multicast data

mld-snooping drop-unknown

Required

Disabled by default
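
For example, the following sketch enables dropping unknown IPv6 multicast data in VLAN 100 (the device name Sysname and the VLAN are illustrative only):

```
<Sysname> system-view
[Sysname] vlan 100
[Sysname-vlan100] mld-snooping drop-unknown
```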

 

&  Note:

When enabled to drop unknown IPv6 multicast data, the device is automatically enabled to drop unknown IPv4 multicast data.

 

3.6.5  Configuring MLD Report Suppression

When a Layer 2 device receives an MLD report from an IPv6 multicast group member, the Layer 2 device forwards the message to the Layer 3 device directly connected with it. Thus, when multiple members belonging to an IPv6 multicast group exist on the Layer 2 device, the Layer 3 device directly connected with it will receive duplicate MLD reports from these members.

With the MLD report suppression function enabled, within each query interval the Layer 2 device forwards only the first MLD report for an IPv6 multicast group to the Layer 3 device, and does not forward the subsequent MLD reports for the same IPv6 multicast group. This helps reduce the number of packets being transmitted over the network.

Follow these steps to configure MLD report suppression:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Enable MLD report suppression

report-aggregation

Optional

Enabled by default
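
MLD report suppression is enabled by default; the following sketch shows how to re-enable it after it has been disabled (the device name Sysname is illustrative only):

```
<Sysname> system-view
[Sysname] mld-snooping
[Sysname-mld-snooping] report-aggregation
```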

 

3.6.6  Configuring Maximum Multicast Groups that Can Be Joined on a Port

By configuring the maximum number of IPv6 multicast groups that can be joined on a port or a group of ports, you can limit the number of multicast programs available to VOD users, thus to control the traffic on the port.

Follow these steps to configure the maximum number of IPv6 multicast groups that can be joined on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure the maximum number of IPv6 multicast groups that can be joined on a port

mld-snooping group-limit limit [ vlan vlan-list ]

Optional

The default is 512.
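
For example, the following sketch limits a port to 10 IPv6 multicast groups in VLAN 100 (the device name Sysname, the port, the VLAN, and the limit of 10 are illustrative only):

```
<Sysname> system-view
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] mld-snooping group-limit 10 vlan 100
```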

 

&  Note:

l      When the number of IPv6 multicast groups joined on a port reaches the configured maximum, the system deletes all the forwarding entries pertinent to that port from the MLD Snooping forwarding table, and the hosts on this port need to join IPv6 multicast groups again.

l      If static or simulated joins have been configured on a port, however, when the number of IPv6 multicast groups on the port exceeds the configured threshold, the system deletes all the forwarding entries pertinent to that port from the MLD Snooping forwarding table and applies the static or simulated joins again, until the number of IPv6 multicast groups joined by the port falls back within the configured threshold.

 

3.6.7  Configuring IPv6 Multicast Group Replacement

For some special reasons, the number of IPv6 multicast groups passing through a switch or port may exceed the limit configured for the switch or the port. In addition, in some specific applications, an IPv6 multicast group newly joined on the switch needs to replace an existing IPv6 multicast group automatically. A typical example is “channel switching”: by joining the new IPv6 multicast group, a user automatically switches from the current IPv6 multicast group to the new one.

To address such situations, you can enable the IPv6 multicast group replacement function on the switch or on certain ports. When the number of IPv6 multicast groups a switch or a port has joined exceeds the limit:

l           If the IPv6 multicast group replacement is enabled, the newly joined IPv6 multicast group automatically replaces an existing IPv6 multicast group with the lowest IPv6 address.

l           If the IPv6 multicast group replacement is not enabled, new MLD reports will be automatically discarded.

I. Configuring IPv6 multicast group replacement globally

Follow these steps to configure IPv6 multicast group replacement globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MLD Snooping view

mld-snooping

Configure IPv6 multicast group replacement

overflow-replace [ vlan vlan-list ]

Required

Disabled by default

 

II. Configuring IPv6 multicast group replacement on a port or a group of ports

Follow these steps to configure IPv6 multicast group replacement on a port or a group of ports:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter the corresponding view

Enter Ethernet port view

interface interface-type interface-number

Use either command

Enter port group view

port-group { manual port-group-name | aggregation agg-id }

Configure IPv6 multicast group replacement

mld-snooping overflow-replace [ vlan vlan-list ]

Required

Disabled by default
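
For example, the following sketch enables IPv6 multicast group replacement on a port, after first configuring the group limit that replacement depends on (the device name Sysname, the port, VLAN 100, and the limit of 10 are illustrative only):

```
<Sysname> system-view
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] mld-snooping group-limit 10 vlan 100
[Sysname-GigabitEthernet1/0/1] mld-snooping overflow-replace vlan 100
```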

 

  Caution:

Be sure to configure the maximum number of IPv6 multicast groups allowed on a port (refer to Configuring Maximum Multicast Groups that Can Be Joined on a Port) before configuring IPv6 multicast group replacement. Otherwise, the IPv6 multicast group replacement functionality will not take effect.

 

3.7  Displaying and Maintaining MLD Snooping

To do…

Use the command...

Remarks

View the information about MLD Snooping multicast groups

display mld-snooping group [ vlan vlan-id ] [ verbose ]

Available in any view

View the statistics information of MLD messages learned by MLD Snooping

display mld-snooping statistics

Available in any view

Clear MLD Snooping multicast group information

reset mld-snooping group { ipv6-group-address | all } [ vlan vlan-id ]

Available in user view

Clear the statistics information of all kinds of MLD messages learned by MLD Snooping

reset mld-snooping statistics

Available in user view

 

&  Note:

The reset mld-snooping group command cannot clear MLD Snooping multicast group information for static joins.

 

3.8  MLD Snooping Configuration Examples

3.8.1  Simulated Joining

I. Network requirements

As shown in Figure 3-3, Router A connects to the IPv6 multicast source through GigabitEthernet 1/0/2 and to Switch A through GigabitEthernet 1/0/1. Router A is the MLD querier on the subnet.

Perform the following configuration so that multicast data can be forwarded through GigabitEthernet 1/0/3 and GigabitEthernet 1/0/4 even if Host A and Host B temporarily stop receiving IPv6 multicast data for some unexpected reasons.

II. Network diagram

Figure 3-3 Network diagram for simulated joining configuration

III. Configuration procedure

1)         Enable IPv6 forwarding and configure the IPv6 address of each interface

Enable IPv6 forwarding and configure an IPv6 address and prefix length for each interface as per Figure 3-3. The detailed configuration steps are omitted.

2)         Configure Router A

# Enable IPv6 multicast routing, enable IPv6 PIM-DM on each interface, and enable MLDv1 on GigabitEthernet 1/0/1.

<RouterA> system-view

[RouterA] multicast ipv6 routing-enable

[RouterA] interface GigabitEthernet 1/0/1

[RouterA-GigabitEthernet1/0/1] mld enable

[RouterA-GigabitEthernet1/0/1] pim ipv6 dm

[RouterA-GigabitEthernet1/0/1] quit

[RouterA] interface GigabitEthernet 1/0/2

[RouterA-GigabitEthernet1/0/2] pim ipv6 dm

[RouterA-GigabitEthernet1/0/2] quit

3)         Configure Switch A

# Enable MLD Snooping globally.

<SwitchA> system-view

[SwitchA] mld-snooping

[SwitchA-mld-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 through GigabitEthernet 1/0/4 to this VLAN, and enable MLD Snooping in the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/4

[SwitchA-vlan100] mld-snooping enable

[SwitchA-vlan100] quit

# Enable simulated host joining on GigabitEthernet 1/0/3 and GigabitEthernet 1/0/4.

[SwitchA] interface GigabitEthernet 1/0/3

[SwitchA-GigabitEthernet1/0/3] mld-snooping host-join ff1e::101 vlan 100

[SwitchA-GigabitEthernet1/0/3] quit

[SwitchA] interface GigabitEthernet 1/0/4

[SwitchA-GigabitEthernet1/0/4] mld-snooping host-join ff1e::101 vlan 100

[SwitchA-GigabitEthernet1/0/4] quit

4)         Verify the configuration

# View the detailed information about MLD Snooping multicast groups in VLAN 100 on Switch A.

[SwitchA] display mld-snooping group vlan 100 verbose

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

 

  Port flags: D-Dynamic port, S-Static port, A-Aggregation port, C-Copy port

  Subvlan flags: R-Real VLAN, C-Copy VLAN

  Vlan(id):100.

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

    Router port(s):total 1 port.

            GE1/0/1               (D) ( 00:01:30 )

    IP group(s):the following ip group(s) match to one mac group.

      IP group address:FF1E::101

        (::, FF1E::101):

          Attribute:    Host Port

          Host port(s):total 2 port.

            GE1/0/3               (D) ( 00:03:23 )

            GE1/0/4               (D) ( 00:03:23 )

    MAC group(s):

      MAC group address:3333-0000-1001

          Host port(s):total 2 port.

            GE1/0/3

            GE1/0/4

As shown above, GigabitEthernet 1/0/3 and GigabitEthernet 1/0/4 of Switch A have joined IPv6 multicast group FF1E::101.

3.8.2  Static Router Port Configuration

I. Network requirements

l           As shown in Figure 3-4, Router A connects to an IPv6 multicast source (Source) through GigabitEthernet 1/0/2, and to Switch A through GigabitEthernet 1/0/1.

l           MLD is to run on Router A, and MLD Snooping is to run on Switch A, Switch B and Switch C, with Router A acting as the MLD querier.

l           Suppose STP runs on the network. To avoid data loops, the forwarding path from Switch A to Switch C is blocked under normal conditions, and IPv6 multicast traffic flows to the receivers, Host A and Host C, attached to Switch C, only along the path of Switch A—Switch B—Switch C.

l           Now it is required to configure GigabitEthernet 1/0/3, which connects Switch A to Switch C, as a static router port, so that IPv6 multicast traffic can flow to the receivers nearly uninterrupted along the path of Switch A—Switch C when the path of Switch A—Switch B—Switch C is blocked.

 

&  Note:

If no static router port is configured, when the path of Switch A—Switch B—Switch C is blocked, at least one MLD query-response cycle must be completed before IPv6 multicast data can flow to the receivers along the new path of Switch A—Switch C; namely, IPv6 multicast delivery will be interrupted during this process.

 

II. Network diagram

Figure 3-4 Network diagram for static router port configuration

III. Configuration procedure

1)         Enable IPv6 forwarding and configure the IPv6 address of each interface

Enable IPv6 forwarding and configure an IPv6 address and prefix length for each interface as per Figure 3-4.

2)         Configure Router A

# Enable IPv6 multicast routing, enable IPv6 PIM-DM on each interface, and enable MLD on GigabitEthernet 1/0/1.

<RouterA> system-view

[RouterA] multicast ipv6 routing-enable

[RouterA] interface GigabitEthernet 1/0/1

[RouterA-GigabitEthernet 1/0/1] mld enable

[RouterA-GigabitEthernet 1/0/1] pim ipv6 dm

[RouterA-GigabitEthernet 1/0/1] quit

[RouterA] interface GigabitEthernet 1/0/2

[RouterA-GigabitEthernet 1/0/2] pim ipv6 dm

[RouterA-GigabitEthernet 1/0/2] quit

3)         Configure Switch A

# Enable MLD Snooping globally.

<SwitchA> system-view

[SwitchA] mld-snooping

[SwitchA-mld-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 to this VLAN, and enable MLD Snooping in the VLAN.

[SwitchA] vlan 100

[SwitchA-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/3

[SwitchA-vlan100] mld-snooping enable

[SwitchA-vlan100] quit

# Configure GigabitEthernet 1/0/3 to be a static router port.

[SwitchA] interface GigabitEthernet 1/0/3

[SwitchA-GigabitEthernet 1/0/3] mld-snooping static-router-port vlan 100

[SwitchA-GigabitEthernet 1/0/3] quit

4)         Configure Switch B

# Enable MLD Snooping globally.

<SwitchB> system-view

[SwitchB] mld-snooping

[SwitchB-mld-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 to this VLAN, and enable MLD Snooping in the VLAN.

[SwitchB] vlan 100

[SwitchB-vlan100] port GigabitEthernet 1/0/1 GigabitEthernet 1/0/2

[SwitchB-vlan100] mld-snooping enable

[SwitchB-vlan100] quit

5)         Configure Switch C

# Enable MLD Snooping globally.

<SwitchC> system-view

[SwitchC] mld-snooping

[SwitchC-mld-snooping] quit

# Create VLAN 100, assign GigabitEthernet 1/0/1 through GigabitEthernet 1/0/5 to this VLAN, and enable MLD Snooping in the VLAN.

[SwitchC] vlan 100

[SwitchC-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/5

[SwitchC-vlan100] mld-snooping enable

[SwitchC-vlan100] quit

6)         Verify the configuration

# View the detailed information about MLD Snooping multicast groups in VLAN 100 on Switch A.

[SwitchA] display mld-snooping group vlan 100 verbose

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

 

  Port flags: D-Dynamic port, S-Static port, A-Aggregation port, C-Copy port

  Subvlan flags: R-Real VLAN, C-Copy VLAN

  Vlan(id):100.

    Total 1 IP Group(s).

    Total 1 IP Source(s).

    Total 1 MAC Group(s).

    Router port(s):total 2 port.

            GE1/0/1               (D) ( 00:01:30 )

            GE1/0/3               (S)

    IP group(s):the following ip group(s) match to one mac group.

      IP group address:FF1E::101

        (::, FF1E::101):

          Attribute:    Host Port

          Host port(s):total 1 port.

            GE1/0/2               (D) ( 00:03:23 )

    MAC group(s):

      MAC group address:3333-0000-0101

          Host port(s):total 1 port.

            GE1/0/2

As shown above, GigabitEthernet 1/0/3 of Switch A has become a static router port.

3.8.3  MLD Snooping Querier Configuration

I. Network requirements

l           As shown in Figure 3-5, in a Layer-2-only network environment, Switch C is attached to the multicast source (Source) through GigabitEthernet 1/0/3. At least one receiver is connected to Switch B and to Switch C respectively.

l           MLDv1 is enabled on all the receivers. Switch A, Switch B, and Switch C run MLD Snooping. Switch A acts as the MLD Snooping querier.

II. Network diagram

Figure 3-5 Network diagram for MLD Snooping querier configuration

III. Configuration procedure

1)         Configure switch A

# Enable IPv6 forwarding and enable MLD Snooping globally.

<SwitchA> system-view

[SwitchA] ipv6

[SwitchA] mld-snooping

[SwitchA-mld-snooping] quit

# Create VLAN 100 and add GigabitEthernet 1/0/1 and GigabitEthernet 1/0/2 to VLAN 100.

[SwitchA] vlan 100

[SwitchA-vlan100] port GigabitEthernet 1/0/1 GigabitEthernet 1/0/2

# Enable MLD Snooping in VLAN 100 and configure the MLD-Snooping querier feature.

[SwitchA-vlan100] mld-snooping enable

[SwitchA-vlan100] mld-snooping querier

2)         Configure Switch B

# Enable IPv6 forwarding and enable MLD Snooping globally.

<SwitchB> system-view

[SwitchB] ipv6

[SwitchB] mld-snooping

[SwitchB-mld-snooping] quit

# Create VLAN 100, add GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 into VLAN 100, and enable MLD Snooping in this VLAN.

[SwitchB] vlan 100

[SwitchB-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/3

[SwitchB-vlan100] mld-snooping enable

3)         Configure Switch C

# Enable IPv6 forwarding and enable MLD Snooping globally.

<SwitchC> system-view

[SwitchC] ipv6

[SwitchC] mld-snooping

[SwitchC-mld-snooping] quit

# Create VLAN 100, add GigabitEthernet 1/0/1 through GigabitEthernet 1/0/3 to VLAN 100, and enable MLD Snooping in this VLAN.

[SwitchC] vlan 100

[SwitchC-vlan100] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/3

[SwitchC-vlan100] mld-snooping enable

4)         Verify the configuration

# View the MLD message statistics on Switch C.

[SwitchC-vlan100] display mld-snooping statistics

  Received MLD general queries:3.

  Received MLDv1 specific queries:0.

  Received MLDv1 reports:4.

  Received MLD dones:0.

  Sent     MLDv1 specific queries:0.

  Received MLDv2 reports:0.

  Received MLDv2 reports with right and wrong records:0.

  Received MLDv2 specific queries:0.

  Received MLDv2 specific sg queries:0.

  Sent     MLDv2 specific queries:0.

  Sent     MLDv2 specific sg queries:0.

  Received error MLD messages:0.

Switch C received MLD general queries. This means that Switch A works as an MLD-Snooping querier.

3.9  Troubleshooting MLD Snooping

3.9.1  Switch Fails in Layer 2 Multicast Forwarding

I. Symptom

A switch fails to implement Layer 2 multicast forwarding.

II. Analysis

MLD Snooping is not enabled.

III. Solution

1)         Use the display current-configuration command to view the running status of MLD Snooping.

2)         If MLD Snooping is not enabled, use the mld-snooping command to enable MLD Snooping globally, and then use the mld-snooping enable command in VLAN view to enable MLD Snooping in the corresponding VLAN.

3)         If MLD Snooping is disabled only for the corresponding VLAN, just use the mld-snooping enable command in VLAN view to enable MLD Snooping in the corresponding VLAN.

3.9.2  Configured IPv6 Multicast Group Policy Fails to Take Effect

I. Symptom

Although an IPv6 multicast group policy has been configured to allow hosts to join specific IPv6 multicast groups, the hosts can still receive IPv6 multicast data addressed to other groups.

II. Analysis

l           The IPv6 ACL rule is incorrectly configured.

l           The IPv6 multicast group policy is not correctly applied.

l           The function of dropping unknown IPv6 multicast data is not enabled, so unknown IPv6 multicast data is flooded.

l           Certain ports have been configured as static member ports of IPv6 multicast groups, and this configuration conflicts with the configured IPv6 multicast group policy.

III. Solution

1)         Use the display acl ipv6 command to check the configured IPv6 ACL rule. Make sure that the IPv6 ACL rule conforms to the IPv6 multicast group policy to be implemented.

2)         Use the display this command in MLD Snooping view or the corresponding interface view to check whether the correct IPv6 multicast group policy has been applied. If not, use the group-policy or mld-snooping group-policy command to apply the correct IPv6 multicast group policy.

3)         Use the display current-configuration command to check whether the function of dropping unknown IPv6 multicast data is enabled. If not, use the mld-snooping drop-unknown command to enable the function of dropping unknown IPv6 multicast data.

4)         Use the display mld-snooping group command to check whether any port has been configured as a static member port of any IPv6 multicast group. If so, check whether this configuration conflicts with the configured IPv6 multicast group policy. If any conflict exists, remove the port as a static member of the IPv6 multicast group.

 


Chapter 4  Multicast VLAN Configuration

4.1  Introduction to Multicast VLAN

As shown in Figure 4-1, in the traditional multicast programs-on-demand mode, when hosts in different VLANs (Host A, Host B, and Host C) require the multicast programs-on-demand service, Router A needs to forward a separate copy of the multicast data in each VLAN. This not only wastes network bandwidth but also places an extra burden on the Layer 3 device.

Figure 4-1 Before and after multicast VLAN is enabled on the Layer 2 device

To solve this problem, you can enable the multicast VLAN feature on Switch A: configure the VLANs to which these hosts belong as sub-VLANs of a multicast VLAN on the Layer 2 device and enable Layer 2 multicast in the multicast VLAN. After this configuration, Router A replicates the multicast data only within the multicast VLAN instead of forwarding a separate copy to each VLAN. This saves network bandwidth and lessens the burden on the Layer 3 device.
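The saving can be stated as simple arithmetic. The sketch below (illustrative Python, not device code; the function name is invented for this example) counts the copies the Layer 3 device must send toward the Layer 2 switch with and without the multicast VLAN feature:

```python
def copies_from_router(num_receiver_vlans: int, multicast_vlan_enabled: bool) -> int:
    """Copies of each multicast packet the Layer 3 device sends downstream:
    one copy per receiver VLAN without the feature, one copy in total with it."""
    return 1 if multicast_vlan_enabled else num_receiver_vlans

# With three receiver VLANs, as in the scenario described above:
copies_without = copies_from_router(3, False)  # one copy per VLAN
copies_with = copies_from_router(3, True)      # a single copy into the multicast VLAN
```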

4.2  Configuring Multicast VLAN

Follow these steps to configure a multicast VLAN:

To do…                                         Use the command…                            Remarks
Enter system view                              system-view                                 -
Configure a specific VLAN as a multicast VLAN  multicast-vlan vlan-id enable               Required (disabled by default)
Configure sub-VLANs for the multicast VLAN     multicast-vlan vlan-id subvlan vlan-list    Required (no sub-VLAN by default)

 

&  Note:

l      The VLAN to be configured as the multicast VLAN and the VLANs to be configured as sub-VLANs of the multicast VLAN must exist.

l      The number of sub-VLANs of the multicast VLAN must not exceed the system-defined limit (an S5500-EI series Ethernet switch supports a maximum of one multicast VLAN and 127 sub-VLANs).

 

  Caution:

l      You cannot configure any multicast VLAN or sub-VLAN of a multicast VLAN on a device with IP multicast routing enabled.

l      After a VLAN is configured into a multicast VLAN, IGMP Snooping must be enabled in the VLAN before the multicast VLAN feature can be implemented, while it is not necessary to enable IGMP Snooping in the sub-VLANs of the multicast VLAN.

 

4.3  Displaying and Maintaining Multicast VLAN

To do…                                                        Use the command…                     Remarks
Display information about a multicast VLAN and its sub-VLANs  display multicast-vlan [ vlan-id ]   Available in any view

 

4.4  Multicast VLAN Configuration Example

I. Network requirements

l           Router A connects to a multicast source through GigabitEthernet 1/0/2 and to Switch A through GigabitEthernet 1/0/1.

l           IGMP is required on Router A, and IGMP Snooping is required on Switch A. Router A is the IGMP querier.

l           Switch A’s GigabitEthernet 1/0/1 belongs to VLAN 1024, GigabitEthernet 1/0/2 through GigabitEthernet 1/0/4 belong to VLAN 11 through VLAN 13 respectively, and Host A through Host C are attached to GigabitEthernet 1/0/2 through GigabitEthernet 1/0/4 of Switch A.

l           Configure the multicast VLAN feature so that Router A just sends multicast data to VLAN 1024 rather than to each VLAN when the three hosts attached to Switch A need the multicast data.

II. Network diagram

Figure 4-2 Network diagram for multicast VLAN configuration

III. Configuration procedure

1)         Configure an IP address for each interconnecting interface

Configure an IP address and subnet mask for each interface as per Figure 4-2. The detailed configuration steps are omitted here.

2)         Configure Router A

# Enable IP multicast routing, enable PIM-DM on each interface and enable IGMP on GigabitEthernet 1/0/1.

<RouterA> system-view

[RouterA] multicast routing-enable

[RouterA] interface GigabitEthernet 1/0/1

[RouterA-GigabitEthernet 1/0/1] pim dm

[RouterA-GigabitEthernet 1/0/1] igmp enable

[RouterA-GigabitEthernet 1/0/1] quit

[RouterA] interface GigabitEthernet 1/0/2

[RouterA-GigabitEthernet 1/0/2] pim dm

[RouterA-GigabitEthernet 1/0/2] quit

3)         Configure Switch A

# Enable IGMP Snooping globally.

<SwitchA> system-view

[SwitchA] igmp-snooping

[SwitchA-igmp-snooping] quit

# Create VLAN 11 and assign GigabitEthernet 1/0/2 to this VLAN.

[SwitchA] vlan 11

[SwitchA-vlan11] port GigabitEthernet 1/0/2

[SwitchA-vlan11] quit

The configuration for VLAN 12 and VLAN 13 is similar to the configuration for VLAN 11.

# Create VLAN 1024, assign GigabitEthernet 1/0/1 to this VLAN and enable IGMP Snooping in the VLAN.

[SwitchA] vlan 1024

[SwitchA-vlan1024] port GigabitEthernet 1/0/1

[SwitchA-vlan1024] igmp-snooping enable

[SwitchA-vlan1024] quit

# Configure VLAN 1024 as a multicast VLAN and configure VLAN 11 through VLAN 13 as its sub-VLANs.

[SwitchA] multicast-vlan 1024 enable

[SwitchA] multicast-vlan 1024 subvlan 11 to 13

4)         Verify the configuration

# Display information about the multicast VLAN and its sub-VLANs.

[SwitchA] display multicast-vlan

 multicast vlan 1024's subvlan list:

   Vlan 11-13

 


Chapter 5  IPv6 Multicast VLAN Configuration

5.1  Introduction to IPv6 Multicast VLAN

As shown in Figure 5-1, in the traditional IPv6 multicast programs-on-demand mode, when hosts in different VLANs (Host A, Host B, and Host C) require the IPv6 multicast programs-on-demand service, Router A needs to forward a separate copy of the IPv6 multicast data in each VLAN. This not only wastes network bandwidth but also places an extra burden on the Layer 3 device.

Figure 5-1 Before and after IPv6 multicast VLAN is enabled on the Layer 2 device

To solve this problem, you can enable the IPv6 multicast VLAN feature on Switch A: configure the VLANs to which these hosts belong as sub-VLANs of an IPv6 multicast VLAN on the Layer 2 device and enable IPv6 Layer 2 multicast in the IPv6 multicast VLAN. After this configuration, Router A replicates the IPv6 multicast data only within the IPv6 multicast VLAN instead of forwarding a separate copy to each VLAN. This saves network bandwidth and lessens the burden on the Layer 3 device.

5.2  Configuring IPv6 Multicast VLAN

Follow these steps to configure an IPv6 multicast VLAN:

To do…                                               Use the command…                               Remarks
Enter system view                                    system-view                                    -
Configure a specific VLAN as an IPv6 multicast VLAN  multicast-vlan ipv6 vlan-id enable             Required (by default, no VLAN is an IPv6 multicast VLAN)
Configure sub-VLANs for the IPv6 multicast VLAN      multicast-vlan ipv6 vlan-id subvlan vlan-list  Required (no sub-VLANs by default)

 

&  Note:

l      The VLAN to be configured as an IPv6 multicast VLAN and the VLANs to be configured as sub-VLANs of the IPv6 multicast VLAN must exist.

l      The total number of sub-VLANs of an IPv6 multicast VLAN must not exceed the system-defined limit (an S5500-EI series Ethernet switch supports a maximum of one IPv6 multicast VLAN and 127 sub-VLANs).

 

  Caution:

l      You cannot enable IPv6 multicast VLAN on a device with IPv6 multicast routing enabled.

l      After a VLAN is configured into an IPv6 multicast VLAN, MLD Snooping must be enabled in the VLAN before the IPv6 multicast VLAN feature can be implemented, while it is not necessary to enable MLD Snooping in the sub-VLANs of the IPv6 multicast VLAN.

 

5.3  Displaying and Maintaining IPv6 Multicast VLAN

To do…                                                              Use the command…                          Remarks
Display information about an IPv6 multicast VLAN and its sub-VLANs  display multicast-vlan ipv6 [ vlan-id ]   Available in any view

 

5.4  IPv6 Multicast VLAN Configuration Examples

I. Network requirements

l           As shown in Figure 5-2, Router A connects to an IPv6 multicast source (Source) through GigabitEthernet 1/0/2, and to Switch A through GigabitEthernet 1/0/1.

l           Router A is an IPv6 multicast router while Switch A is a Layer 2 switch. Router A acts as the MLD querier on the subnet.

l           Switch A’s GigabitEthernet 1/0/1 belongs to VLAN 1024, GigabitEthernet 1/0/2 through GigabitEthernet 1/0/4 belong to VLAN 11 through VLAN 13 respectively, and Host A through Host C are attached to GigabitEthernet 1/0/2 through GigabitEthernet 1/0/4 of Switch A.

l           Configure the IPv6 multicast VLAN feature so that Router A just sends IPv6 multicast data to VLAN 1024 rather than to each VLAN when the three hosts attached to Switch A need the IPv6 multicast data.

II. Network diagram

Figure 5-2 Network diagram for IPv6 multicast VLAN configuration

III. Configuration procedure

1)         Enable IPv6 forwarding and configure IPv6 addresses of the interfaces of each device.

Enable IPv6 forwarding and configure the IPv6 address and address prefix for each interface as per Figure 5-2. The detailed configuration steps are omitted here.

2)         Configure Router A

# Enable IPv6 multicast routing, enable IPv6 PIM-DM on each interface, and enable MLD on GigabitEthernet 1/0/1.

<RouterA> system-view

[RouterA] multicast ipv6 routing-enable

[RouterA] interface GigabitEthernet 1/0/1

[RouterA-GigabitEthernet1/0/1] pim ipv6 dm

[RouterA-GigabitEthernet1/0/1] mld enable

[RouterA-GigabitEthernet1/0/1] quit

[RouterA] interface GigabitEthernet1/0/2

[RouterA-GigabitEthernet1/0/2] pim ipv6 dm

[RouterA-GigabitEthernet1/0/2] quit

3)         Configure Switch A

# Enable MLD Snooping globally.

<SwitchA> system-view

[SwitchA] mld-snooping

[SwitchA-mld-snooping] quit

# Create VLAN 11 and add GigabitEthernet 1/0/2 into VLAN 11.

[SwitchA] vlan 11

[SwitchA-vlan11] port GigabitEthernet 1/0/2

[SwitchA-vlan11] quit

The configuration for VLAN 12 and VLAN 13 is similar. The detailed configuration steps are omitted.

# Create VLAN 1024, add GigabitEthernet 1/0/1 to VLAN 1024, and enable MLD Snooping in this VLAN.

[SwitchA] vlan 1024

[SwitchA-vlan1024] port GigabitEthernet 1/0/1

[SwitchA-vlan1024] mld-snooping enable

[SwitchA-vlan1024] quit

# Configure VLAN 1024 as an IPv6 multicast VLAN, and configure VLAN 11 through VLAN 13 as its sub-VLANs.

[SwitchA] multicast-vlan ipv6 1024 enable

[SwitchA] multicast-vlan ipv6 1024 subvlan 11 to 13

4)         Verify the configuration

# Display IPv6 multicast VLAN and sub-VLAN information on Switch A.

[SwitchA] display multicast-vlan ipv6

 IPv6 multicast vlan 1024's subvlan list:

    vlan 11-13

 


Chapter 6  IGMP Configuration

When configuring IGMP, go to the following sections for the information you are interested in:

l           IGMP Overview

l           IGMP Configuration Task List

l           IGMP Configuration Example

l           Troubleshooting IGMP

 

&  Note:

The term “router” in this document refers to a router in a generic sense or a Layer 3 switch running IGMP.

 

6.1  IGMP Overview

As a TCP/IP protocol responsible for IP multicast group member management, the Internet Group Management Protocol (IGMP) is used by IP hosts to establish and maintain their multicast group memberships to immediately neighboring multicast routers.

6.1.1  IGMP Versions

So far, there are three IGMP versions:

l           IGMPv1 (documented in RFC 1112)

l           IGMPv2 (documented in RFC 2236)

l           IGMPv3 (documented in RFC 3376)

All IGMP versions support the Any-Source Multicast (ASM) model. In addition, IGMPv3 can be directly used to implement the Source-Specific Multicast (SSM) model.

6.1.2  Work Mechanism of IGMPv1

IGMPv1 manages multicast group memberships mainly based on the query and response mechanism.

Of multiple multicast routers on the same subnet, all the routers can hear IGMP membership report messages (often referred to as reports) from hosts, but only one router is needed for sending IGMP query messages (often referred to as queries). So, a querier election mechanism is required to determine which router will act as the IGMP querier on the subnet.

In IGMPv1, the designated router (DR) elected by a multicast routing protocol (such as PIM) serves as the IGMP querier.

 

&  Note:

For more information about DR, refer to DR election.

 

Figure 6-1 Joining multicast groups

Assume that Host B and Host C are expected to receive multicast data addressed to multicast group G1, while Host A is expected to receive multicast data addressed to G2, as shown in Figure 6-1. The basic process by which the hosts join the multicast groups is as follows:

1)         The IGMP querier (Router B in the figure) periodically multicasts IGMP queries (with the destination address of 224.0.0.1) to all hosts and routers on the local subnet.

2)         Upon receiving a query message, Host B or Host C (the delay timer of whichever expires first) sends an IGMP report to the multicast group address of G1, to announce its interest in G1. Assume it is Host B that sends the report message.

3)         Host C, which is on the same subnet, hears the report from Host B for joining G1. Upon hearing the report, Host C will suppress itself from sending a report message for the same multicast group, because the IGMP routers (Router A and Router B) already know that at least one host on the local subnet is interested in G1. This mechanism, known as IGMP report suppression, helps reduce traffic over the local subnet.

4)         At the same time, because Host A is interested in G2, it sends a report to the multicast group address of G2.

5)         Through the above-mentioned query/report process, the IGMP routers learn that members of G1 and G2 are attached to the local subnet, and generate (*, G1) and (*, G2) multicast forwarding entries, which will be the basis for subsequent multicast forwarding, where * represents any multicast source.

6)         When the multicast data addressed to G1 or G2 reaches an IGMP router, because the (*, G1) and (*, G2) multicast forwarding entries exist on the IGMP router, the router forwards the multicast data to the local subnet, and then the receivers on the subnet receive the data.

As IGMPv1 does not specifically define a Leave Group message, upon leaving a multicast group, an IGMPv1 host stops sending reports with the destination address being the address of that multicast group. If no member of a multicast group exists on the subnet, the IGMP routers will not receive any report addressed to that multicast group, so the routers will delete the multicast forwarding entries corresponding to that multicast group after a period of time.
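The join-and-age-out behavior just described can be modeled in a few lines of Python (a conceptual sketch only; the class and method names are invented for illustration, and real routers key entries per interface and use timers rather than an explicit age_out call):

```python
class IgmpRouterModel:
    """Minimal model of how IGMPv1 reports create (*, G) forwarding
    entries and how the absence of reports removes them."""

    def __init__(self):
        self.entries = set()             # installed (*, G) entries, keyed by group

    def receive_report(self, group):
        self.entries.add(group)          # at least one member exists: install (*, G)

    def age_out(self, group):
        self.entries.discard(group)      # no reports for a while: delete the entry

    def forwards_to_subnet(self, group):
        return group in self.entries     # data is forwarded only if an entry exists
```

A report installs the (*, G) entry that drives forwarding; when reports stop arriving, the entry ages out and forwarding to the subnet ceases.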

6.1.3  Enhancements Provided by IGMPv2

Compared with IGMPv1, IGMPv2 provides the querier election mechanism and Leave Group mechanism.

I. Querier election mechanism

In IGMPv1, the DR elected by the Layer 3 multicast routing protocol (such as PIM) serves as the querier among multiple routers on the same subnet.

In IGMPv2, an independent querier election mechanism is introduced. The querier election process is as follows:

1)         Initially, every IGMPv2 router assumes itself as the querier and sends IGMP general query messages (often referred to as general queries) to all hosts and routers on the local subnet (the destination address is 224.0.0.1).

2)         Upon hearing a general query, every IGMPv2 router compares the source IP address of the query message with its own interface address. After comparison, the router with the lowest IP address wins the querier election and all other IGMPv2 routers become non-queriers.

3)         All the non-queriers start a timer, known as “other querier present timer”. If a router receives an IGMP query from the querier before the timer expires, it resets this timer; otherwise, it assumes the querier to have timed out and initiates a new querier election process.
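The election rule in step 2 reduces to a numeric comparison of interface addresses. A minimal Python sketch (the function name and addresses are hypothetical):

```python
import ipaddress

def elect_querier(interface_addrs):
    """IGMPv2 querier election: the router whose interface has the
    numerically lowest IP address on the subnet becomes the querier."""
    return min(interface_addrs, key=lambda a: int(ipaddress.IPv4Address(a)))

# Three routers on one subnet; the router at 10.1.1.1 wins the election.
querier = elect_querier(["10.1.1.2", "10.1.1.1", "10.1.1.3"])
```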

II. “Leave group” mechanism

In IGMPv1, when a host leaves a multicast group, it does not send any notification to the multicast router. The multicast router relies on host response timeout to know whether a group no longer has members. This adds to the leave latency.

In IGMPv2, on the other hand, when a host leaves a multicast group:

1)         This host sends a Leave Group message (often referred to as leave message) to all routers (the destination address is 224.0.0.2) on the local subnet.

2)         Upon receiving the leave message, the querier sends a configurable number of group-specific queries to the group being left. The destination address field and group address field of the message are both filled with the address of the multicast group being queried.

3)         One of the remaining members, if any on the subnet, of the group being queried should send a membership report within the maximum response time set in the query messages.

4)         If the querier receives a membership report for the group within the maximum response time, it will maintain the memberships of the group; otherwise, the querier will assume that no hosts on the subnet are still interested in multicast traffic to that group and will stop maintaining the memberships of the group.

6.1.4  Enhancements in IGMPv3

 

&  Note:

The support for the Exclude mode varies with device models.

 

Built upon and compatible with IGMPv1 and IGMPv2, IGMPv3 provides hosts with enhanced control capabilities and enhances the query and report messages.

I. Enhancements in control capability of hosts

IGMPv3 has introduced source filtering modes (Include and Exclude), so that a host not only can join a designated multicast group but also can specify to receive or reject multicast data from a designated multicast source. When a host joins a multicast group:

l           If it needs to receive multicast data from specific sources like S1, S2, …, it sends a report with the Filter-Mode denoted as “Include Sources (S1, S2, …)”.

l           If it needs to reject multicast data from specific sources like S1, S2, …, it sends a report with the Filter-Mode denoted as “Exclude Sources (S1, S2, …)”.

As shown in Figure 6-2, the network comprises two multicast sources, Source 1 (S1) and Source 2 (S2), both of which can send multicast data to multicast group G. Host B is interested only in the multicast data that Source 1 sends to G but not in the data from Source 2.

Figure 6-2 Flow paths of source-and-group-specific multicast traffic

In the case of IGMPv1 or IGMPv2, Host B cannot select multicast sources when it joins multicast group G. Therefore, multicast streams from both Source 1 and Source 2 will flow to Host B whether it needs them or not.

When IGMPv3 is running between the hosts and routers, Host B can explicitly express its interest in the multicast data Source 1 sends to multicast group G (denoted as (S1, G)), rather than the multicast data Source 2 sends to multicast group G (denoted as (S2, G)). Thus, only multicast data from Source 1 will be delivered to Host B.
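The Include/Exclude decision can be expressed as a small predicate (an illustrative sketch, not switch code; the source names are placeholders):

```python
def accepts(filter_mode: str, source_list: set, source: str) -> bool:
    """IGMPv3 source filtering: Include mode accepts only the listed
    sources; Exclude mode accepts every source except the listed ones."""
    if filter_mode == "include":
        return source in source_list
    if filter_mode == "exclude":
        return source not in source_list
    raise ValueError("unknown filter mode: " + filter_mode)

# Host B's state for group G: Include {Source 1}.
accepts("include", {"S1"}, "S1")   # data from Source 1 is delivered
accepts("include", {"S1"}, "S2")   # data from Source 2 is filtered out
```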

II. Enhancements in query and report capabilities

1)         Query message carrying the source addresses

IGMPv3 supports not only general queries (feature of IGMPv1) and group-specific queries (feature of IGMPv2), but also group-and-source-specific queries.

l           A general query does not carry a group address, nor a source address;

l           A group-specific query carries a group address, but no source address;

l           A group-and-source-specific query carries a group address and one or more source addresses.

2)         Reports containing multiple group records

Unlike an IGMPv1 or IGMPv2 report message, an IGMPv3 report message is destined to 224.0.0.22 and contains one or more group records. Each group record contains a multicast group address and a multicast source address list.

Group record types include:

l           IS_IN: The source filtering mode is Include, namely, the report sender requests the multicast data from only the sources defined in the specified multicast source list. If the specified multicast source list is empty, this means that the report sender has left the reported multicast group.

l           IS_EX: The source filtering mode is Exclude, namely, the report sender requests the multicast data from any sources but those defined in the specified multicast source list.

l           TO_IN: The filter mode has changed from Exclude to Include.

l           TO_EX: The filter mode has changed from Include to Exclude.

l           ALLOW: The Source Address fields in this Group Record contain a list of the additional sources that the system wishes to hear from, for packets sent to the specified multicast address. If the change was to an Include source list, these are the addresses that were added to the list; if the change was to an Exclude source list, these are the addresses that were deleted from the list.

l           BLOCK: indicates that the Source Address fields in this Group Record contain a list of the sources that the system no longer wishes to hear from, for packets sent to the specified multicast address. If the change was to an Include source list, these are the addresses that were deleted from the list; if the change was to an Exclude source list, these are the addresses that were added to the list.
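The effect of these record types on a host's per-group state can be sketched as follows (simplified Python, not a conformant implementation: IS_IN/IS_EX are treated like TO_IN/TO_EX here, and real routers also maintain per-source timers):

```python
def apply_group_record(state, record_type, sources):
    """Apply an IGMPv3 group record to a per-group (mode, source_set) pair."""
    mode, current = state
    if record_type in ("IS_IN", "TO_IN"):
        return ("include", set(sources))      # replace state with an Include list
    if record_type in ("IS_EX", "TO_EX"):
        return ("exclude", set(sources))      # replace state with an Exclude list
    if record_type == "ALLOW":                # additional sources to hear from:
        if mode == "include":                 # added to an Include list,
            return (mode, current | set(sources))
        return (mode, current - set(sources)) # deleted from an Exclude list
    if record_type == "BLOCK":                # sources no longer wanted:
        if mode == "include":                 # deleted from an Include list,
            return (mode, current - set(sources))
        return (mode, current | set(sources)) # added to an Exclude list
    raise ValueError("unknown record type: " + record_type)
```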

6.1.5  Protocols and Standards

The following documents describe different IGMP versions:

l           RFC 1112: Host Extensions for IP Multicasting

l           RFC 2236: Internet Group Management Protocol, Version 2

l           RFC 3376: Internet Group Management Protocol, Version 3

6.2  IGMP Configuration Task List

Complete these tasks to configure IGMP:

Task                                                     Description
Configuring Basic Functions of IGMP
  Enabling IGMP                                          Required
  Configuring IGMP Versions                              Optional
  Configuring a Static Member of a Multicast Group       Optional
  Configuring a Multicast Group Filter                   Optional
Adjusting IGMP Performance
  Configuring IGMP Message Options                       Optional
  Configuring IGMP Query and Response Parameters         Optional
  Configuring IGMP Fast Leave Processing                 Optional
 

&  Note:

l      Configurations performed in IGMP view are effective on all interfaces, while configurations performed in interface view are effective on the current interface only.

l      If a feature is not configured for an interface in interface view, the global configuration performed in IGMP view will apply to that interface. If a feature is configured in both IGMP view and interface view, the configuration performed in interface view will be given priority.

 

6.3  Configuring Basic Functions of IGMP

6.3.1  Configuration Prerequisites

Before configuring the basic functions of IGMP, complete the following tasks:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Configure PIM-DM or PIM-SM

Before configuring the basic functions of IGMP, prepare the following data:

l           IGMP version

l           Multicast group and multicast source addresses for static group member configuration

l           ACL rule for multicast group filtering

6.3.2  Enabling IGMP

First, IGMP must be enabled on the interface on which the multicast group memberships are to be established and maintained.

Follow these steps to enable IGMP:

To do...                     Use the command...                          Description
Enter system view            system-view                                 -
Enable IP multicast routing  multicast routing-enable                    Required (disabled by default)
Enter interface view         interface interface-type interface-number   -
Enable IGMP                  igmp enable                                 Required (disabled by default)

 

6.3.3  Configuring IGMP Versions

Because message formats vary among IGMP versions, the same IGMP version should be configured for all routers on the same subnet before IGMP can work properly.

I. Configuring an IGMP version globally

Follow these steps to configure an IGMP version globally:

To do...                            Use the command...      Description
Enter system view                   system-view             -
Enter IGMP view                     igmp                    -
Configure an IGMP version globally  version version-number  Optional (IGMPv2 by default)

 

II. Configuring an IGMP version on an interface

Follow these steps to configure an IGMP version on an interface:

To do...                                    Use the command...                          Description
Enter system view                           system-view                                 -
Enter interface view                        interface interface-type interface-number   -
Configure an IGMP version on the interface  igmp version version-number                 Optional (IGMPv2 by default)

 

6.3.4  Configuring a Static Member of a Multicast Group

After an interface is configured as a static member of a multicast group, it will act as a virtual member of the multicast group to receive multicast data addressed to that multicast group for the purpose of testing multicast data forwarding.

Follow these steps to configure an interface as a statically connected member of a multicast group:

To do...                                                         Use the command...                                          Description
Enter system view                                                system-view                                                 -
Enter interface view                                             interface interface-type interface-number                   -
Configure the interface as a static member of a multicast group  igmp static-group group-address [ source source-address ]   Required (by default, an interface is not a static member of any multicast group)

 

&  Note:

l      Before you can configure an interface of a PIM-SM device as a static member of a multicast group, if the interface is PIM-SM enabled, it must be a PIM-SM DR; if this interface is IGMP enabled but not PIM-SM enabled, it must be an IGMP querier.

l      As a static member of a multicast group, an interface does not respond to the queries from the IGMP querier, nor does it send an unsolicited IGMP membership report or an IGMP leave group message when it joins or leaves a multicast group. In other words, the interface will not become a real member of the multicast group.

 

6.3.5  Configuring a Multicast Group Filter

You can configure a multicast group filter in IGMP Snooping. For details, see Configuring a Multicast Group Filter.

6.4  Adjusting IGMP Performance

 

&  Note:

For the configuration tasks described in this section:

l      Configurations performed in IGMP view are effective on all interfaces, while configurations performed in interface view are effective on the current interface only.

l      If the same feature is configured in both IGMP view and interface view, the configuration performed in interface view is given priority, regardless of the configuration sequence.

 

6.4.1  Configuration Prerequisites

Before adjusting IGMP performance, complete the following tasks:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Configure basic functions of IGMP

Before adjusting IGMP performance, prepare the following data:

l           IGMP general query interval

l           IGMP querier’s robustness variable

l           Maximum response time for IGMP general queries

l           IGMP last-member query interval

l           Other querier present interval

6.4.2  Configuring IGMP Message Options

Because IGMPv2 and IGMPv3 use group-specific and group-and-source-specific queries, and because multicast groups change dynamically, a device cannot join every multicast group. Therefore, when an IGMP router receives a multicast packet but cannot locate the outgoing interface for the destination multicast group, it needs the Router-Alert option to pass the packet to the upper-layer protocol for processing. For details about the Router-Alert option, refer to RFC 2113.

An IGMP message is processed differently depending on whether it carries the Router-Alert option in its IP header:

l           By default, for the consideration of compatibility, the device does not check the Router-Alert option, namely it processes all the IGMP messages it received. In this case, IGMP messages are directly passed to the upper layer protocol, no matter whether the IGMP messages carry the Router-Alert option or not.

l           To enhance the device performance and avoid unnecessary costs, and also for the consideration of protocol security, you can configure the device to discard IGMP messages that do not carry the Router-Alert option.
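For reference, the check a device performs when configured to require the option amounts to scanning the IPv4 options field for option type 148, the Router Alert option defined in RFC 2113. The parser below is a simplified sketch, not the switch's actual implementation:

```python
ROUTER_ALERT = 148  # RFC 2113: copied flag 1, option class 0, option number 20

def has_router_alert(ip_options: bytes) -> bool:
    """Scan an IPv4 options field for the Router Alert option.
    Simplified TLV walker: End-of-List (0) and No-Op (1) are single bytes;
    every other option carries a length byte used to skip it."""
    i = 0
    while i < len(ip_options):
        opt = ip_options[i]
        if opt == 0:                     # End of Option List
            return False
        if opt == 1:                     # No-Operation (single-byte option)
            i += 1
            continue
        if opt == ROUTER_ALERT:
            return True
        if i + 1 >= len(ip_options):     # malformed: missing length byte
            return False
        length = ip_options[i + 1]
        if length < 2:                   # malformed: would loop forever
            return False
        i += length                      # skip this option by its length
    return False
```

As carried in IGMP messages, the option is encoded as type 148, length 4, value 0 (meaning "examine packet").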

I. Configuring IGMP packet options globally

Follow these steps to configure IGMP packet options globally:

To do...                                                                             Use the command...     Description
Enter system view                                                                    system-view            -
Enter IGMP view                                                                      igmp                   -
Configure the router to discard any IGMP message without the Router-Alert option     require-router-alert   Optional (by default, the device does not check the Router-Alert option)
Enable the insertion of the Router-Alert option into IGMP messages                   send-router-alert      Optional (by default, IGMP messages carry the Router-Alert option)

 

II. Configuring IGMP packet options on an interface

Follow these steps to configure IGMP packet options on an interface:

To do...                                                                             Use the command...          Description
Enter system view                                                                    system-view                 -
Enter interface view                                                                 interface interface-type interface-number   -
Configure the interface to discard any IGMP message without the Router-Alert option  igmp require-router-alert   Optional (by default, the device does not check the Router-Alert option)
Enable the insertion of the Router-Alert option into IGMP messages                   igmp send-router-alert      Optional (by default, IGMP messages carry the Router-Alert option)

 

6.4.3  Configuring IGMP Query and Response Parameters

The IGMP querier periodically sends IGMP general queries at the “IGMP query interval” to determine whether any multicast group members exist on the network. You can tune the IGMP general query interval based on the actual network conditions.

On startup, the IGMP querier sends “startup query count” IGMP general queries at the “startup query interval”, which is 1/4 of the “IGMP query interval”. Upon receiving an IGMP leave message, the IGMP querier sends “last member query count” IGMP group-specific queries at the “IGMP last member query interval”. Both startup query count and last member query count are set to the IGMP querier robustness variable.

IGMP is robust to “robustness variable minus 1” packet losses on a network. Therefore, a greater value of the robustness variable makes the IGMP querier “more robust”, but results in a longer multicast group timeout time.

Upon receiving an IGMP query (general query or group-specific query), a host starts a delay timer for each multicast group it has joined. This timer is initialized to a random value in the range of 0 to the maximum response time, which is derived from the Max Response Time field in the IGMP query. When the timer counts down to 0, the host sends an IGMP report to the corresponding multicast group.

An appropriate setting of the maximum response time for IGMP queries allows hosts to respond to queries quickly, and prevents bursts of IGMP traffic caused by a large number of hosts sending reports at the same time when their delay timers expire simultaneously.

l           For IGMP general queries, you can configure the maximum response time to fill their Max Response Time field.

l           For IGMP group-specific queries, you can configure the IGMP last member query interval to fill their Max Response Time field. Namely, for IGMP group-specific queries, the maximum response time equals the IGMP last member query interval.

When multiple multicast routers exist on the same subnet, the IGMP querier is responsible for sending IGMP queries. If a non-querier router receives no IGMP query from the querier within the “other querier present interval”, it will assume the querier to have expired and a new querier election process is launched; otherwise, the non-querier router will reset its “other querier present timer”.

I. Configuring IGMP query and response parameters globally

Follow these steps to configure IGMP query and response parameters globally:

To do...

Use the command...

Description

Enter system view

system-view

Enter IGMP view

igmp

Configure the IGMP query interval

timer query interval

Optional

60 seconds by default

Configure the IGMP querier robustness variable

robust-count robust-value

Optional

2 by default

Configure the maximum response time for IGMP general queries

max-response-time interval

Optional

10 seconds by default

Configure the IGMP last member query interval

last-member-query-interval interval

Optional

1 second by default

Configure the other querier present interval

timer other-querier-present interval

Optional

For the system default, see “Note” below.

 
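The global parameters above can be tuned together. A sketch follows; the values are illustrative only, chosen so that the query interval exceeds the maximum response time, as the Caution later in this section requires:

```
<Sysname> system-view
[Sysname] igmp
[Sysname-igmp] timer query 125
[Sysname-igmp] robust-count 3
[Sysname-igmp] max-response-time 20
[Sysname-igmp] last-member-query-interval 2
```

With these values and no static setting, the other querier present interval defaults to 125 × 3 + 20 / 2 = 385 seconds, which is greater than the query interval as required.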

II. Configuring IGMP query and response parameters on an interface

Follow these steps to configure IGMP query and response parameters on an interface:

To do...

Use the command...

Description

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Configure IGMP query interval

igmp timer query interval

Optional

60 seconds by default

Configure the IGMP querier robustness variable

igmp robust-count robust-value

Optional

2 by default

Configure the maximum response time for IGMP general queries

igmp max-response-time interval

Optional

10 seconds by default

Configure the IGMP last member query interval

igmp last-member-query-interval interval

Optional

1 second by default

Configure the other querier present interval

igmp timer other-querier-present interval

Optional

For the system default, see “Note” below.

 
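The same parameters can be tuned per interface with the igmp-prefixed forms of the commands above (the device name, VLAN-interface number, and values are illustrative):

```
<Sysname> system-view
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] igmp timer query 125
[Sysname-Vlan-interface100] igmp robust-count 3
[Sysname-Vlan-interface100] igmp max-response-time 20
```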

&  Note:

l      If not statically configured, the other querier present interval is [ IGMP query interval ] × [ IGMP robustness variable ] + [ maximum response time for IGMP general queries ] / 2. By default, the values of these three parameters are 60 (seconds), 2, and 10 (seconds) respectively, so the default value of the other querier present interval = (60 × 2) + (10 / 2) = 125 (seconds).

l      If statically configured, the other querier present interval takes the configured value.

 

  Caution:

l      Make sure that the other querier present interval is greater than the IGMP query interval; otherwise the IGMP querier may change frequently on the network.

l      Make sure that the IGMP query interval is greater than the maximum response time for IGMP general queries; otherwise, multicast group members may be wrongly removed.

l      The configurations of the maximum response time for IGMP general queries, the IGMP last member query interval and the IGMP other querier present interval are effective only for IGMPv2 or IGMPv3.

 

6.4.4  Configuring IGMP Fast Leave Processing

Fast leave processing is implemented by IGMP Snooping. For details, see Configuring Fast Leave Processing.

6.5  Displaying and Maintaining IGMP

To do...

Use the command...

Description

View IGMP multicast group information

display igmp group [ group-address | interface interface-type interface-number ] [ static | verbose ]

Available in any view

View IGMP layer 2 port information

display igmp group port-info [ vlan vlan-id ] [ verbose ]

Available in any view

View IGMP configuration and running information

display igmp interface [ interface-type interface-number ] [ verbose ]

Available in any view

View routing information in the IGMP routing table

display igmp routing-table [ source-address [ mask { mask | mask-length } ] | group-address [ mask { mask | mask-length } ] ] *

Available in any view

Clear IGMP forwarding entries

reset igmp group { all | interface interface-type interface-number { all | group-address [ mask { mask | mask-length } ] [ source-address [ mask { mask | mask-length } ] ] } }

Available in user view

Clear Layer 2 port information about IGMP multicast groups

reset igmp group port-info { all | group-address } [ vlan vlan-id ]

Available in user view

 
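For quick verification, the display commands above can be run from any view. A sketch (the VLAN ID is illustrative):

```
<Sysname> display igmp interface
<Sysname> display igmp group verbose
<Sysname> display igmp group port-info vlan 100
```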

&  Note:

l      The reset igmp group command cannot clear the IGMP forwarding entries of static joins.

l      The reset igmp group port-info command cannot clear Layer 2 port information about IGMP multicast groups of static joins.

 

  Caution:

The reset igmp group command may cause an interruption of receivers’ reception of multicast data.

 

6.6  IGMP Configuration Example

I. Network requirements

l           Receivers receive VOD information through the multicast mode. Receivers of different organizations form stub networks N1 and N2, and Host A and Host C are receivers in N1 and N2 respectively.

l           Switch A in the PIM network connects to N1, and both Switch B and Switch C connect to N2.

l           Switch A connects to N1 through VLAN-interface 100, and to other devices in the PIM network through VLAN-interface 101.

l           Switch B and Switch C connect to N2 through their respective VLAN-interface 200, and to other devices in the PIM network through VLAN-interface 201 and VLAN-interface 202 respectively.

l           IGMPv3 is required between Switch A and N1. IGMPv2 is required between the other two switches and N2, with Switch B as the IGMP querier.

II. Network diagram

Figure 6-3 Network diagram for IGMP configuration

III. Configuration procedure

1)         Configure the IP addresses of the switch interfaces and configure a unicast routing protocol

Configure the IP address and subnet mask of each interface as per Figure 6-3. The detailed configuration steps are omitted here.

Configure OSPF for interoperation among the switches, so that Switch A, Switch B and Switch C on the PIM network can interoperate at the network layer and dynamically update their routing information through a unicast routing protocol. The detailed configuration steps are omitted here.

2)         Enable IP multicast routing, and enable IGMP on the host-side interfaces

# Enable IP multicast routing on Switch A, and enable IGMP (version 3) on VLAN-interface 100.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] igmp version 3

[SwitchA-Vlan-interface100] quit

# Enable IP multicast routing on Switch B, and enable IGMP (version 2) on VLAN-interface 200.

<SwitchB> system-view

[SwitchB] multicast routing-enable

[SwitchB] interface vlan-interface 200

[SwitchB-Vlan-interface200] igmp enable

[SwitchB-Vlan-interface200] igmp version 2

[SwitchB-Vlan-interface200] quit

# Enable IP multicast routing on Switch C, and enable IGMP (version 2) on VLAN-interface 200.

<SwitchC> system-view

[SwitchC] multicast routing-enable

[SwitchC] interface vlan-interface 200

[SwitchC-Vlan-interface200] igmp enable

[SwitchC-Vlan-interface200] igmp version 2

[SwitchC-Vlan-interface200] quit

3)         Verify the configuration

Carry out the display igmp interface command to view the IGMP configuration and running status on each switch interface. For example:

# View IGMP information on VLAN-interface 200 of Switch B.

[SwitchB] display igmp interface vlan-interface 200

Vlan-interface200(10.110.2.1):

   IGMP is enabled

   Current IGMP version is 2

   Value of query interval for IGMP(in seconds): 60

   Value of other querier timeout for IGMP(in seconds): 125

   Value of maximum query response time for IGMP(in seconds): 10

   Querier for IGMP: 10.110.2.1 (this router)

  Total 1 IGMP Group reported

6.7  Troubleshooting IGMP

6.7.1  No Member Information on the Receiver-Side Router

I. Symptom

When a host sends a report for joining multicast group G, there is no member information of the multicast group G on the router closest to that host.

II. Analysis

l           The correctness of networking and interface connections directly affects the generation of group member information.

l           Multicast routing must be enabled on the router.

l           If the igmp group-policy command has been configured on the interface, the interface cannot receive report messages that fail to pass filtering.

III. Solution

1)         Check that the networking is correct and interface connections are correct.

2)         Check that the interfaces and the host are on the same subnet. Use the display current-configuration interface command to view the IP address of the interface.

3)         Check that multicast routing is enabled. Carry out the display current-configuration command to check whether the multicast routing-enable command has been executed. If not, carry out the multicast routing-enable command in system view to enable IP multicast routing. In addition, check that IGMP is enabled on the corresponding interfaces.

4)         Check that the interface is in normal state and the correct IP address has been configured. Carry out the display igmp interface command to view the interface information. If no interface information is output, this means the interface is abnormal. Typically this is because the shutdown command has been executed on the interface, or the interface connection is incorrect, or no correct IP address has been configured on the interface.

5)         Check that no ACL rule has been configured to restrict the host from joining the multicast group G. Carry out the display current-configuration interface command to check whether the igmp group-policy command has been executed. If the host is restricted from joining the multicast group G, the ACL rule must be modified to allow receiving the reports for the multicast group G.
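A minimal remediation sketch for steps 3) and 4) above, assuming the receiver-side interface is VLAN-interface 100 (the device name and interface number are illustrative):

```
<Sysname> system-view
[Sysname] multicast routing-enable
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] igmp enable
[Sysname-Vlan-interface100] quit
[Sysname] quit
<Sysname> display igmp interface vlan-interface 100
```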

6.7.2  Inconsistent Memberships on Routers on the Same Subnet

I. Symptom

Different memberships are maintained on different IGMP routers on the same subnet.

II. Analysis

l           A router running IGMP maintains multiple parameters for each interface, and these parameters influence one another, forming very complicated relationships. Inconsistent IGMP interface parameter configurations for routers on the same subnet will surely result in inconsistency of memberships.

l           In addition, although IGMP routers are compatible with hosts, all routers on the same subnet must run the same version of IGMP. Inconsistent IGMP versions running on routers on the same subnet will also lead to inconsistency of IGMP memberships.

III. Solution

1)         Check the IGMP configuration. Carry out the display current-configuration command to view the IGMP configuration information on the interfaces.

2)         Carry out the display igmp interface command on all routers on the same subnet to check the IGMP-related timer settings. Make sure that the settings are consistent on all the routers.

3)         Use the display igmp interface command to check whether the routers are running the same version of IGMP.

 


Chapter 7  PIM Configuration

When configuring PIM, go to these sections for information you are interested in:

l           PIM Overview

l           Configuring PIM-DM

l           Configuring PIM-SM

l           Configuring PIM-SSM

l           Configuring PIM Common Information

l           Displaying and Maintaining PIM

l           PIM Configuration Examples

l           Troubleshooting PIM Configuration

 

&  Note:

The term “router” in this document refers to a router in a generic sense or a Layer 3 switch running the PIM protocol.

 

7.1  PIM Overview

Protocol Independent Multicast (PIM) provides IP multicast forwarding by leveraging static routes or unicast routing tables generated by any unicast routing protocol, such as routing information protocol (RIP), open shortest path first (OSPF), intermediate system to intermediate system (IS-IS), or border gateway protocol (BGP). Independent of the unicast routing protocols running on the device, multicast routing can be implemented as long as the corresponding multicast routing entries are created through unicast routes. PIM uses the reverse path forwarding (RPF) mechanism to implement multicast forwarding. When a multicast packet arrives on an interface of the device, it is subject to an RPF check. If the RPF check succeeds, the device creates the corresponding routing entry and forwards the packet; if the RPF check fails, the device discards the packet.

Based on the routing mechanism, PIM falls into two modes:

l           Protocol Independent Multicast–Dense Mode (PIM-DM), and

l           Protocol Independent Multicast–Sparse Mode (PIM-SM).

 

&  Note:

To facilitate description, a network comprising PIM-capable routers is referred to as a “PIM domain” in this document.

 

7.1.1  Introduction to PIM-DM

PIM-DM is a type of dense mode multicast protocol. It uses the “push mode” for multicast forwarding, and is suitable for small-sized networks with densely distributed multicast members.

The basic implementation of PIM-DM is as follows:

l           PIM-DM assumes that at least one multicast group member exists on each subnet of a network, and therefore multicast data is flooded to all nodes on the network. Then, branches without receivers downstream are pruned from the forwarding tree, leaving only those branches that contain receivers. This “flood and prune” process takes place periodically: pruned branches resume multicast forwarding when the pruned state times out, data is re-flooded down these branches, and branches that still have no receivers are pruned again.

l           When a new receiver on a previously pruned branch joins a multicast group, to reduce the join latency, PIM-DM uses a graft mechanism to resume data forwarding to that branch.

Generally speaking, the multicast forwarding path is a source tree, namely a forwarding tree with the multicast source as its “root” and multicast group members as its “leaves”. Because the source tree is the shortest path from the multicast source to the receivers, it is also called shortest path tree (SPT).

7.1.2  How PIM-DM Works

The working mechanism of PIM-DM is summarized as follows:

l           Neighbor discovery

l           SPT building

l           Graft

l           Assert

I. Neighbor discovery

In a PIM domain, a PIM router discovers PIM neighbors, maintains PIM neighboring relationships with other routers, and builds and maintains SPTs by periodically multicasting hello messages to all other PIM routers (224.0.0.13).

 

&  Note:

Every PIM-enabled interface on a router sends hello messages periodically, and thus learns the PIM neighbor information pertinent to that interface.

 

II. SPT establishment

The process of building an SPT is the process of “flood and prune”.

1)         In a PIM-DM domain, when a multicast source S sends multicast data to a multicast group G, the multicast packet is first flooded throughout the domain: The router first performs RPF check on the multicast packet. If the packet passes the RPF check, the router creates an (S, G) entry and forwards the data to all downstream nodes in the network. In the flooding process, an (S, G) entry is created on all the routers in the PIM-DM domain.

2)         Then, nodes without receivers downstream are pruned: A router having no receivers downstream sends a prune message to the upstream node to “tell” the upstream node to delete the corresponding interface from the outgoing interface list in the (S, G) entry and stop forwarding subsequent packets addressed to that multicast group down to this node.

 

&  Note:

l      An (S, G) entry contains the multicast source address S, multicast group address G, outgoing interface list, and incoming interface.

l      For a given multicast stream, the interface that receives the multicast stream is referred to as “upstream”, and the interfaces that forward the multicast stream are referred to as “downstream”.

 

A prune process is first initiated by a leaf router. As shown in Figure 7-1, a router without any receiver attached to it (the router connected with Host A, for example) sends a prune message, and this prune process goes on until only necessary branches are left in the PIM-DM domain. These branches constitute the SPT.

Figure 7-1 SPT establishment

The “flood and prune” process takes place periodically. A pruned state timeout mechanism is provided. A pruned branch restarts multicast forwarding when the pruned state times out and then is pruned again when it no longer has any multicast receiver.

 

&  Note:

Pruning has a similar implementation in PIM-SM.

 

III. Graft

When a host attached to a pruned node joins a multicast group, to reduce the join latency, PIM-DM uses a graft mechanism to resume data forwarding to that branch. The process is as follows:

1)         The node that needs to receive multicast data sends a graft message hop by hop toward the source, as a request to join the SPT again.

2)         Upon receiving this graft message, the upstream node puts the interface on which the graft was received into the forwarding state and responds with a graft-ack message to the graft sender.

3)         If the node that sent a graft message does not receive a graft-ack message from its upstream node, it will keep sending graft messages at a configurable interval until it receives an acknowledgment from its upstream node.

IV. Assert

If multiple multicast routers exist on a multi-access subnet, duplicate packets may flow to the same subnet. To shut off duplicate flows, the assert mechanism is used for election of a single multicast forwarder on a multi-access network.

Figure 7-2 Assert mechanism

As shown in Figure 7-2, after Router A and Router B receive an (S, G) packet from the upstream node, they both forward the packet to the local subnet. As a result, the downstream node Router C receives two identical multicast packets, and both Router A and Router B, on their own local interface, receive a duplicate packet forwarded by the other. Upon detecting this condition, both routers send an assert message to all PIM routers (224.0.0.13) through the interface on which the packet was received. The assert message contains the following information: the multicast source address (S), the multicast group address (G), and the preference and metric of the unicast route to the source. By comparing these parameters, either Router A or Router B becomes the unique forwarder of the subsequent (S, G) packets on the multi-access subnet. The comparison process is as follows:

1)         The router with a higher unicast route preference to the source wins;

2)         If both routers have the same unicast route preference to the source, the router with a smaller metric to the source wins;

3)         If there is a tie in route metric to the source, the router with a higher IP address of the local interface wins.

7.1.3  Introduction to PIM-SM

PIM-DM uses the “flood and prune” principle to build SPTs for multicast data distribution. Although an SPT has the shortest path, its establishment is inefficient. Therefore, the PIM-DM mode is not suitable for large- and medium-sized networks.

PIM-SM is a type of sparse mode multicast protocol. It uses the “pull mode” for multicast forwarding, and is suitable for large- and medium-sized networks with sparsely and widely distributed multicast group members.

The basic implementation of PIM-SM is as follows:

l           PIM-SM assumes that no hosts need to receive multicast data. In the PIM-SM mode, routers must specifically request a particular multicast stream before the data is forwarded to them. The core task for PIM-SM to implement multicast forwarding is to build and maintain rendezvous point trees (RPTs). An RPT is rooted at a router in the PIM domain as the common node, or rendezvous point (RP), through which the multicast data travels along the RPT and reaches the receivers.

l           When a receiver is interested in the multicast data addressed to a specific multicast group, the router connected to this receiver sends a join message to the RP corresponding to that multicast group. The path along which the message goes hop by hop to the RP forms a branch of the RPT.

l           When a multicast source sends a multicast packet to a multicast group, the router directly connected with the multicast source first registers the multicast source with the RP by sending a register message to the RP by unicast. The arrival of this message at the RP triggers the establishment of an SPT. Then, the multicast source sends subsequent multicast packets along the SPT to the RP. Upon reaching the RP, the multicast packet is duplicated and delivered to the receivers along the RPT.

 

&  Note:

Multicast traffic is duplicated only where the distribution tree branches, and this process automatically repeats until the multicast traffic reaches the receivers.

 

7.1.4  How PIM-SM Works

The working mechanism of PIM-SM is summarized as follows:

l           Neighbor discovery

l           DR election

l           RP discovery

l           RPT building

l           Multicast source registration

l           Switchover from RPT to SPT

l           Assert

I. Neighbor discovery

PIM-SM uses exactly the same neighbor discovery mechanism as PIM-DM does. Refer to Neighbor discovery.

II. DR election

PIM-SM also uses hello messages to elect a designated router (DR) for a multi-access network. The elected DR will be the only multicast forwarder on this multi-access network.

A DR must be elected on a multi-access network, whether this network connects to multicast sources or to receivers. The DR at the receiver side sends join messages to the RP; the DR at the multicast source side sends register messages to the RP.

 

&  Note:

l      A DR is elected on a multi-access subnet by means of comparison of the priorities and IP addresses carried in hello messages. An elected DR is substantially meaningful to PIM-SM. PIM-DM itself does not require a DR. However, if IGMPv1 runs on any multi-access network in a PIM-DM domain, a DR must be elected to act as the IGMPv1 querier on that multi-access network.

l      IGMP must be enabled on a device that acts as a DR before receivers attached to this device can join multicast groups through this DR.

 

Figure 7-3 DR election

As shown in Figure 7-3, the DR election process is as follows:

1)         Routers on the multi-access network send hello messages to one another. The hello messages contain the router priority for DR election. The router with the highest DR priority will become the DR.

2)         In the case of a tie in the router priority, or if any router in the network does not support carrying the DR-election priority in hello messages, the router with the highest IP address will win the DR election.

When the DR fails, a timeout in receiving hello messages triggers a new DR election process among the other routers.

III. RP discovery

The RP is the core of a PIM-SM domain. For a small-sized, simple network, one RP is enough for forwarding information throughout the network, and the position of the RP can be statically specified on each router in the PIM-SM domain. In most cases, however, a PIM-SM network covers a wide area and a huge amount of multicast traffic needs to be forwarded through the RP. To lessen the RP burden and optimize the topological structure of the RPT, each multicast group should have its own RP. Therefore, a bootstrap mechanism is needed for dynamic RP election. For this purpose, a bootstrap router (BSR) should be configured.

As the administrative core of a PIM-SM domain, the BSR collects advertisement messages (C-RP-Adv messages) from candidate-RPs (C-RPs) and chooses the appropriate C-RP information for each multicast group to form an RP-set, which is a database of mappings between multicast groups and RPs. The BSR then floods the RP-set to the entire PIM-SM domain. Based on the information in these RP-sets, all routers (including the DRs) in the network can calculate the location of the corresponding RPs.

A PIM-SM domain (or an administratively scoped region) can have only one BSR, but can have multiple candidate-BSRs (C-BSRs). Once the BSR fails, a new BSR is automatically elected from the C-BSRs through the bootstrap mechanism to avoid service interruption. Similarly, multiple C-RPs can be configured in a PIM-SM domain, and the position of the RP corresponding to each multicast group is calculated through the BSR mechanism.

Figure 7-4 shows the positions of C-RPs and the BSR in the network.

Figure 7-4 BSR and C-RPs

IV. RPT establishment

Figure 7-5 RPT establishment in a PIM-SM domain

As shown in Figure 7-5, the process of building an RPT is as follows:

1)         When a receiver joins a multicast group G, it uses an IGMP message to inform the directly connected DR.

2)         Upon getting the receiver information, the DR sends a join message, which is hop by hop forwarded to the RP corresponding to the multicast group.

3)         The routers along the path from the DR to the RP form an RPT branch. Each router on this branch generates a (*, G) entry in its forwarding table. The * means any multicast source. The RP is the root, while the DRs are the leaves, of the RPT.

The multicast data addressed to the multicast group G flows through the RP, reaches the corresponding DR along the established RPT, and finally is delivered to the receiver.

When a receiver is no longer interested in the multicast data addressed to a multicast group G, the directly connected DR sends a prune message, which goes hop by hop along the RPT to the RP. Upon receiving the prune message, the upstream node deletes its link with this downstream node from the outgoing interface list and checks whether it itself has receivers for that multicast group. If not, the router continues to forward the prune message to its upstream router.

V. Multicast source registration

The purpose of multicast source registration is to inform the RP about the existence of the multicast source.

Figure 7-6 Multicast registration

As shown in Figure 7-6, the multicast source registers with the RP as follows:

1)         When the multicast source S sends the first multicast packet to a multicast group G, the DR directly connected with the multicast source, upon receiving the multicast packet, encapsulates the packet in a PIM register message, and sends the message to the corresponding RP by unicast.

2)         When the RP receives the register message, it extracts the multicast packet from the register message and forwards the multicast packet down the RPT, and sends an (S, G) join message hop by hop toward the multicast source. Thus, the routers along the path from the RP to the multicast source constitute an SPT branch. Each router on this branch generates an (S, G) entry in its forwarding table. The multicast source is the root, while the RP is the leaf, of the SPT.

3)         The subsequent multicast data from the multicast source travels along the established SPT to the RP, and then the RP forwards the data along the RPT to the receivers. When the multicast traffic arrives at the RP along the SPT, the RP sends a register-stop message to the source-side DR by unicast to stop the source registration process.

VI. Switchover from RPT to SPT

Initially, multicast traffic flows along an RPT from the RP to the receivers. Because the RPT is not necessarily the tree that has the shortest path, upon receiving the first multicast packet along the RPT (by default), or when detecting that the multicast traffic rate reaches a configurable threshold (if so configured), the receiver-side DR initiates an RPT-to-SPT switchover process, as follows:

1)         First, the receiver-side DR sends an (S, G) join message hop by hop to the multicast source. When the join message reaches the source-side DR, all the routers on the path have installed the (S, G) entry in their forwarding table, and thus an SPT branch is established.

2)         Subsequently, the receiver-side DR sends a prune message hop by hop to the RP. Upon receiving this prune message, the RP forwards it toward the multicast source, thus to implement RPT-to-SPT switchover.

After the RPT-to-SPT switchover, multicast data can be directly sent from the source to the receivers. PIM-SM builds SPTs through RPT-to-SPT switchover more economically than PIM-DM does through the “flood and prune” mechanism.

VII. Assert

PIM-SM uses exactly the same assert mechanism as PIM-DM does. Refer to Assert.

7.1.5  Introduction to BSR Admin-scope Regions in PIM-SM

I. Division of PIM-SM domains

Typically, a PIM-SM domain contains only one BSR, which is responsible for advertising RP-set information within the entire PIM-SM domain. The information for all multicast groups is forwarded within the network scope administered by the BSR.

To implement refined management and group-specific services, a PIM-SM domain can be divided into one global scope zone and multiple BSR administratively scoped regions (BSR admin-scope regions).

Specific to particular multicast groups, the BSR administrative scoping mechanism effectively lessens the management workload of a single-BSR domain and provides group-specific services.

II. Relationship between BSR admin-scope regions and the global scope zone

A better understanding of the global scope zone and BSR admin-scope regions should be based on two aspects: geographical space and group address range.

1)         Geographical space

BSR admin-scope regions are logical regions specific to particular multicast groups, and each BSR admin-scope region must be geographically independent of every other one, as shown in Figure 7-7.

Figure 7-7 Relationship between BSR admin-scope regions and the global scope zone in geographic space

BSR admin-scope regions are geographically separated from one another. Namely, a router must not serve different BSR admin-scope regions. In other words, different BSR admin-scope regions contain different routers, whereas the global scope zone covers all routers in the PIM-SM domain.

2)         In terms of multicast group address ranges

Each BSR admin-scope region serves specific multicast groups. Usually, these group address ranges have no intersection; however, they may overlap one another.

Figure 7-8 Relationship between BSR admin-scope regions and the global scope zone in group address ranges

In Figure 7-8, the group address ranges of BSR admin-scope regions BSR1 and BSR2 have no intersection, whereas the group address range of BSR3 is a subset of the address range of BSR1. The group address range of the global scope zone covers all the group addresses other than those of all the BSR admin-scope regions. That is, the group address range of the global scope zone is G − G1 − G2. In other words, there is a supplementary relationship between the global scope zone and all the BSR admin-scope regions in terms of group address ranges.

Relationships between BSR admin-scope regions and the global scope zone are as follows:

l           The global scope zone and each BSR admin-scope region have their own C-RPs and BSR. These devices are effective only in their respective admin-scope regions. Namely, the BSR election and RP election are implemented independently within each admin-scope region.

l           Each BSR admin-scope region has its own boundary. The multicast information specific to the region (such as C-RP-Adv messages and BSR bootstrap messages) can be transmitted only within the region.

l           Likewise, the multicast information in the global scope zone cannot enter any BSR admin-scope region.

l           In terms of multicast information propagation, BSR admin-scope regions are independent of one another and each BSR admin-scope region is independent of the global scope zone, and no overlapping is allowed between any two BSR admin-scope regions.

7.1.6  SSM Model Implementation in PIM

The source-specific multicast (SSM) model and the any-source multicast (ASM) model are two contrasting models. Presently, the ASM model includes the PIM-DM and PIM-SM modes. The SSM model can be implemented by leveraging part of the PIM-SM technique.

The SSM model provides a solution for source-specific multicast. It maintains the relationships between hosts and routers through IGMPv3.

In actual application, part of the PIM-SM technique is adopted to implement the SSM model. In the SSM model, receivers learn exactly where a multicast source is located by other means, such as advertisements or consultation. Therefore, no RP is needed, no RPT is required, there is no source registration process, and there is no need to use the Multicast Source Discovery Protocol (MSDP) for discovering sources in other PIM domains.

Compared with the ASM model, the SSM model only needs the support of IGMPv3 and some subsets of PIM-SM. The operation mechanism of PIM-SSM can be summarized as follows:

l           Neighbor discovery

l           DR election

l           SPT building

I. Neighbor discovery

PIM-SSM uses the same neighbor discovery mechanism as in PIM-DM and PIM-SM. Refer to Neighbor discovery.

II. DR election

PIM-SSM uses the same DR election mechanism as in PIM-SM. Refer to DR election.

III. Construction of SPT

Whether to build an RPT for PIM-SM or an SPT for PIM-SSM depends on whether the multicast group the receiver is to join falls in the SSM group range (SSM group range reserved by IANA is 232.0.0.0/8).

Figure 7-9 SPT establishment in PIM-SSM

As shown in Figure 7-9, Host B and Host C are multicast information receivers. They send IGMPv3 report messages denoted as (Include S, G) to their respective DRs to express their interest in the information from the specific multicast source S. If they need information from sources other than S, they send an (Exclude S, G) report. In either case, the position of multicast source S is explicitly specified for the receivers.

The DR that has received the report first checks whether the group address in this message falls in the SSM group range:

l           If so, the DR sends a subscribe message for channel subscription hop by hop toward the multicast source S. An (Include S, G) or (Exclude S, G) entry is created on all routers on the path from the DR to the source. Thus, an SPT is built in the network, with the source S as its root and receivers as its leaves. This SPT is the transmission channel in PIM-SSM.

l           If not, the PIM-SM process is followed: the DR needs to send a (*, G) join message to the RP, and a multicast source registration process is needed.

 

&  Note:

In PIM-SSM, the “channel” concept is used to refer to a multicast group, and the “channel subscription” concept is used to refer to a join message.

 

7.1.7  Protocols and Standards

PIM-related specifications are as follows:

l           RFC 2362: Protocol Independent Multicast-sparse Mode (PIM-SM): Protocol Specification

l           RFC 3973: Protocol Independent Multicast-Dense Mode (PIM-DM): Protocol Specification(Revised)

l           draft-ietf-pim-sm-v2-new-06: Protocol Independent Multicast-Sparse Mode (PIM-SM)

l           draft-ietf-pim-dm-new-v2-02: Protocol Independent Multicast-Dense Mode (PIM-DM)

l           draft-ietf-pim-v2-dm-03: Protocol Independent Multicast Version 2 Dense Mode Specification

l           draft-ietf-pim-sm-bsr-03: Bootstrap Router (BSR) Mechanism for PIM Sparse Mode

l           draft-ietf-ssm-arch-02: Source-Specific Multicast for IP

l           draft-ietf-ssm-overview-04: An Overview of Source-Specific Multicast (SSM)

7.2  Configuring PIM-DM

7.2.1  PIM-DM Configuration Task List

Complete these tasks to configure PIM-DM:

Task

Remarks

Enabling PIM-DM

Required

Enabling State Refresh

Optional

Configuring State Refresh Parameters

Optional

Configuring PIM-DM Graft Retry Period

Optional

Configuring PIM Common Information

Optional

 

7.2.2  Configuration Prerequisites

Before configuring PIM-DM, complete the following task:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Before configuring PIM-DM, prepare the following data:

l           The interval between state refresh messages

l           Minimum time to wait before receiving a new refresh message

l           TTL value of state refresh messages

l           Graft retry period

7.2.3  Enabling PIM-DM

With PIM-DM enabled, a router sends hello messages periodically to discover PIM neighbors and processes messages from PIM neighbors. When deploying a PIM-DM domain, we recommend that you enable PIM-DM on all interfaces of non-border routers (border routers are PIM-enabled routers located on the boundary of BSR admin-scope regions).

Follow these steps to enable PIM-DM:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable IP multicast routing

multicast routing-enable

Required

Disabled by default

Enter interface view

interface interface-type interface-number

Enable PIM-DM

pim dm

Required

Disabled by default

 

  Caution:

l      All the interfaces of the same router must work in the same PIM mode.

l      PIM-DM cannot be used for multicast groups in the SSM group range.
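The table above can be sketched as a CLI session (the device prompt and VLAN interface number are illustrative assumptions, not taken from this manual):

```
<Sysname> system-view
[Sysname] multicast routing-enable
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim dm
```

Repeat the interface steps on every non-border interface that should run PIM-DM.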

 

7.2.4  Enabling State Refresh

An interface without the state refresh capability cannot forward state refresh messages.

Follow these steps to enable the state refresh capability:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Enable state refresh

pim state-refresh-capable

Optional

Enabled by default

 

7.2.5  Configuring State Refresh Parameters

To avoid the resource-consuming reflooding of unwanted traffic caused by timeout of pruned interfaces, the router directly connected with the multicast source periodically sends an (S, G) state refresh message, which is forwarded hop by hop along the initial multicast flooding path of the PIM-DM domain, to refresh the prune timer state of all the routers on the path.

A router may receive multiple state refresh messages within a short time, of which some may be duplicated messages. To keep a router from receiving such duplicated messages, you can configure the time the router must wait before receiving the next state refresh message. If a new state refresh message is received within the waiting time, the router will discard it; if this timer times out, the router will accept a new state refresh message, refresh its own PIM state, and reset the waiting timer.

Each router decrements the TTL value of a state refresh message by 1 before forwarding it to the downstream node; the message is no longer forwarded when the TTL value comes down to 0. In a small network, a state refresh message may circulate in the network. To effectively control the propagation scope of state refresh messages, you need to configure an appropriate TTL value based on the network size.

Follow these steps to configure state refresh parameters:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the interval between state refresh messages

state-refresh-interval interval

Optional

60 seconds by default

Configure the time to wait before receiving a new state refresh message

state-refresh-rate-limit interval

Optional

30 seconds by default

Configure the TTL value of state refresh messages

state-refresh-ttl ttl-value

Optional

255 by default
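As a sketch, the three parameters above might be tuned together in PIM view (the values shown are illustrative, not recommendations from this manual):

```
[Sysname] pim
[Sysname-pim] state-refresh-interval 90
[Sysname-pim] state-refresh-rate-limit 45
[Sysname-pim] state-refresh-ttl 32
```

A TTL of 32 would confine state refresh messages to within 32 router hops of the source-side router.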

 

7.2.6  Configuring PIM-DM Graft Retry Period

In PIM-DM, graft is the only type of message that uses an acknowledgment mechanism. In a PIM-DM domain, if a router does not receive a graft-ack message from the upstream router within the specified time after sending a graft message, it keeps sending new graft messages at a configurable interval, namely the graft retry period, until it receives a graft-ack from the upstream router.

Follow these steps to configure graft retry period:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Configure graft retry period

pim timer graft-retry interval

Optional

3 seconds by default
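A minimal sketch of this configuration (the interface number and interval value are illustrative):

```
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim timer graft-retry 5
```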

 

&  Note:

For the configuration of other timers in PIM-DM, refer to Configuring PIM Common Timers.

 

7.3  Configuring PIM-SM

 

&  Note:

A device can serve as a C-RP and a C-BSR at the same time.

 

7.3.1  PIM-SM Configuration Task List

Complete these tasks to configure PIM-SM:

Task

Remarks

Configuring PIM-SM

Required

Configuring a BSR

Performing basic C-BSR configuration

Optional

Configuring a global-scope C-BSR

Optional

Configuring an admin-scope C-BSR

Optional

Configuring a BSR admin-scope region boundary

Optional

Configuring global C-BSR parameters

Optional

Configuring an RP

Configuring a static RP

Optional

Configuring a C-RP

Optional

Enabling auto-RP

Optional

Configuring C-RP timers

Optional

Configuring PIM-SM Register Messages

Optional

Disabling RPT-to-SPT Switchover

Optional

Configuring PIM Common Information

Optional

 

7.3.2  Configuration Prerequisites

Before configuring PIM-SM, complete the following task:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Before configuring PIM-SM, prepare the following data:

l           An ACL rule defining a legal BSR address range

l           Hash mask length for RP selection calculation

l           C-BSR priority

l           Bootstrap interval

l           Bootstrap timeout time

l           An ACL rule defining a legal C-RP address range and the range of multicast groups to be served

l           C-RP-Adv interval

l           C-RP timeout time

l           The IP address of a static RP

l           An ACL rule for register message filtering

l           Register suppression timeout time

l           Probe time

l           ACL rules and ACL order for disabling RPT-to-SPT switchover

7.3.3  Enabling PIM-SM

With PIM-SM enabled, a router sends hello messages periodically to discover PIM neighbors and processes messages from PIM neighbors. When deploying a PIM-SM domain, we recommend that you enable PIM-SM on all interfaces of non-border routers (border routers are PIM-enabled routers located on the boundary of BSR admin-scope regions).

Follow these steps to enable PIM-SM:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable IP multicast routing

multicast routing-enable

Required

Disabled by default

Enter interface view

interface interface-type interface-number

Enable PIM-SM

pim sm

Required

Disabled by default

 

  Caution:

All the interfaces of the same router must work in the same PIM mode.
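The enabling steps can be sketched as a CLI session (the device prompt and VLAN interface number are illustrative assumptions):

```
<Sysname> system-view
[Sysname] multicast routing-enable
[Sysname] interface vlan-interface 100
[Sysname-Vlan-interface100] pim sm
```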

 

7.3.4  Configuring a BSR

 

&  Note:

The BSR is dynamically elected from a number of C-BSRs. Because it is unpredictable which router will finally win a BSR election, the commands introduced in this section must be configured on all C-BSRs.

About the Hash mask length and C-BSR priority for RP selection calculation:

l      You can configure these parameters at three levels: global configuration level, global scope level, and BSR admin-scope level.

l      By default, the global scope parameters and BSR admin-scope parameters are those configured at the global configuration level.

l      Parameters configured at the global scope level or BSR admin-scope level have higher priority than those configured at the global configuration level.

 

I. Performing basic C-BSR configuration

A PIM-SM domain can have only one BSR, but must have at least one C-BSR. Any router can be configured as a C-BSR. Elected from C-BSRs, a BSR is responsible for collecting and advertising RP information in the PIM-SM domain.

C-BSRs should be configured on routers in the backbone network. When configuring a router as a C-BSR, make sure that router is PIM-SM enabled. The BSR election process is as follows:

l           Initially, every C-BSR assumes itself to be the BSR of this PIM-SM domain, and uses its interface IP address as the BSR address to send bootstrap messages.

l           When a C-BSR receives the bootstrap message of another C-BSR, it first compares its own priority with the other C-BSR’s priority carried in the message. The C-BSR with a higher priority wins. If there is a tie in the priority, the C-BSR with a higher IP address wins. The loser uses the winner’s BSR address to replace its own BSR address and no longer assumes itself to be the BSR, while the winner keeps its own BSR address and continues assuming itself to be the BSR.

Configuring a legal range of BSR addresses enables filtering of BSR messages based on the address range, thus preventing malicious hosts from initiating attacks by disguising themselves as legitimate BSRs. To protect legitimate BSRs from being maliciously replaced, preventive measures can be taken against the following two situations:

1)         Some malicious hosts intend to fool routers by forging BSR messages and change the RP mapping relationship. Such attacks often occur on border routers. Because a BSR is inside the network whereas hosts are outside the network, you can protect a BSR against attacks from external hosts by enabling border routers to perform neighbor check and RPF check on BSR messages and discard unwanted messages.

2)         When a router in the network is controlled by an attacker, or when an illegal router is present in the network, the attacker can configure such a router as a C-BSR and make it win the BSR election so as to gain the right of advertising RP information in the network. After being configured as a C-BSR, a router automatically floods the network with BSR messages. As a BSR message has a TTL value of 1, the whole network will not be affected as long as the neighbor router discards these BSR messages. Therefore, if a legal BSR address range is configured on all routers in the entire network, all routers will discard BSR messages from outside the legal address range, thus preventing such attacks.

The above-mentioned preventive measures can partially protect the security of BSRs in a network. However, if a legal BSR is controlled by an attacker, the above-mentioned problem will also occur.

Follow these steps to complete basic C-BSR configuration:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure an interface as a C-BSR

c-bsr interface-type interface-number [ hash-length [ priority ] ]

Required

No C-BSR is configured by default

Configure a legal BSR address range

bsr-policy acl-number

Optional

No restrictions on BSR address range by default
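As a sketch, a C-BSR with a legal BSR address range might be configured as follows (the ACL number, subnet, hash mask length of 24, and priority of 10 are illustrative assumptions):

```
[Sysname] acl number 2000
[Sysname-acl-basic-2000] rule permit source 10.1.1.0 0.0.0.255
[Sysname-acl-basic-2000] quit
[Sysname] pim
[Sysname-pim] c-bsr vlan-interface 100 24 10
[Sysname-pim] bsr-policy 2000
```

With this bsr-policy, bootstrap messages whose BSR address falls outside 10.1.1.0/24 are discarded.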

 

&  Note:

Since a large amount of information needs to be exchanged between a BSR and the other devices in the PIM-SM domain, a relatively large bandwidth should be provided between the C-BSR and the other devices in the PIM-SM domain.

 

II. Configuring a global-scope C-BSR

Follow these steps to configure a global-scope C-BSR:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure a global-scope C-BSR

c-bsr global [ hash-length hash-length | priority priority ] *

Required

No global-scope C-BSRs by default

 

III. Configuring an admin-scope C-BSR

By default, a PIM-SM domain has only one BSR. The entire network should be managed by this BSR. To manage your network more effectively and specifically, you can divide a PIM-SM domain into multiple BSR admin-scope regions, with each BSR admin-scope region having one BSR, which serves specific multicast groups.

By limiting each BSR to particular multicast groups, the BSR administrative scoping mechanism effectively lessens the management workload of a single-BSR domain and provides group-specific services.

In a network divided into BSR admin-scope regions, BSRs are elected from multitudinous C-BSRs to serve different multicast groups. The C-RPs in a BSR admin-scope region send C-RP-Adv messages to only the corresponding BSR. The BSR summarizes the advertisement messages into an RP-set and advertises it to all the routers in the BSR admin-scope region. All the routers use the same algorithm to get the RP addresses corresponding to specific multicast groups.

Follow these steps to configure an admin-scope C-BSR:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Enable BSR administrative scoping

c-bsr admin-scope

Required

Disabled by default

Configure an admin-scope C-BSR

c-bsr group group-address { mask | mask-length } [ hash-length hash-length | priority priority ] *

Optional

No admin-scope BSRs by default
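For example, an admin-scope C-BSR serving the administratively scoped range 239.0.0.0/8 might be configured as follows (the hash mask length and priority values are illustrative assumptions):

```
[Sysname] pim
[Sysname-pim] c-bsr admin-scope
[Sysname-pim] c-bsr group 239.0.0.0 8 hash-length 32 priority 10
```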

 

IV. Configuring a BSR admin-scope region boundary

A BSR has its specific service scope. A number of BSR boundary interfaces divide a network into different BSR admin-scope regions. Bootstrap messages cannot cross the admin-scope region boundary, while other types of PIM messages can.

Follow these steps to configure a BSR admin-scope region boundary:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Configure a BSR admin-scope region boundary

pim bsr-boundary

Required

No BSR admin-scope region boundary by default
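A minimal sketch (the interface number is illustrative):

```
[Sysname] interface vlan-interface 200
[Sysname-Vlan-interface200] pim bsr-boundary
```

Bootstrap messages are then dropped at this interface, while other types of PIM messages can still pass.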

 

V. Configuring global C-BSR parameters

The BSR election winner advertises its own IP address and RP-set information throughout the region it serves through bootstrap messages. The BSR floods bootstrap messages throughout the network periodically. Any C-BSR that receives a bootstrap message maintains the BSR state for a configurable period of time (BSR state timeout), during which no BSR election takes place. When the BSR state times out, a new BSR election process will be triggered among the C-BSRs.

Follow these steps to configure global C-BSR parameters:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the Hash mask length for RP selection calculation

c-bsr hash-length hash-length

Optional

30 by default

Configure the C-BSR priority

c-bsr priority priority

Optional

0 by default

Configure the bootstrap interval

c-bsr interval interval

Optional

For the system default, see “Note” below.

Configure the bootstrap timeout time

c-bsr holdtime interval

Optional

For the system default, see “Note” below.

 

&  Note:

About the bootstrap timeout time:

l      By default, the bootstrap timeout time is determined by this formula: Bootstrap timeout = Bootstrap interval × 2 + 10. The default bootstrap interval is 60 seconds, so the default bootstrap timeout = 60 × 2 + 10 = 130 (seconds).

l      If this parameter is manually configured, the system will use the configured value.

About the bootstrap interval:

l      By default, the bootstrap interval is determined by this formula: Bootstrap interval = (Bootstrap timeout – 10) / 2. The default bootstrap timeout is 130 seconds, so the default bootstrap interval = (130 – 10) / 2 = 60 (seconds).

l      If this parameter is manually configured, the system will use the configured value.

 

  Caution:

In configuration, make sure that the bootstrap interval is smaller than the bootstrap timeout time.

 

7.3.5  Configuring an RP

An RP can be manually configured or dynamically elected through the BSR mechanism. For a large PIM network, static RP configuration is a tedious job. Generally, static RP configuration is just a backup means for the dynamic RP election mechanism to enhance the robustness and operation manageability of a multicast network.

I. Configuring a static RP

If there is only one dynamic RP in a network, manually configuring a static RP can avoid communication interruption due to single-point failures and avoid frequent message exchange between C-RPs and the BSR. To enable a static RP to work normally, you must perform this configuration on all the devices in the PIM-SM domain and specify the same RP address.

Follow these steps to configure a static RP

To do…

Use the command…

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure a static RP

static-rp rp-address [ acl-number ] [ preferred ]

Optional

No static RP by default
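As a sketch (the RP address is illustrative), the same command would be entered on every device in the PIM-SM domain:

```
[Sysname] pim
[Sysname-pim] static-rp 10.110.1.1 preferred
```

With the preferred keyword, the static RP takes precedence over a dynamically elected RP.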

 

II. Configuring a C-RP

In a PIM-SM domain, you can configure the routers that you want to become the RP as C-RPs. The BSR collects C-RP information by receiving C-RP-Adv messages from C-RPs or auto-RP announcements from other routers, and organizes the information into an RP-set, which is flooded throughout the entire network. The other routers in the network then calculate the mappings between specific group ranges and the corresponding RPs based on the RP-set. We recommend that you configure C-RPs on backbone routers.

To guard against C-RP spoofing, you need to configure a legal C-RP address range and the range of multicast groups to be served on the BSR. In addition, because every C-BSR has a chance to become the BSR, you need to configure the same filtering policy on all C-BSRs.

Follow these steps to configure a C-RP:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure an interface to be a C-RP

c-rp interface-type interface-number [ group-policy acl-number | priority priority | holdtime hold-interval | advertisement-interval adv-interval ] *

Optional

No C-RPs are configured by default

Configure a legal C-RP address range and the range of multicast groups to be served

crp-policy acl-number

Optional

No restrictions by default
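As a sketch, a C-RP serving a specific group range might be configured as follows (the ACL number, interface, group range, and priority are illustrative assumptions):

```
[Sysname] acl number 2001
[Sysname-acl-basic-2001] rule permit source 225.1.0.0 0.0.255.255
[Sysname-acl-basic-2001] quit
[Sysname] pim
[Sysname-pim] c-rp vlan-interface 100 group-policy 2001 priority 10
```

Here the basic ACL's source field matches multicast group addresses, so this C-RP advertises itself for groups in 225.1.0.0/16 only.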

 

&  Note:

l      When configuring a C-RP, ensure a relatively large bandwidth between this C-RP and the other devices in the PIM-SM domain.

l      An RP can serve multiple multicast groups or all multicast groups. Only one RP can forward multicast traffic for a multicast group at a moment.

 

III. Enabling auto-RP

Auto-RP announcement and discovery messages are respectively addressed to the multicast group addresses 224.0.1.39 and 224.0.1.40. With auto-RP enabled on a device, the device can receive these two types of messages and record the RP information carried in such messages.

Follow these steps to enable auto-RP:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Enable auto-RP

auto-rp enable

Optional

Disabled by default

 

IV. Configuring C-RP timers

To enable the BSR to distribute the RP-set information within the PIM-SM domain, C-RPs must periodically send C-RP-Adv messages to the BSR. The BSR learns the RP-set information from the received messages, and encapsulates its own IP address together with the RP-set information in its bootstrap messages. The BSR then floods the bootstrap messages to all PIM routers (224.0.0.13) in the network.

Each C-RP encapsulates a timeout value in its C-RP-Adv message. Upon receiving this message, the BSR obtains this timeout value and starts a C-RP timeout timer. If the BSR fails to hear a subsequent C-RP-Adv message from the C-RP when the timer times out, the BSR assumes the C-RP to have expired or become unreachable.

Follow these steps to configure C-RP timers:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the C-RP-Adv interval

c-rp advertisement-interval interval

Optional

60 seconds by default

Configure C-RP timeout time

c-rp holdtime interval

Optional

150 seconds by default
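A sketch of tuning both timers together (the values are illustrative; the timeout time should be larger than the advertisement interval so that a C-RP is not aged out between advertisements):

```
[Sysname] pim
[Sysname-pim] c-rp advertisement-interval 30
[Sysname-pim] c-rp holdtime 90
```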

 


7.3.6  Configuring PIM-SM Register Messages

Within a PIM-SM domain, the source-side DR sends register messages to the RP, and these register messages have different multicast source or group addresses. You can configure a filtering rule to filter register messages so that the RP can serve specific multicast groups. If an (S, G) entry is denied by the filtering rule, or the action for this entry is not defined in the filtering rule, the RP will send a register-stop message to the DR to stop the registration process for the multicast data.

To ensure the integrity of register messages during transmission, you can configure the device to calculate the checksum based on the entire register message. However, to reduce the workload of encapsulating data in register messages and for the sake of interoperability, this method of checksum calculation is not recommended.

When receivers stop receiving multicast data addressed to a certain multicast group through the RP (that is, the RP stops serving the receivers of a specific multicast group), or when the RP formally starts receiving multicast data from the multicast source, the RP sends a register-stop message to the source-side DR. Upon receiving this message, the DR stops sending register messages encapsulated with multicast data and enters the register suppression state.

Shortly before the register suppression timer expires, within a length of time defined by the probe time, the DR can send a null register message (a register message without multicast data encapsulated) to the RP to indicate that the multicast source is active. When the register suppression timer expires, the DR starts sending register messages again. A smaller register suppression timeout setting causes the RP to receive bursts of multicast data more frequently, while a larger timeout setting results in a larger delay for new receivers joining the multicast group they are interested in.

Follow these steps to configure PIM-SM register-related parameters:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure a filtering rule for register messages

register-policy acl-number

Optional

No register filtering rule by default

Configure the device to calculate the checksum based on the entire register messages

register-header-checksum

Optional

By default, the checksum is calculated based on the header of register messages

Configure the register suppression timeout time

register-suppression-timeout interval

Optional

60 seconds by default

Configure the probe time

probe-interval interval

Optional

5 seconds by default
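As a sketch, a register filtering rule plus the two timers might be configured as follows (the advanced ACL number, source subnet, and group range are illustrative assumptions):

```
[Sysname] acl number 3000
[Sysname-acl-adv-3000] rule permit ip source 10.110.1.0 0.0.0.255 destination 225.1.1.0 0.0.0.255
[Sysname-acl-adv-3000] quit
[Sysname] pim
[Sysname-pim] register-policy 3000
[Sysname-pim] register-suppression-timeout 90
[Sysname-pim] probe-interval 10
```

Register messages for (S, G) entries not permitted by ACL 3000 trigger a register-stop message toward the DR.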

 

&  Note:

Typically, you need to configure the above-mentioned parameters on the receiver-side DR and the RP only. Since both the DR and RP are elected, however, you should carry out these configurations on the routers that may win the DR election and on the C-RPs that may win RP elections.

 

7.3.7  Disabling RPT-to-SPT Switchover

Initially, multicast traffic flows along an RPT to the receivers. By default, the last-hop switch initiates an RPT-to-SPT switchover process when it receives the first multicast packet from the RPT. You can disable RPT-to-SPT switchover through the following configuration.

Follow these steps to disable RPT-to-SPT switchover:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Disable RPT-to-SPT switchover

spt-switch-threshold infinity [ group-policy acl-number [ order order-value] ]

Optional

By default, the device switches to the SPT immediately after it receives the first multicast packet from the RPT.
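A minimal sketch of the configuration:

```
[Sysname] pim
[Sysname-pim] spt-switch-threshold infinity
```

To keep only certain groups on the RPT, append group-policy acl-number after infinity.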

 

&  Note:

l      The support for the timer spt-switch command depends on the specific device model.

l      Typically, you need to configure the above-mentioned parameters on the receiver-side DR and the RP only. Since both the DR and RP are elected, however, you should carry out these configurations on the routers that may win the DR election and on the C-RPs that may win RP elections.

l      If the multicast source is learned through MSDP, the device switches to the SPT immediately after it receives the first multicast packet from the RPT, regardless of the traffic rate threshold setting (this threshold is not configurable on a switch).

 

7.4  Configuring PIM-SSM

 

&  Note:

The PIM-SSM model needs the support of IGMPv3. Therefore, be sure to enable IGMPv3 on PIM routers with multicast receivers.

 

7.4.1  PIM-SSM Configuration Task List

Complete these tasks to configure PIM-SSM:

Task

Remarks

Enabling PIM-SM

Required

Configuring the SSM Group Range

Optional

Configuring PIM Common Information

Optional

 

7.4.2  Configuration Prerequisites

Before configuring PIM-SSM, complete the following task:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

Before configuring PIM-SSM, prepare the following data:

l           The SSM group range

7.4.3  Enabling PIM-SM

The SSM model is implemented based on some subsets of PIM-SM. Therefore, a router is PIM-SSM capable after you enable PIM-SM on it.

When deploying a PIM-SM domain, we recommend that you enable PIM-SM on all interfaces of non-border routers (border routers are PIM-enabled routers located on the boundary of BSR admin-scope regions).

Follow these steps to enable PIM-SM:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable IP multicast routing

multicast routing-enable

Required

Disabled by default

Enter interface view

interface interface-type interface-number

Enable PIM-SM

pim sm

Required

Disabled by default

 

  Caution:

All the interfaces of the same router must work in the same PIM mode.

 

7.4.4  Configuring the SSM Group Range

Whether the information from a multicast source is delivered to the receivers based on the PIM-SSM model or the PIM-SM model depends on whether the group address in the (S, G) channel subscribed to by the receivers falls in the SSM group range. All PIM-SM-enabled interfaces assume that multicast groups within this address range use the PIM-SSM model.

Follow these steps to configure an SSM multicast group range:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the SSM group range

ssm-policy acl-number

Optional

232.0.0.0/8 by default
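As a sketch, narrowing the SSM group range to 232.1.0.0/16 might look like this (the ACL number is illustrative; here the basic ACL's source field matches group addresses):

```
[Sysname] acl number 2000
[Sysname-acl-basic-2000] rule permit source 232.1.0.0 0.0.255.255
[Sysname-acl-basic-2000] quit
[Sysname] pim
[Sysname-pim] ssm-policy 2000
```

The same ssm-policy must be configured on every router in the domain.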

 

&  Note:

The commands introduced in this section are to be configured on all routers in the PIM domain.

 

  Caution:

l      Make sure that the same SSM group range is configured on all routers in the entire domain. Otherwise, multicast information cannot be delivered through the SSM model.

l      When a member of a multicast group in the SSM group range sends an IGMPv1 or IGMPv2 report message, the device does not trigger a (*, G) join.

 

7.5  Configuring PIM Common Information

 

&  Note:

For the configuration tasks described in this section:

l      Configurations performed in PIM view are effective to all interfaces, while configurations performed in interface view are effective to the current interface only.

l      If the same function or parameter is configured in both PIM view and interface view, the configuration performed in interface view is given priority, regardless of the configuration sequence.

 

7.5.1  PIM Common Information Configuration Task List

Complete these tasks to configure PIM common information:

Task

Remarks

Configuring a PIM Filter

Optional

Configuring PIM Hello Options

Optional

Configuring PIM Common Timers

Optional

Configuring Join/Prune Message Limits

Optional

 

7.5.2  Configuration Prerequisites

Before configuring PIM common information, complete the following tasks:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Configure PIM-DM, PIM-SM, or PIM-SSM.

Before configuring PIM common information, prepare the following data:

l           An ACL rule as multicast data filter

l           Priority for DR election (global value/interface level value)

l           PIM neighbor timeout time (global value/interface value)

l           Prune delay (global value/interface level value)

l           Prune override interval (global value/interface level value)

l           Hello interval (global value/interface level value)

l           Maximum delay between hello messages (interface level value)

l           Assert timeout time (global value/interface value)

l           Join/prune interval (global value/interface level value)

l           Join/prune timeout (global value/interface value)

l           Multicast source lifetime

l           Maximum size of join/prune messages

l           Maximum number of (S, G) entries in a join/prune message

7.5.3  Configuring a PIM Filter

In both a PIM-DM domain and a PIM-SM domain, routers can check passing multicast data against the configured filtering rules and determine whether to continue forwarding the data. In other words, PIM routers can act as multicast data filters. These filters help implement traffic control on one hand, and control the information available to downstream receivers to enhance data security on the other hand.

Follow these steps to configure a PIM filter:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure a multicast data filter

source-policy acl-number

Required

No multicast data filter by default

 

&  Note:

l      Generally, a smaller distance from the filter to the multicast source results in a more remarkable filtering effect.

l      This filter works not only on independent multicast data but also on multicast data encapsulated in register messages.
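
The filter configuration can be sketched as follows. The ACL number 2001 and the source address 10.110.5.100 are illustrative values; the basic ACL permits multicast data from this source only, so data from all other sources is filtered out.

<Sysname> system-view

[Sysname] acl number 2001

[Sysname-acl-basic-2001] rule permit source 10.110.5.100 0

[Sysname-acl-basic-2001] quit

[Sysname] pim

[Sysname-pim] source-policy 2001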

 

7.5.4  Configuring PIM Hello Options

In both a PIM-DM domain and a PIM-SM domain, the hello messages exchanged among routers contain many configurable options, including:

l           DR_Priority (for PIM-SM only): priority for DR election. The device with the highest priority wins the DR election. You can configure this parameter on all the routers in a multi-access network directly connected to multicast sources or receivers.

l           Holdtime: the timeout time of PIM neighbor reachability state. If a router has received no hello message from a neighbor when this timer expires, it assumes that the neighbor has failed or become unreachable. You can configure this parameter on all routers in the PIM domain. If you configure different values for this timer on different neighboring routers, the largest value will take effect.

l           LAN_Prune_Delay: the delay of prune messages on a multi-access network. This option consists of LAN-delay (namely, prune delay), override-interval, and neighbor tracking flag bit. You can configure this parameter on all routers in the PIM domain. If different LAN-delay or override-interval values result from the negotiation among all the PIM routers, the largest value will take effect.

The LAN-delay setting will cause the upstream routers to delay processing received prune messages. If the LAN-delay setting is too small, it may cause the upstream router to stop forwarding multicast packets before a downstream router sends a prune override message. Therefore, be cautious when configuring this parameter.

The override-interval sets the length of time a downstream router is allowed to wait before sending a prune override message. When a router receives a prune message from a downstream router, it does not perform the prune action immediately; instead, it maintains the current forwarding state for a period of time defined by LAN-delay. If the downstream router needs to continue receiving multicast data, it must send a prune override message within the prune override interval; otherwise, the upstream router will perform the prune action when the LAN-delay timer times out.

A hello message sent from a PIM router contains a generation ID option. The generation ID is a random value for the interface on which the hello message is sent. Normally, the generation ID of a PIM router does not change unless the status of the router changes (for example, when PIM is just enabled on the interface or the device is restarted). When the router starts or restarts sending hello messages, it generates a new generation ID. If a PIM router finds that the generation ID in a hello message from the upstream router has changed, it assumes that the status of the upstream neighbor is lost or the upstream neighbor has changed. In this case, it triggers a join message for state update.

If you disable join suppression (namely, enable neighbor tracking), the upstream router will explicitly track which downstream routers are joined to it. The join suppression feature should be enabled or disabled on all PIM routers on the same subnet.

I. Configuring hello options globally

Follow these steps to configure hello options globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the priority for DR election

hello-option dr-priority priority

Optional

1 by default

Configure PIM neighbor timeout time

hello-option holdtime interval

Optional

105 seconds by default

Configure the prune delay time (LAN-delay)

hello-option lan-delay interval

Optional

500 milliseconds by default

Configure the prune override interval

hello-option override-interval interval

Optional

2,500 milliseconds by default

Disable join suppression

hello-option neighbor-tracking

Optional

Enabled by default
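
For example, the following sketch (the priority and holdtime values are illustrative) sets a global DR priority of 3 and a global PIM neighbor timeout time of 60 seconds:

<Sysname> system-view

[Sysname] pim

[Sysname-pim] hello-option dr-priority 3

[Sysname-pim] hello-option holdtime 60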

 

II. Configuring hello options on an interface

Follow these steps to configure hello options on an interface:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Configure the priority for DR election

pim hello-option dr-priority priority

Optional

1 by default

Configure PIM neighbor timeout time

pim hello-option holdtime interval

Optional

105 seconds by default

Configure the prune delay time (LAN-delay)

pim hello-option lan-delay interval

Optional

500 milliseconds by default

Configure the prune override interval

pim hello-option override-interval interval

Optional

2,500 milliseconds by default

Disable join suppression

pim hello-option neighbor-tracking

Optional

Enabled by default

Configure the interface to reject hello messages without a generation ID

pim require-genid

Optional

By default, hello messages without Generation_ID are accepted
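
For example, the following sketch (the interface number and priority value are illustrative) sets the DR priority on VLAN-interface 100 and configures the interface to reject hello messages without a generation ID:

<Sysname> system-view

[Sysname] interface vlan-interface 100

[Sysname-Vlan-interface100] pim hello-option dr-priority 3

[Sysname-Vlan-interface100] pim require-genid

Because a configuration performed in interface view takes priority over one performed in PIM view, this DR priority overrides any globally configured value on this interface.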

 

7.5.5  Configuring PIM Common Timers

PIM routers discover PIM neighbors and maintain PIM neighboring relationships with other routers by periodically sending out hello messages.

Upon receiving a hello message, a PIM router waits a random period, which is equal to or smaller than the maximum delay between hello messages, before sending out a hello message. This avoids collisions that occur when multiple PIM routers send hello messages simultaneously.

Any router that has lost the assert election will prune its downstream interface and maintain the assert state for a period of time. When the assert state times out, the assert losers will resume multicast forwarding.

A PIM router periodically sends join/prune messages to its upstream for state update. A join/prune message contains the join/prune timeout time. The upstream router sets a join/prune timeout timer for each pruned downstream interface, and resumes the forwarding state of the pruned interface when this timer times out.

When a router fails to receive subsequent multicast data from the multicast source S, the router will not immediately delete the corresponding (S, G) entries; instead, it maintains (S, G) entries for a period of time, namely the multicast source lifetime, before deleting the (S, G) entries.

I. Configuring PIM common timers globally

Follow these steps to configure PIM common timers globally:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the hello interval

timer hello interval

Optional

30 seconds by default

Configure assert timeout time

holdtime assert interval

Optional

180 seconds by default

Configure the join/prune interval

timer join-prune interval

Optional

60 seconds by default

Configure the join/prune timeout time

holdtime join-prune interval

Optional

210 seconds by default

Configure the multicast source lifetime

source-lifetime interval

Optional

210 seconds by default
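
For example, the following sketch (the values are illustrative) sets a global hello interval of 40 seconds and a multicast source lifetime of 300 seconds:

<Sysname> system-view

[Sysname] pim

[Sysname-pim] timer hello 40

[Sysname-pim] source-lifetime 300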

 

II. Configuring PIM common timers on an interface

Follow these steps to configure PIM common timers on an interface:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Configure the hello interval

pim timer hello interval

Optional

30 seconds by default

Configure the maximum delay between hello messages

pim triggered-hello-delay interval

Optional

5 seconds by default

Configure assert timeout time

pim holdtime assert interval

Optional

180 seconds by default

Configure the join/prune interval

pim timer join-prune interval

Optional

60 seconds by default

Configure the join/prune timeout time

pim holdtime join-prune interval

Optional

210 seconds by default

 

&  Note:

If there are no special networking requirements, we recommend that you use the default settings.
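
If nondefault values are required on a specific interface, the configuration can be sketched as follows (the interface number and timer values are illustrative):

<Sysname> system-view

[Sysname] interface vlan-interface 100

[Sysname-Vlan-interface100] pim timer hello 40

[Sysname-Vlan-interface100] pim triggered-hello-delay 3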

 

7.5.6  Configuring Join/Prune Message Limits

A larger join/prune message size results in the loss of a larger amount of information when a message is lost; with a smaller join/prune message size, the loss of a single message has a relatively minor impact.

By controlling the maximum number of (S, G) entries in a join/prune message, you can effectively reduce the number of (S, G) entries sent per unit of time.

Follow these steps to configure join/prune message limits:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter PIM view

pim

Configure the maximum size of a join/prune message

jp-pkt-size packet-size

Optional

8,100 bytes by default

Configure the maximum number of (S, G) entries in a join/prune message

jp-queue-size queue-size

Optional

1,020 by default
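
For example, the following sketch (the values are illustrative) reduces the maximum join/prune message size and the maximum number of (S, G) entries per message:

<Sysname> system-view

[Sysname] pim

[Sysname-pim] jp-pkt-size 4096

[Sysname-pim] jp-queue-size 512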

 

7.6  Displaying and Maintaining PIM

To do...

Use the command...

Remarks

View the BSR information in the PIM-SM domain and locally configured C-RP information in effect

display pim bsr-info

Available in any view

View the information of unicast routes used by PIM

display pim claimed-route [ source-address ]

Available in any view

View the number of PIM control messages

display pim control-message counters [ message-type { probe | register | register-stop } | [ interface interface-type interface-number | message-type { assert | bsr | crp | graft | graft-ack | hello | join-prune | state-refresh } ] * ]

Available in any view

View the information about unacknowledged graft messages

display pim grafts

Available in any view

View the PIM information on an interface or all interfaces

display pim interface [ interface-type interface-number ] [ verbose ]

Available in any view

View the information of join/prune messages to send

display pim join-prune mode { sm [ flags flag-value ] | ssm } [ interface interface-type interface-number | neighbor neighbor-address ] * [ verbose ]

Available in any view

View PIM neighboring information

display pim neighbor [ interface interface-type interface-number | neighbor-address | verbose ] *

Available in any view

View the content of the PIM routing table

display pim routing-table [ group-address [ mask { mask-length | mask } ] | source-address [ mask { mask-length | mask } ] | incoming-interface [ interface-type interface-number | register ] | outgoing-interface { include | exclude | match } { interface-type interface-number | register } | mode mode-type | flags flag-value | fsm ] *

Available in any view

View the RP information

display pim rp-info [ group-address ]

Available in any view

Reset PIM control message counters

reset pim control-message counters [ interface interface-type interface-number ]

Available in user view

 

7.7  PIM Configuration Examples

7.7.1  PIM-DM Configuration Example

I. Network requirements

l           Receivers receive VOD information through multicast. The receiver groups of different organizations form stub networks, and one or more receiver hosts exist in each stub network. The entire PIM domain operates in the dense mode.

l           Host A and Host C are multicast receivers in two stub networks.

l           Switch D connects to the network that comprises the multicast source (Source) through VLAN-interface 300.

l           Switch A connects to stub network N1 through VLAN-interface 100, and to Switch D through VLAN-interface 103.

l           Switch B and Switch C connect to stub network N2 through their respective VLAN-interface 200, and to Switch D through VLAN-interface 101 and VLAN-interface 102 respectively.

l           IGMPv2 is to run between Switch A and N1, and between Switch B/Switch C and N2.

II. Network diagram

Device

Interface

IP address

Device

Interface

IP address

Switch A

Vlan-int100

10.110.1.1/24

Switch D

Vlan-int300

10.110.5.1/24

 

Vlan-int103

192.168.1.1/24

 

Vlan-int103

192.168.1.2/24

Switch B

Vlan-int200

10.110.2.1/24

 

Vlan-int101

192.168.2.2/24

 

Vlan-int101

192.168.2.1/24

 

Vlan-int102

192.168.3.2/24

Switch C

Vlan-int200

10.110.2.2/24

 

 

 

 

Vlan-int102

192.168.3.1/24

 

 

 

Figure 7-10 Network diagram for PIM-DM configuration

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 7-10. Detailed configuration steps are omitted here.

Configure the OSPF protocol for interoperation among the switches in the PIM-DM domain. Ensure the network-layer interoperation among Switch A, Switch B, Switch C and Switch D in the PIM-DM domain and enable dynamic update of routing information among the switches through a unicast routing protocol. Detailed configuration steps are omitted here.

2)         Enable IP multicast routing, and enable PIM-DM on each interface

# Enable IP multicast routing on Switch A, enable PIM-DM on each interface, and enable IGMPv2 on VLAN-interface 100, which connects Switch A to the stub network.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] pim dm

[SwitchA-Vlan-interface100] quit

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim dm

[SwitchA-Vlan-interface103] quit

The configuration on Switch B and Switch C is similar to that on Switch A.

# Enable IP multicast routing on Switch D, and enable PIM-DM on each interface.

<SwitchD> system-view

[SwitchD] multicast routing-enable

[SwitchD] interface vlan-interface 300

[SwitchD-Vlan-interface300] pim dm

[SwitchD-Vlan-interface300] quit

[SwitchD] interface vlan-interface 103

[SwitchD-Vlan-interface103] pim dm

[SwitchD-Vlan-interface103] quit

[SwitchD] interface vlan-interface 101

[SwitchD-Vlan-interface101] pim dm

[SwitchD-Vlan-interface101] quit

[SwitchD] interface vlan-interface 102

[SwitchD-Vlan-interface102] pim dm

[SwitchD-Vlan-interface102] quit

3)         Verify the configuration

Use the display pim interface command to view the PIM configuration and running status on each interface. For example:

# View the PIM configuration information on Switch D.

[SwitchD] display pim interface

Interface           NbrCnt HelloInt   DR-Pri     DR-Address

 Vlan300             0      30         1         10.110.5.1     (local)

 Vlan103             1      30         1         192.168.1.2    (local)

 Vlan101             1      30         1         192.168.2.2    (local)

 Vlan102             1      30         1         192.168.3.2    (local)

Carry out the display pim neighbor command to view the PIM neighboring relationships among the switches. For example:

# View the PIM neighboring relationships on Switch D.

[SwitchD] display pim neighbor

 Total Number of Neighbors = 3

 

 Neighbor       Interface           Uptime       Expires      Dr-Priority

 192.168.1.1    Vlan103             00:02:22     00:01:27     1

 192.168.2.1    Vlan101             00:00:22     00:01:29     3

 192.168.3.1    Vlan102             00:00:23     00:01:31     5

Assume that Host A needs to receive the information addressed to a multicast group G (225.1.1.1/24). After multicast source S (10.110.5.100/24) sends multicast packets to the multicast group G, an SPT is established through traffic flooding. Switches on the SPT path (Switch A and Switch D) have their (S, G) entries. Host A joins the multicast group G through IGMP, and a (*, G) entry is generated on Switch A. You can use the display pim routing-table command to view the PIM routing table information on each switch. For example:

# View the PIM routing table information on Switch A.

[SwitchA] display pim routing-table

Total 1 (*, G) entry; 1 (S, G) entry

 

 (*, 225.1.1.1)

     Protocol: pim-dm, Flag: WC

     UpTime: 00:04:25

     Upstream interface: NULL

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface100

                  Protocol: igmp, UpTime: 00:04:25, Expires: never

 (10.110.5.100, 225.1.1.1)

     Protocol: pim-dm, Flag: ACT

     UpTime: 00:06:14

     Upstream interface: Vlan-interface103,

         Upstream neighbor: 192.168.1.2

         RPF prime neighbor: 192.168.1.2

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface100

                  Protocol: pim-dm, UpTime: 00:04:25, Expires: never

The information on Switch B and Switch C is similar to that on Switch A.

# View the PIM routing table information on Switch D.

[SwitchD] display pim routing-table

Total 0 (*, G) entry; 1 (S, G) entry

 

 (10.110.5.100, 225.1.1.1)

     Protocol: pim-dm, Flag: LOC ACT

     UpTime: 00:03:27

     Upstream interface: Vlan-interface300

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

       Total number of downstreams: 3

           1: Vlan-interface103

                  Protocol: pim-dm, UpTime: 00:03:27, Expires: never

           2: Vlan-interface101

                  Protocol: pim-dm, UpTime: 00:03:27, Expires: never

           3: Vlan-interface102

                  Protocol: pim-dm, UpTime: 00:03:27, Expires: never

7.7.2  PIM-SM Configuration Example

I. Network requirements

l           Receivers receive VOD information through multicast. The receiver groups of different organizations form stub networks, and one or more receiver hosts exist in each stub network. The entire PIM domain operates in the sparse mode (not divided into different BSR admin-scope regions).

l           Host A and Host C are multicast receivers in two stub networks.

l           Switch D connects to the network that comprises the multicast source (Source) through VLAN-interface 300.

l           Switch A connects to stub network N1 through VLAN-interface 100, and to Switch D and Switch E through VLAN-interface 101 and VLAN-interface 102 respectively.

l           Switch B and Switch C connect to stub network N2 through their respective VLAN-interface 200, and to Switch E through VLAN-interface 103 and VLAN-interface 104 respectively.

l           Switch E connects to Switch A, Switch B, Switch C and Switch D, and its VLAN-interface 102 acts as a C-BSR and a C-RP, with the range of multicast groups served by the C-RP being 225.1.1.0/24.

l           IGMPv2 is to run between Switch A and N1, and between Switch B/Switch C and N2.

II. Network diagram

Device

Interface

IP address

Device

Interface

IP address

Switch A

Vlan-int100

10.110.1.1/24

Switch D

Vlan-int300

10.110.5.1/24

 

Vlan-int101

192.168.1.1/24

 

Vlan-int101

192.168.1.2/24

 

Vlan-int102

192.168.9.1/24

 

Vlan-int105

192.168.4.2/24

Switch B

Vlan-int200

10.110.2.1/24

Switch E

Vlan-int104

192.168.3.2/24

 

Vlan-int103

192.168.2.1/24

 

Vlan-int103

192.168.2.2/24

Switch C

Vlan-int200

10.110.2.2/24

 

Vlan-int102

192.168.9.2/24

 

Vlan-int104

192.168.3.1/24

 

Vlan-int105

192.168.4.1/24

Figure 7-11 Network diagram for PIM-SM domain configuration

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 7-11. Detailed configuration steps are omitted here.

Configure the OSPF protocol for interoperation among the switches in the PIM-SM domain. Ensure the network-layer interoperation among Switch A, Switch B, Switch C, Switch D and Switch E in the PIM-SM domain and enable dynamic update of routing information among the switches through a unicast routing protocol. Detailed configuration steps are omitted here.

2)         Enable IP multicast routing, and enable PIM-SM on each interface

# Enable IP multicast routing on Switch A, enable PIM-SM on each interface, and enable IGMPv2 on VLAN-interface 100, which connects Switch A to the stub network.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] pim sm

[SwitchA-Vlan-interface100] quit

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim sm

[SwitchA-Vlan-interface101] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim sm

[SwitchA-Vlan-interface102] quit

The configuration on Switch B and Switch C is similar to that on Switch A. The configuration on Switch D and Switch E is also similar to that on Switch A except that it is not necessary to enable IGMP on the corresponding interfaces on these two switches.

3)         Configure a C-BSR and a C-RP

# Configure the service scope of RP advertisements and the positions of the C-BSR and C-RP on Switch E.

<SwitchE> system-view

[SwitchE] acl number 2005

[SwitchE-acl-basic-2005] rule permit source 225.1.1.0 0.0.0.255

[SwitchE-acl-basic-2005] quit

[SwitchE] pim

[SwitchE-pim] c-bsr vlan-interface 102

[SwitchE-pim] c-rp vlan-interface 102 group-policy 2005

[SwitchE-pim] quit

4)         Verify the configuration

Carry out the display pim interface command to view the PIM configuration and running status on each interface. For example:

# View the PIM configuration information on Switch A.

[SwitchA] display pim interface

 Interface             NbrCnt HelloInt   DR-Pri   DR-Address

 Vlan100               0      30         1        10.110.1.1     (local)

 Vlan101               1      30         1        192.168.1.2

 Vlan102               1      30         1        192.168.9.2

To view the BSR election information and the locally configured C-RP information in effect on a switch, use the display pim bsr-info command. For example:

# View the BSR information and the locally configured C-RP information in effect on Switch A.

[SwitchA] display pim bsr-info

 Elected BSR Address: 192.168.9.2

     Priority: 0

     Hash mask length: 30

     State: Accept Preferred

     Scope: Not scoped

     Uptime: 01:40:40

     Next BSR message scheduled at: 00:01:42

# View the BSR information and the locally configured C-RP information in effect on Switch E.

[SwitchE] display pim bsr-info

  Elected BSR Address: 192.168.9.2

     Priority: 0

     Hash mask length: 30

     State: Elected

     Scope: Not scoped

     Uptime: 00:00:18

     Next BSR message scheduled at: 00:01:52

 Candidate BSR Address: 192.168.9.2

     Priority: 0

     Hash mask length: 30

     State: Pending

     Scope: Not scoped

 

Candidate RP: 192.168.9.2(Vlan-interface102)

     Priority: 0

     HoldTime: 150

     Advertisement Interval: 60

     Next advertisement scheduled at: 00:00:48

To view the RP information discovered on a switch, use the display pim rp-info command. For example:

# View the RP information on Switch A.

[SwitchA] display pim rp-info

 Vpn-instance: public net

PIM-SM BSR RP information:

 Group/MaskLen: 225.1.1.0/24

     RP: 192.168.9.2

     Priority: 0

     HoldTime: 150

     Uptime: 00:51:45

     Expires: 00:02:22

Assume that Host A needs to receive information addressed to the multicast group G (225.1.1.1/24). An RPT will be built between Switch A and Switch E. When the multicast source S (10.110.5.100/24) registers with the RP, an SPT will be built between Switch D and Switch E. Upon receiving multicast data, Switch A immediately switches from the RPT to the SPT. Switches on the RPT path (Switch A and Switch E) have a (*, G) entry, while switches on the SPT path (Switch A and Switch D) have an (S, G) entry. You can use the display pim routing-table command to view the PIM routing table information on the switches. For example:

# View the PIM routing table information on Switch A.

[SwitchA] display pim routing-table

Total 1 (*, G) entry; 1 (S, G) entry

 (*, 225.1.1.1)

     RP: 192.168.9.2

     Protocol: pim-sm, Flag: WC

     UpTime: 00:13:46

     Upstream interface: Vlan-interface102,

         Upstream neighbor: 192.168.9.2

         RPF prime neighbor: 192.168.9.2

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface100

                  Protocol: igmp, UpTime: 00:13:46, Expires:00:03:06

 (10.110.5.100, 225.1.1.1)

     RP: 192.168.9.2

     Protocol: pim-sm, Flag: SPT ACT

     UpTime: 00:00:42

     Upstream interface: Vlan-interface101,

         Upstream neighbor: 192.168.9.2

         RPF prime neighbor: 192.168.9.2

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface100

                  Protocol: pim-sm, UpTime: 00:00:42, Expires:00:03:06

The information on Switch B and Switch C is similar to that on Switch A.

# View the PIM routing table information on Switch D.

[SwitchD] display pim routing-table

Total 0 (*, G) entry; 1 (S, G) entry

 (10.110.5.100, 225.1.1.1)

     RP: 192.168.9.2

     Protocol: pim-sm, Flag: SPT ACT

     UpTime: 00:00:42

     Upstream interface: Vlan-interface300

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface105

                  Protocol: pim-sm, UpTime: 00:00:42, Expires:00:02:06

# View the PIM routing table information on Switch E.

[SwitchE] display pim routing-table

Total 1 (*, G) entry; 0 (S, G) entry

 

 (*, 225.1.1.1)

     RP: 192.168.9.2 (local)

     Protocol: pim-sm, Flag: WC

     UpTime: 00:13:16

     Upstream interface: Register

         Upstream neighbor: 192.168.4.2

         RPF prime neighbor: 192.168.4.2

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface102

                  Protocol: pim-sm, UpTime: 00:13:16, Expires: 00:03:22

7.7.3  PIM-SSM Configuration Example

I. Network requirements

l           Receivers receive VOD information through multicast. The receiver groups of different organizations form stub networks, and one or more receiver hosts exist in each stub network. The entire PIM domain operates in the SSM mode.

l           Host A and Host C are multicast receivers in two stub networks.

l           Switch D connects to the network that comprises the multicast source (Source) through VLAN-interface 300.

l           Switch A connects to stub network N1 through VLAN-interface 100, and to Switch D and Switch E through VLAN-interface 101 and VLAN-interface 102 respectively.

l           Switch B and Switch C connect to stub network N2 through their respective VLAN-interface 200, and to Switch E through VLAN-interface 103 and VLAN-interface 104 respectively.

l           Switch E connects to Switch A, Switch B, Switch C and Switch D.

l           The SSM group range is 232.1.1.0/24.

l           IGMPv3 is to run between Switch A and N1, and between Switch B/Switch C and N2.

II. Network diagram

Device

Interface

IP address

Device

Interface

IP address

Switch A

Vlan-int100

10.110.1.1/24

Switch D

Vlan-int300

10.110.5.1/24

 

Vlan-int101

192.168.1.1/24

 

Vlan-int101

192.168.1.2/24

 

Vlan-int102

192.168.9.1/24

 

Vlan-int105

192.168.4.2/24

Switch B

Vlan-int200

10.110.2.1/24

Switch E

Vlan-int104

192.168.3.2/24

 

Vlan-int103

192.168.2.1/24

 

Vlan-int103

192.168.2.2/24

Switch C

Vlan-int200

10.110.2.2/24

 

Vlan-int102

192.168.9.2/24

 

Vlan-int104

192.168.3.1/24

 

Vlan-int105

192.168.4.1/24

Figure 7-12 Network diagram for PIM-SSM configuration

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 7-12. Detailed configuration steps are omitted here.

Configure the OSPF protocol for interoperation among the switches in the PIM-SM domain. Ensure the network-layer interoperation among Switch A, Switch B, Switch C, Switch D and Switch E in the PIM-SM domain and enable dynamic update of routing information among the switches through a unicast routing protocol. Detailed configuration steps are omitted here.

2)         Enable IP multicast routing, and enable PIM-SM on each interface

# Enable IP multicast routing on Switch A, enable PIM-SM on each interface, and enable IGMPv3 on VLAN-interface 100, which connects Switch A to the stub network.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] igmp enable

[SwitchA-Vlan-interface100] igmp version 3

[SwitchA-Vlan-interface100] pim sm

[SwitchA-Vlan-interface100] quit

[SwitchA] interface vlan-interface 101

[SwitchA-Vlan-interface101] pim sm

[SwitchA-Vlan-interface101] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim sm

[SwitchA-Vlan-interface102] quit

The configuration on Switch B and Switch C is similar to that on Switch A. The configuration on Switch D and Switch E is also similar to that on Switch A except that it is not necessary to enable IGMP on the corresponding interfaces on these two switches.

3)         Configure the SSM group range

# Configure the SSM group range to be 232.1.1.0/24 on Switch A.

[SwitchA] acl number 2000

[SwitchA-acl-basic-2000] rule permit source 232.1.1.0 0.0.0.255

[SwitchA-acl-basic-2000] quit

[SwitchA] pim

[SwitchA-pim] ssm-policy 2000

[SwitchA-pim] quit

The configuration on Switch B, Switch C, Switch D and Switch E is similar to that on Switch A.

4)         Verify the configuration

Carry out the display pim interface command to view the PIM configuration and running status on each interface. For example:

# View the PIM configuration information on Switch A.

[SwitchA] display pim interface

 Interface             NbrCnt HelloInt   DR-Pri     DR-Address

 Vlan100               0      30         1          10.110.1.1     (local)

 Vlan101               1      30         1          192.168.1.2

 Vlan102               1      30         1          192.168.9.2

Assume that Host A needs to receive the information that a specific multicast source S (10.110.5.100/24) sends to multicast group G (232.1.1.1/24). Switch A builds an SPT toward the multicast source. Switches on the SPT path (Switch A and Switch D) have generated an (S, G) entry, while Switch E, which is not on the SPT path, has no multicast routing entries. You can use the display pim routing-table command to view the PIM routing table information on each switch. For example:

# View the PIM routing table information on Switch A.

[SwitchA] display pim routing-table

Total 0 (*, G) entry; 1 (S, G) entry

 

 (10.110.5.100, 232.1.1.1)

     Protocol: pim-ssm, Flag:

     UpTime: 00:13:25

     Upstream interface: Vlan-interface101

         Upstream neighbor: 192.168.1.2

         RPF prime neighbor: 192.168.1.2

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface100

                  Protocol: igmp, UpTime: 00:13:25, Expires: -

The information on Switch B and Switch C is similar to that on Switch A.

# View the PIM routing table information on Switch D.

[SwitchD] display pim routing-table

Total 0 (*, G) entry; 1 (S, G) entry

 (10.110.5.100, 232.1.1.1)

     Protocol: pim-ssm, Flag:LOC

     UpTime: 00:12:05

     Upstream interface: Vlan-interface300

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

       Total number of downstreams: 1

           1: Vlan-interface105

                  Protocol: pim-ssm, UpTime: 00:12:05, Expires: 00:03:25

7.8  Troubleshooting PIM Configuration

7.8.1  Failure of Building a Multicast Distribution Tree Correctly

I. Symptom

None of the routers in the network (including routers directly connected with multicast sources and receivers) has multicast forwarding entries. That is, a multicast distribution tree cannot be built correctly and clients cannot receive multicast data.

II. Analysis

l           When PIM-DM runs on the entire network, multicast data is flooded from the first hop router connected with the multicast source to the last hop router connected with the clients along the SPT. When the multicast data is flooded to a router, no matter which router it is, it creates (S, G) entries only if it has a route to the multicast source. If the router does not have a route to the multicast source, or if PIM-DM is not enabled on the router’s RPF interface to the multicast source, the router cannot create (S, G) entries.

l           When PIM-SM runs on the entire network, and when a router is to join the SPT, the router creates (S, G) entries only if it has a route to the multicast source. If the router does not have a route to the multicast source, or if PIM-SM is not enabled on the router’s RPF interface to the multicast source, the router cannot create (S, G) entries.

l           When a multicast router receives a multicast packet, it searches the existing unicast routing table for the optimal route to the RPF check object. The outgoing interface of this route will act as the RPF interface and the next hop will be taken as the RPF neighbor. The RPF interface completely relies on the existing unicast route, and is independent of PIM. The RPF interface must be PIM-enabled, and the RPF neighbor must also be a PIM neighbor. If PIM is not enabled on the router where the RPF interface or the RPF neighbor resides, the establishment of a multicast distribution tree will surely fail, causing abnormal multicast forwarding.

l           Because a hello message does not carry the PIM mode information, a router running PIM is unable to know what PIM mode its PIM neighbor is running. If different PIM modes are enabled on the RPF interface and on the corresponding interface of the RPF neighbor router, the establishment of a multicast distribution tree will surely fail, causing abnormal multicast forwarding.

l           The same PIM mode must run on the entire network. Otherwise, the establishment of a multicast distribution tree will surely fail, causing abnormal multicast forwarding.

III. Solution

1)         Check unicast routes. Use the display ip routing-table command to check whether a unicast route exists from the receiver host to the multicast source.

2)         Check that PIM is enabled on the interfaces, especially on the RPF interface. Use the display pim interface command to view the PIM information on each interface. If PIM is not enabled on the interface, use the pim dm or pim sm command to enable PIM-DM or PIM-SM.

3)         Check that the RPF neighbor is a PIM neighbor. Use the display pim neighbor command to view the PIM neighbor information.

4)         Check that PIM and IGMP are enabled on the interfaces directly connecting to the multicast source and to the receivers.

5)         Check that the same PIM mode is enabled on related interfaces. Use the display pim interface verbose command to check whether the same PIM mode is enabled on the RPF interface and the corresponding interface of the RPF neighbor router.

6)         Check that the same PIM mode is enabled on all the routers in the entire network. Make sure that the same PIM mode is enabled on all the routers: PIM-SM on all routers, or PIM-DM on all routers. In the case of PIM-SM, also check that the BSR and RP configurations are correct.

7.8.2  Multicast Data Abnormally Terminated on an Intermediate Router

I. Symptom

An intermediate router can receive multicast data successfully, but the data cannot reach the last hop router. An interface on the intermediate router receives data but no corresponding (S, G) entry is created in the PIM routing table.

II. Analysis

l           If a multicast forwarding boundary has been configured through the multicast boundary command, any multicast packet will be kept from crossing the boundary, and therefore no routing entry can be created in the PIM routing table.

l           In addition, the source-policy command is used to filter received multicast packets. If the multicast data fails to pass the ACL rule defined in this command, PIM cannot create the route entry, either.

III. Solution

1)         Check the multicast forwarding boundary configuration. Use the display current-configuration command to check the multicast forwarding boundary settings. Use the multicast boundary command to change the multicast forwarding boundary settings.

2)         Check the multicast filter configuration. Use the display current-configuration command to check the multicast filter configuration. Change the ACL rule defined in the source-policy command so that the source/group address of the multicast data can pass ACL filtering.

7.8.3  RPs Unable to Join SPT in PIM-SM

I. Symptom

An RPT cannot be established correctly, or the RPs cannot join the SPT to the multicast source.

II. Analysis

l           As the core of a PIM-SM domain, the RPs serve specific multicast groups. Multiple RPs can coexist in a network. Make sure that the RP information on all routers is exactly the same, and a specific group is mapped to the same RP. Otherwise, multicast forwarding will fail.

l           If the static RP mechanism is used, the same static RP command must be executed on all the routers in the entire network. Otherwise, multicast forwarding will fail.

III. Solution

1)         Check that a route is available to the RP. Carry out the display ip routing-table command to check whether a route is available on each router to the RP.

2)         Check the dynamic RP information. Use the display pim rp-info command to check whether the RP information is consistent on all routers.

3)         Check the configuration of static RPs. Use the display pim rp-info command to check whether the same static RP address has been configured on all the routers in the entire network.

7.8.4  No Unicast Route Between BSR and C-RPs in PIM-SM

I. Symptom

The C-RPs cannot unicast C-RP-Adv messages to the BSR, the BSR does not advertise bootstrap messages containing C-RP information, and the BSR has no unicast route to any C-RP. As a result, an RPT cannot be established correctly, or the DR cannot perform source register with the RP.

II. Analysis

l           The C-RPs periodically send C-RP-Adv messages to the BSR by unicast. If a C-RP has no unicast route to the BSR, the BSR cannot receive C-RP-Adv messages from that C-RP and the bootstrap message of the BSR will not contain the information of that C-RP.

l           In addition, if the BSR does not have a unicast route to a C-RP, it will discard the C-RP-Adv messages from that C-RP, and therefore the bootstrap messages of the BSR will not contain the information of that C-RP.

l           The RP is the core of a PIM-SM domain. Make sure that the RP information on all routers is exactly the same, a specific group G is mapped to the same RP, and unicast routes are available to the RP.

III. Solution

1)         Check whether routes to C-RPs, the RP and the BSR are available. Carry out the display ip routing-table command to check whether routes are available on each router to the RP and the BSR, and whether a route is available between the RP and the BSR. Make sure that each C-RP has a unicast route to the BSR, the BSR has a unicast route to each C-RP, and all the routers in the entire network have a unicast route to the RP.

2)         Check the RP and BSR information. PIM-SM needs the support of the RP and BSR. Use the display pim bsr-info command to check whether the BSR information is available on each router, and then use the display pim rp-info command to check whether the RP information is correct.

3)         View PIM neighboring relationships. Use the display pim neighbor command to check whether normal PIM neighboring relationships have been established among the routers.

 


Chapter 8  MSDP Configuration

When configuring MSDP, go to these sections for information you are interested in:

l           MSDP Overview

l           MSDP Configuration Task List

l           Displaying and Maintaining MSDP

l           MSDP Configuration Examples

l           Troubleshooting MSDP

 


 

8.1  MSDP Overview

8.1.1  Introduction to MSDP

Multicast source discovery protocol (MSDP) is an inter-domain multicast solution developed to address the interconnection of protocol independent multicast sparse mode (PIM-SM) domains. It is used to discover multicast source information in other PIM-SM domains.

In the basic PIM-SM mode, a multicast source registers only with the RP in the local PIM-SM domain, and the multicast source information of a domain is isolated from that of another domain. As a result, the RP is aware of the source information only within the local domain and a multicast distribution tree is built only within the local domain to deliver multicast data from a local multicast source to local receivers. If there is a mechanism that allows RPs of different PIM-SM domains to share their multicast source information, the local RP will be able to join multicast sources in other domains and multicast data can be transmitted among different domains.

MSDP achieves this objective. By establishing MSDP peer relationships among RPs of different PIM-SM domains, source active (SA) messages can be forwarded among domains and the multicast source information can be shared.

 

  Caution:

l      MSDP is applicable only if the intra-domain multicast protocol is PIM-SM.

l      MSDP is meaningful only for the any-source multicast (ASM) model.

 

8.1.2  How MSDP Works

I. MSDP peers

With one or more pairs of MSDP peers configured in the network, an MSDP interconnection map is formed, where the RPs of different PIM-SM domains are interconnected in series. Relayed by these MSDP peers, an SA message sent by an RP can be delivered to all other RPs.

Figure 8-1 Where MSDP peers are in the network

As shown in Figure 8-1, an MSDP peer can be created on any PIM-SM router. MSDP peers created on PIM-SM routers that assume different roles function differently.

1)         MSDP peers on RPs

l           Source-side MSDP peer: the MSDP peer nearest to the multicast source (Source), typically the source-side RP, like RP 1. The source-side RP creates SA messages and sends the messages to its remote MSDP peer to notify the MSDP peer of the locally registered multicast source information. A source-side MSDP peer must be created on the source-side RP; otherwise it will not be able to advertise the multicast source information out of the PIM-SM domain.

l           Receiver-side MSDP peer: the MSDP peer nearest to the receivers, typically the receiver-side RP, like RP 3. Upon receiving an SA message, the receiver-side MSDP peer resolves the multicast source information carried in the message and joins the SPT rooted at the source across the PIM-SM domain. When multicast data from the multicast source arrives, the receiver-side MSDP peer forwards the data to the receivers along the RPT.

l           Intermediate MSDP peer: an MSDP peer with multiple remote MSDP peers, like RP 2. An intermediate MSDP peer forwards SA messages received from one remote MSDP peer to other remote MSDP peers, functioning as a relay of multicast source information.

2)         MSDP peers created on common PIM-SM routers (other than RPs)

Router A and Router B are MSDP peers on common multicast routers. Such MSDP peers just forward received SA messages.

 

&  Note:

An RP is dynamically elected from C-RPs. To enhance network robustness, a PIM-SM network typically has more than one C-RP. As the RP election result is unpredictable, MSDP peering relationships should be built among all C-RPs so that the winner C-RP is always on the “MSDP interconnection map”, while loser C-RPs will assume the role of common PIM-SM routers on the “MSDP interconnection map”.

 

II. Implementing inter-domain multicast delivery by leveraging MSDP peers

As shown in Figure 8-2, an active source (Source) exists in the domain PIM-SM 1, and RP 1 has learned the existence of Source through multicast source registration. If RPs in PIM-SM 2 and PIM-SM 3 also wish to know the specific location of Source so that receiver hosts can receive multicast traffic originated from it, MSDP peering relationships should be established between RP 1 and RP 3 and between RP 3 and RP 2 respectively.

Figure 8-2 MSDP peering relationships

The process of implementing inter-domain multicast delivery by leveraging MSDP peers is as follows:

1)         When the multicast source in PIM-SM 1 sends the first multicast packet to multicast group G, DR 1 encapsulates the multicast data within a register message and sends the register message to RP 1. Then, RP 1 gets aware of the information related to the multicast source.

2)         As the source-side RP, RP 1 creates SA messages and periodically sends the SA messages to its MSDP peer. An SA message contains the source address (S), the multicast group address (G), and the address of the RP which has created this SA message (namely RP 1).

3)         On MSDP peers, each SA message is subject to a reverse path forwarding (RPF) check and multicast policy–based filtering, so that only SA messages that have arrived along the correct path and passed the filtering are received and forwarded. This avoids delivery loops of SA messages. In addition, you can configure MSDP peers into an MSDP mesh group so as to avoid flooding of SA messages between MSDP peers.

4)         SA messages are forwarded from one MSDP peer to another, and finally the information of the multicast source traverses all PIM-SM domains with MSDP peers (PIM-SM 2 and PIM-SM 3 in this example).

5)         Upon receiving the SA message created by RP 1, RP 2 in PIM-SM 2 checks whether there are any receivers for the multicast group in the domain.

l           If so, the RPT for the multicast group G is maintained between RP 2 and the receivers. RP 2 creates an (S, G) entry, and sends an (S, G) join message hop by hop towards DR 1 at the multicast source side, so that it can directly join the SPT rooted at the source over other PIM-SM domains. Then, the multicast data can flow along the SPT to RP 2 and is forwarded by RP 2 to the receivers along the RPT. Upon receiving the multicast traffic, the DR at the receiver side (DR 2) decides whether to initiate an RPT-to-SPT switchover process.

l           If no receivers for the group exist in the domain, RP 2 does not create an (S, G) entry and does not join the SPT rooted at the source.

 

&  Note:

l      An MSDP mesh group refers to a group of MSDP peers that have MSDP peering relationships among one another and share the same group name.

l      When using MSDP for inter-domain multicasting, once an RP receives information from a multicast source, it no longer relies on RPs in other PIM-SM domains. The receivers can bypass the RPs in other domains and directly join the SPT rooted at the multicast source.

 

III. RPF check rules for SA messages

As shown in Figure 8-3, there are five autonomous systems in the network, AS 1 through AS 5, with IGP enabled on routers within each AS and EBGP as the interoperation protocol among different ASs. Each AS contains at least one PIM-SM domain and each PIM-SM domain contains one or more RPs. MSDP peering relationships have been established among different RPs. RP 3, RP 4 and RP 5 are in an MSDP mesh group. On RP 7, RP 6 is configured as its static RPF peer.

 

&  Note:

If only one MSDP peer exists in a PIM-SM domain, this PIM-SM domain is also called a stub domain. For example, AS 4 in Figure 8-3 is a stub domain. The MSDP peer in a stub domain can have multiple remote MSDP peers at the same time. You can configure one or more remote MSDP peers as static RPF peers. When an RP receives an SA message from a static RPF peer, the RP accepts the SA message and forwards it to other peers without performing an RPF check.

 

Figure 8-3 Diagram for RPF check for SA messages

As illustrated in Figure 8-3, these MSDP peers dispose of SA messages according to the following RPF check rules:

1)         When RP 2 receives an SA message from RP 1

The source-side RP address carried in the SA message is the same as the MSDP peer address, which means that the MSDP peer the SA message came from is the RP that created the message. Therefore, RP 2 accepts the SA message and forwards it to its other MSDP peer (RP 3).

2)         When RP 3 receives the SA message from RP 2

Because the SA message is from an MSDP peer (RP 2) in the same AS, and the MSDP peer is the next hop on the optimal path to the source-side RP, RP 3 accepts the message and forwards it to other peers (RP 4 and RP 5).

3)         When RP 4 and RP 5 receive the SA message from RP 3

Because the SA message is from an MSDP peer (RP 3) in the same mesh group, RP 4 and RP 5 both accept the SA message, but they do not forward the message to other members in the mesh group; instead, they forward it to other MSDP peers (RP 6 in this example) out of the mesh group.

4)         When RP 6 receives the SA messages from RP 4 and RP 5 (suppose RP 5 has a higher IP address)

Although RP 4 and RP 5 are in the same AS (AS 3) and both are MSDP peers of RP 6, because RP 5 has a higher IP address, RP 6 accepts only the SA message from RP 5.

5)         When RP 7 receives the SA message from RP 6

Because the SA message is from a static RPF peer (RP 6), RP 7 accepts the SA message and forwards it to its other peer (RP 8).

6)         When RP 8 receives the SA message from RP 7

An EBGP route exists between two MSDP peers in different ASs. Because the SA message is from an MSDP peer (RP 7) in a different AS, and the MSDP peer is the next hop on the EBGP route to the source-side RP, RP 8 accepts the message and forwards it to its other peer (RP 9).

7)         When RP 9 receives the SA message from RP 8

Because RP 9 has only one MSDP peer, RP 9 accepts the SA message.

MSDP peers neither accept nor forward SA messages that arrive along paths other than those described above.

IV. Implementing intra-domain Anycast RP by leveraging MSDP peers

Anycast RP refers to an application that enables load balancing and redundancy backup between two or more RPs within a PIM-SM domain by configuring the same IP address for these RPs and establishing MSDP peering relationships between them.

As shown in Figure 8-4, within a PIM-SM domain, a multicast source sends multicast data to multicast group G, and Receiver is a member of the multicast group. To implement Anycast RP, configure the same IP address (known as anycast RP address, typically a private address) on Router A and Router B, configure these interfaces as C-RPs, and establish an MSDP peering relationship between Router A and Router B.

 

&  Note:

Usually an Anycast RP address is configured on a logical interface, like a loopback interface.

 

Figure 8-4 Typical network diagram of Anycast RP

The work process of Anycast RP is as follows:

1)         The multicast source registers with the nearest RP. In this example, Source registers with RP 1, with its multicast data encapsulated in the register message. When the register message arrives at RP 1, RP 1 decapsulates the message.

2)         Receivers send join messages to the nearest RP to join the RPT rooted at this RP. In this example, Receiver joins the RPT rooted at RP 2.

3)         RPs share the registered multicast information by means of SA messages. In this example, RP 1 creates an SA message and sends it to RP 2, with the multicast data from Source encapsulated in the SA message. When the SA message reaches RP 2, RP 2 decapsulates the message.

4)         Receivers receive the multicast data along the RPT and directly join the SPT rooted at the multicast source. In this example, RP 2 forwards the multicast data down the RPT. When Receiver receives the multicast data from Source, it directly joins the SPT rooted at Source.

The significance of Anycast RP is as follows:

l           Optimal RP path: A multicast source registers with the nearest RP so that an SPT with the optimal path is built; a receiver joins the nearest RP so that an RPT with the optimal path is built.

l           Load balancing between RPs: Each RP just needs to maintain part of the source/group information within the PIM-SM domain and forward part of the multicast data, thus achieving load balancing between different RPs.

l           Redundancy backup between RPs: When an RP fails, the multicast source previously registered with it or the receivers that previously joined it will register with or join another nearest RP, thus achieving redundancy backup between RPs.

 

  Caution:

l      Be sure to configure a 32-bit subnet mask (255.255.255.255) for the Anycast RP address, namely configure the Anycast RP address as a host address.

l      An MSDP peer address must be different from the Anycast RP address.

 

8.1.3  Protocols and Standards

MSDP is documented in the following specifications:

l           RFC 3618: Multicast Source Discovery Protocol (MSDP)

l           RFC 3446: Anycast Rendezvous Point (RP) mechanism using Protocol Independent Multicast (PIM) and Multicast Source Discovery Protocol (MSDP)

8.2  MSDP Configuration Task List

Complete these tasks to configure MSDP:

Configuring Basic Functions of MSDP:

l           Enabling MSDP (Required)

l           Creating an MSDP Peer Connection (Required)

l           Configuring a Static RPF Peer (Optional)

Configuring an MSDP Peer Connection:

l           Configuring MSDP Peer Description (Optional)

l           Configuring an MSDP Mesh Group (Optional)

l           Configuring MSDP Peer Connection Control (Optional)

Configuring SA Messages Related Parameters:

l           Configuring SA Message Content (Optional)

l           Configuring SA Request Messages (Optional)

l           Configuring an SA Message Filtering Rule (Optional)

l           Configuring SA Message Cache (Optional)

 

8.3  Configuring Basic Functions of MSDP

 

&  Note:

All the configuration tasks should be carried out on RPs in PIM-SM domains, and each of these RPs acts as an MSDP peer.

 

8.3.1  Configuration Prerequisites

Before configuring the basic functions of MSDP, complete the following tasks:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Configure PIM-SM to enable intra-domain multicast forwarding.

Before configuring the basic functions of MSDP, prepare the following data:

l           IP addresses of MSDP peers

l           Address prefix list for an RP address filtering policy

8.3.2  Enabling MSDP

Follow these steps to enable MSDP:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable IP multicast routing

multicast routing-enable

Required

Disabled by default

Enable MSDP and enter MSDP view

msdp

Required

Disabled by default
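
For example, the following commands enable IP multicast routing and MSDP on a router (the device name Router A used here is hypothetical):

<RouterA> system-view

[RouterA] multicast routing-enable

[RouterA] msdp

[RouterA-msdp] quit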

 

8.3.3  Creating an MSDP Peer Connection

An MSDP peering relationship is identified by an address pair, namely the address of the local MSDP peer and that of the remote MSDP peer. An MSDP peer connection must be created on both devices that are a pair of MSDP peers.

Follow these steps to create an MSDP peer connection:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MSDP view

msdp

Create an MSDP peer connection

peer peer-address connect-interface interface-type interface-number

Required

No MSDP peer connection created by default
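
# For example, create an MSDP peer connection between Router A and Router B (all device names, addresses, and interface numbers here are hypothetical). Assume Router A connects to Router B through VLAN-interface 100 with the IP address 10.1.1.1/24, and the corresponding interface on Router B has the IP address 10.1.1.2/24.

[RouterA] msdp

[RouterA-msdp] peer 10.1.1.2 connect-interface vlan-interface 100

# Create the same connection on Router B, with the addresses reversed.

[RouterB] msdp

[RouterB-msdp] peer 10.1.1.1 connect-interface vlan-interface 100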

 

&  Note:

If an interface of the router is shared by an MSDP peer and a BGP peer at the same time, we recommend that you configure the same IP address for the MSDP peer and the BGP peer.

 

8.3.4  Configuring a Static RPF Peer

Configuring static RPF peers avoids the RPF check of SA messages: SA messages received from a static RPF peer are accepted and forwarded without undergoing the RPF check.

Follow these steps to configure a static RPF peer:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MSDP view

msdp

Configure a static RPF peer

static-rpf-peer peer-address [ rp-policy ip-prefix-name ]

Required

No static RPF peer configured by default
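
# For example, configure 192.168.3.2 as a static RPF peer, and accept from it only SA messages whose originating RP address matches the IP prefix list rp-list (the prefix list name and all addresses here are hypothetical):

[RouterA] ip ip-prefix rp-list permit 10.1.1.1 32

[RouterA] msdp

[RouterA-msdp] static-rpf-peer 192.168.3.2 rp-policy rp-list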

 

&  Note:

If only one MSDP peer is configured on a router, this MSDP peer will be regarded as a static RPF peer.

 

8.4  Configuring an MSDP Peer Connection

8.4.1  Configuration Prerequisites

Before configuring an MSDP peer connection, complete the following tasks:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Configure the basic functions of MSDP.

Before configuring an MSDP peer connection, prepare the following data:

l           Description information of MSDP peers

l           Name of an MSDP mesh group

l           MSDP peer connection retry interval

8.4.2  Configuring MSDP Peer Description

With the MSDP peer description information, the administrator can easily distinguish different MSDP peers and thus better manage MSDP peers.

Follow these steps to configure description for an MSDP peer:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MSDP view

msdp

Configure description for an MSDP peer

peer peer-address description text

Required

No description for MSDP peers by default
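
# For example, configure a description for the MSDP peer 10.1.1.2 to indicate that it is Router B (the peer address and description text here are hypothetical):

[RouterA] msdp

[RouterA-msdp] peer 10.1.1.2 description router-b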

 

8.4.3  Configuring an MSDP Mesh Group

An AS may contain multiple MSDP peers. You can use the MSDP mesh group mechanism to avoid SA message flooding among these MSDP peers and optimize the multicast traffic.

On one hand, an MSDP peer in an MSDP mesh group forwards SA messages from outside the mesh group that have passed the RPF check to the other members in the mesh group; on the other hand, a mesh group member accepts SA messages from inside the group without performing an RPF check, and does not forward the message within the mesh group either. This mechanism not only avoids SA flooding but also simplifies the RPF check mechanism, because BGP is not needed to run between these MSDP peers.

By configuring the same mesh group name for multiple MSDP peers, you can create a mesh group with these MSDP peers.

Follow these steps to create an MSDP mesh group:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MSDP view

msdp

Create an MSDP peer as a mesh group member

peer peer-address mesh-group name

Required

An MSDP peer does not belong to any mesh group by default
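
# For example, assign three MSDP peers to a mesh group named net on the local router (the group name and all peer addresses here are hypothetical). The same mesh group name must be configured for the corresponding peers on the other member routers.

[RouterA] msdp

[RouterA-msdp] peer 10.1.1.2 mesh-group net

[RouterA-msdp] peer 10.1.2.2 mesh-group net

[RouterA-msdp] peer 10.1.3.2 mesh-group net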

 

&  Note:

l      Before grouping multiple routers into an MSDP mesh group, make sure that these routers are interconnected with one another.

l      If you configure more than one mesh group name on an MSDP peer, only the last configuration is effective.

 

8.4.4  Configuring MSDP Peer Connection Control

MSDP peers are interconnected over TCP (port number 639). You can flexibly control sessions between MSDP peers by manually deactivating and reactivating the MSDP peering connections. When the connection between two MSDP peers is deactivated, SA messages will no longer be delivered between them, and the TCP connection is closed without any connection setup retry, but the configuration information will remain unchanged.

When a new MSDP peer is created, or when a previously deactivated MSDP peer connection is reactivated, or when a previously failed MSDP peer attempts to resume operation, a TCP connection is required. You can flexibly adjust the interval between MSDP peering connection retries.

Follow these steps to configure MSDP peer connection control:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MSDP view

msdp

Deactivate an MSDP peer

shutdown peer-address

Optional

Active by default

Configure the interval between MSDP peer connection retries

timer retry interval

Optional

30 seconds by default
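
# For example, deactivate the connection to the MSDP peer 10.1.1.2 and set the connection retry interval to 60 seconds (the peer address here is hypothetical):

[RouterA] msdp

[RouterA-msdp] shutdown 10.1.1.2

[RouterA-msdp] timer retry 60

# To reactivate the connection to the peer, use the undo shutdown 10.1.1.2 command.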

 

8.5  Configuring SA Messages Related Parameters

8.5.1  Configuration Prerequisites

Before configuring SA message delivery, complete the following tasks:

l           Configure any unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Configure the basic functions of MSDP.

Before configuring SA message delivery, prepare the following data:

l           ACL as a filtering rule for SA request messages

l           ACL as an SA message creation rule

l           ACL as a filtering rule for receiving or forwarding SA messages

l           Minimum TTL value of multicast packets encapsulated in SA messages

l           Maximum SA message cache size

8.5.2  Configuring SA Message Content

Some multicast sources send multicast data at an interval longer than the aging time of (S, G) entries. In this case, the source-side DR has to encapsulate multicast data packet by packet in register messages and send them to the source-side RP. The source-side RP transmits the (S, G) information to the remote RP through SA messages. Then the remote RP sends join messages toward the source-side DR and builds an SPT. Because the (S, G) entries have timed out by the time the next multicast data arrives, remote receivers can never receive the multicast data from the multicast source.

If the source-side RP is enabled to encapsulate register messages in SA messages, when there is a multicast packet to deliver, the source-side RP encapsulates a register message containing the multicast packet in an SA message and sends it out. After receiving the SA message, the remote RP decapsulates the SA message and delivers the multicast data contained in the register message to the receivers along the RPT.

The MSDP peers deliver SA messages to one another. Upon receiving an SA message, a router performs RPF check on the message. If the router finds that the remote RP address is the same as the local RP address, it will discard the SA message. In the Anycast RP application, however, you need to configure RPs with the same IP address on two or more routers in the same PIM-SM domain, and configure these routers as MSDP peers to one another. Therefore, a logical RP address (namely the RP address on the logical interface) that is different from the actual RP address must be designated for SA messages so that the messages can pass the RPF check.

Follow these steps to configure the SA message content:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter MSDP view

msdp

Enable encapsulation of a register message

encap-data-enable

Optional

Disabled by default

Configure the interface address as the RP address in SA messages

originating-rp interface-type interface-number

Optional

PIM RP address by default
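
# For example, in an Anycast RP application, designate the address of interface LoopBack 10 (assumed here to carry an address different from the Anycast RP address) as the RP address in SA messages, and enable the encapsulation of register messages in SA messages:

[RouterA] msdp

[RouterA-msdp] originating-rp loopback 10

[RouterA-msdp] encap-data-enable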

 

8.5.3  Configuring SA Request Messages

By default, upon receiving a new Join message, a router does not send an SA request message to its designated MSDP peer; instead, it waits for the next SA message from its MSDP peer. This will cause the receiver to delay obtaining multicast source information. To enable a new receiver to get the currently active multicast source information as early as possible, you can configure routers to send SA request messages to the designated MSDP peers upon receiving a Join message of a new receiver.

Follow these steps to configure SA request messages:

To do...                                            Use the command...                                      Remarks
Enter system view                                   system-view                                             -
Enter MSDP view                                     msdp                                                    -
Enable the device to send SA request messages       peer peer-address request-sa-enable                     Optional; disabled by default
Configure a filtering rule for SA request messages  peer peer-address sa-request-policy [ acl acl-number ]  Optional; SA request messages are not filtered by default

 

  Caution:

Before you can enable the device to send SA requests, be sure to disable the SA message cache mechanism.

 

8.5.4  Configuring an SA Message Filtering Rule

By configuring an SA message creation rule, you can enable the router to filter the (S, G) entries to be advertised when creating an SA message, so that the propagation of messages of multicast sources is controlled.

In addition to controlling SA message creation, you can also configure filtering rules for forwarding and receiving SA messages, so as to control the propagation of multicast source information in the SA messages.

l           By configuring a filtering rule for receiving or forwarding SA messages, you can enable the router to filter the (S, G) forwarding entries to be advertised when receiving or forwarding an SA message, so that the propagation of multicast source information is controlled at SA message reception or forwarding.

l           An SA message with encapsulated multicast data can be forwarded to a designated MSDP peer only if the TTL value in its IP header exceeds the threshold. Therefore, you can control the forwarding of such an SA message by configuring the TTL threshold of the encapsulated data packet.
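
Taken together, the forwarding-side checks above can be sketched as follows (a simplified model with illustrative names; real ACL matching is richer than a single callback):

```python
from typing import Callable, Tuple

Entry = Tuple[str, str]  # (source, group)

def forward_sa(entry: Entry, export_permit: Callable[[Entry], bool],
               ttl: int = 255, minimum_ttl: int = 0,
               has_encap_data: bool = False) -> bool:
    """Decide whether an SA message carrying this (S, G) entry is forwarded
    to a given MSDP peer."""
    if not export_permit(entry):               # peer ... sa-policy export acl ...
        return False
    if has_encap_data and ttl <= minimum_ttl:  # peer ... minimum-ttl ttl-value:
        return False                           # encapsulated data is forwarded only
    return True                                # if its TTL exceeds the threshold

permit_all = lambda entry: True
print(forward_sa(("10.110.5.100", "225.1.1.1"), permit_all))   # True
print(forward_sa(("10.110.5.100", "225.1.1.1"), permit_all,
                 ttl=5, minimum_ttl=10, has_encap_data=True))  # False
```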

Follow these steps to configure SA message filtering rules:

To do...                                                            Use the command...                                                  Remarks
Enter system view                                                   system-view                                                         -
Enter MSDP view                                                     msdp                                                                -
Configure an SA message creation rule                               import-source [ acl acl-number ]                                    Required; no restrictions on (S, G) entries by default
Configure a filtering rule for receiving or forwarding SA messages  peer peer-address sa-policy { import | export } [ acl acl-number ]  Required; no filtering rule by default
Configure the minimum TTL value of multicast packets to be          peer peer-address minimum-ttl ttl-value                             Optional; 0 by default
encapsulated in SA messages

 

8.5.5  Configuring SA Message Cache

To reduce the time needed to obtain multicast source information, you can have SA messages cached on the router. However, the more SA messages are cached, the more router memory is used.

With the SA cache mechanism enabled, when receiving a new Join message, the router will not send an SA request message to its MSDP peer; instead, it acts as follows:

l           If there is no SA message in the cache, the router will wait for the SA message sent by its MSDP peer in the next cycle;

l           If there is an SA message in the cache, the router will obtain the information of all active sources directly from the SA message and join the corresponding SPT.

To protect the router against denial of service (DoS) attacks, you can configure the maximum number of SA messages the router can cache.
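
As a rough model of the cache behavior above (the class, types, and limit handling are illustrative assumptions, not the switch implementation):

```python
class SaCache:
    """Per-peer SA cache, bounded to resist DoS attacks (8192 entries by default)."""

    def __init__(self, maximum: int = 8192):
        self.maximum = maximum
        self.entries = {}  # (source, group) -> originating RP address

    def learn(self, source: str, group: str, rp: str) -> bool:
        """Cache an (S, G) entry from a received SA message, if room remains."""
        if len(self.entries) >= self.maximum:
            return False
        self.entries[(source, group)] = rp
        return True

    def on_new_join(self, group: str):
        """On a new Join: return cached active sources for the group, if any;
        otherwise the router waits for the next periodic SA message."""
        return [s for (s, g) in self.entries if g == group]

cache = SaCache()
cache.learn("10.110.5.100", "225.1.1.1", "10.1.1.1")
print(cache.on_new_join("225.1.1.1"))  # ['10.110.5.100'] -> join the SPT directly
print(cache.on_new_join("226.1.1.1"))  # [] -> wait for the next SA message
```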

Follow these steps to configure the SA message cache:

To do...                                                          Use the command...                           Remarks
Enter system view                                                 system-view                                  -
Enter MSDP view                                                   msdp                                         -
Enable the SA message cache mechanism                             cache-sa-enable                              Optional; enabled by default
Configure the maximum number of SA messages the router can cache  peer peer-address sa-cache-maximum sa-limit  Optional; 8192 by default

 

8.6  Displaying and Maintaining MSDP

To do...                                                  Use the command...                                                         Remarks
View brief information about MSDP peers                   display msdp brief [ state { connect | down | listen | shutdown | up } ]  Available in any view
View detailed information about the status of MSDP peers  display msdp peer-status [ peer-address ]                                  Available in any view
View (S, G) entry information in the MSDP cache           display msdp sa-cache [ group-address | source-address | as-number ] *    Available in any view
View the number of SA messages in the MSDP cache          display msdp sa-count [ as-number ]                                        Available in any view
Reset the TCP connection with an MSDP peer                reset msdp peer [ peer-address ]                                           Available in user view
Clear (S, G) entries in the MSDP cache                    reset msdp sa-cache [ group-address ]                                      Available in user view
Clear all statistics information of an MSDP peer          reset msdp statistics [ peer-address ]                                     Available in user view

 

8.7  MSDP Configuration Examples

8.7.1  Inter-AS Multicast Configuration Leveraging BGP Routes

I. Network requirements

l           There are two ASs in the network, AS 100 and AS 200 respectively. OSPF is running within each AS, and BGP is running between the two ASs.

l           PIM-SM 1 belongs to AS 100, while PIM-SM 2 and PIM-SM 3 belong to AS 200.

l           Each PIM-SM domain has zero or one multicast source and receiver. OSPF runs within each domain to provide unicast routes.

l           It is required that the respective Loopback 0 of Switch B, Switch C and Switch E be configured as the C-BSR and C-RP of the respective PIM-SM domains.

l           It is required that an MSDP peering relationship be set up between Switch B and Switch C through EBGP, and between Switch C and Switch E through IBGP.

II. Network diagram

Device     Interface     IP address        Device     Interface     IP address
Switch A   Vlan-int103   10.110.1.2/24     Switch D   Vlan-int104   10.110.4.2/24
           Vlan-int100   10.110.2.1/24                Vlan-int300   10.110.5.1/24
           Vlan-int200   10.110.3.1/24     Switch E   Vlan-int105   10.110.6.1/24
Switch B   Vlan-int103   10.110.1.1/24                Vlan-int102   192.168.3.2/24
           Vlan-int101   192.168.1.1/24               Loop0         3.3.3.3/32
           Loop0         1.1.1.1/32        Switch F   Vlan-int105   10.110.6.2/24
Switch C   Vlan-int104   10.110.4.1/24                Vlan-int400   10.110.7.1/24
           Vlan-int102   192.168.3.1/24    Source 1   -             10.110.2.100/24
           Vlan-int101   192.168.1.2/24    Source 2   -             10.110.5.100/24
           Loop0         2.2.2.2/32

Figure 8-5 Network diagram for inter-AS multicast configuration leveraging BGP routes

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 8-5. Detailed configuration steps are omitted here.

Configure OSPF for interconnection between the switches in each AS. Ensure network-layer interoperation between the ASs, and ensure that the switches dynamically update routing information through a unicast routing protocol. Detailed configuration steps are omitted here.

2)         Enable IP multicast routing, enable PIM-SM on each interface, and configure a PIM-SM domain border

# Enable IP multicast routing on Switch A, enable PIM-SM on each interface, and enable IGMP on the host-side interface VLAN-interface 200.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim sm

[SwitchA-Vlan-interface103] quit

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] pim sm

[SwitchA-Vlan-interface100] quit

[SwitchA] interface vlan-interface 200

[SwitchA-Vlan-interface200] igmp enable

[SwitchA-Vlan-interface200] pim sm

[SwitchA-Vlan-interface200] quit

The configuration on Switch B, Switch C, Switch D, Switch E, and Switch F is similar to the configuration on Switch A.

# Configure a PIM domain border on Switch B.

[SwitchB] interface vlan-interface 101

[SwitchB-Vlan-interface101] pim bsr-boundary

[SwitchB-Vlan-interface101] quit

The configuration on Switch C and Switch E is similar to the configuration on Switch B.

3)         Configure C-BSRs and C-RPs

# Configure Loopback 0 as a C-BSR and a C-RP on Switch B.

[SwitchB] pim

[SwitchB-pim] c-bsr loopback 0

[SwitchB-pim] c-rp loopback 0

[SwitchB-pim] quit

The configuration on Switch C and Switch E is similar to the configuration on Switch B.

4)         Configure BGP for mutual route redistribution between BGP and OSPF

# Configure EBGP on Switch B, and redistribute OSPF routes.

[SwitchB] bgp 100

[SwitchB-bgp] router-id 1.1.1.1

[SwitchB-bgp] peer 192.168.1.2 as-number 200

[SwitchB-bgp] import-route ospf 1

[SwitchB-bgp] quit

# Configure IBGP and EBGP on Switch C, and redistribute OSPF routes.

[SwitchC] bgp 200

[SwitchC-bgp] router-id 2.2.2.2

[SwitchC-bgp] peer 192.168.1.1 as-number 100

[SwitchC-bgp] peer 192.168.3.2 as-number 200

[SwitchC-bgp] import-route ospf 1

[SwitchC-bgp] quit

# Configure IBGP on Switch E, and redistribute OSPF routes.

[SwitchE] bgp 200

[SwitchE-bgp] router-id 3.3.3.3

[SwitchE-bgp] peer 192.168.3.1 as-number 200

[SwitchE-bgp] import-route ospf 1

[SwitchE-bgp] quit

# Redistribute BGP routes into OSPF on Switch B.

[SwitchB] ospf 1

[SwitchB-ospf-1] import-route bgp

[SwitchB-ospf-1] quit

The configuration on Switch C and Switch E is similar to the configuration on Switch B.

5)         Configure MSDP peers

# Configure an MSDP peer on Switch B.

[SwitchB] msdp

[SwitchB-msdp] peer 192.168.1.2 connect-interface vlan-interface 101

[SwitchB-msdp] quit

# Configure an MSDP peer on Switch C.

[SwitchC] msdp

[SwitchC-msdp] peer 192.168.1.1 connect-interface vlan-interface 101

[SwitchC-msdp] peer 192.168.3.2 connect-interface vlan-interface 102

[SwitchC-msdp] quit

# Configure MSDP peers on Switch E.

[SwitchE] msdp

[SwitchE-msdp] peer 192.168.3.1 connect-interface vlan-interface 102

[SwitchE-msdp] quit

6)         Verify the configuration

Carry out the display bgp peer command to view the BGP peering relationships between the switches. For example:

# View the information about BGP peering relationships on Switch B.

[SwitchB] display bgp peer

 

 BGP local router ID : 1.1.1.1

 Local AS number : 100

 Total number of peers : 1                 Peers in established state : 1

 

  Peer         V  AS  MsgRcvd  MsgSent  OutQ PrefRcv Up/Down  State

 

  192.168.1.2  4 200       24       21     0       6 00:13:09 Established

# View the information about BGP peering relationships on Switch C.

[SwitchC] display bgp peer

 

 BGP local router ID : 2.2.2.2

 Local AS number : 200

 Total number of peers : 2                 Peers in established state : 2

 

  Peer         V  AS  MsgRcvd  MsgSent  OutQ PrefRcv Up/Down  State

 

  192.168.1.1  4 100       18       16     0       1 00:12:04 Established

  192.168.3.2  4 200       21       20     0       6 00:12:05 Established

# View the information about BGP peering relationships on Switch E.

[SwitchE] display bgp peer

 

BGP local router ID : 3.3.3.3

 Local AS number : 200

 Total number of peers : 1                 Peers in established state : 1

 

  Peer         V  AS  MsgRcvd  MsgSent  OutQ PrefRcv Up/Down  State

 

  192.168.3.1  4 200      16       14     0       1 00:10:58 Established

To view the BGP routing table information on the switches, use the display bgp routing-table command. For example:

# View the BGP routing table information on Switch C.

[SwitchC] display bgp routing-table

 

 Total Number of Routes: 13

 

 BGP Local router ID is 2.2.2.2

 Status codes: * - valid, > - best, d - damped,

               h - history,  i - internal, s - suppressed, S - Stale

               Origin : i - IGP, e - EGP, ? - incomplete

      Network            NextHop        MED        LocPrf    PrefVal Path/Ogn

 

 *>   1.1.1.1/32        192.168.1.1   0                    0       100?

 *>i  2.2.2.2/32        192.168.3.2   0          100       0       ?

 *>   3.3.3.3/32        0.0.0.0       0                    0       ?

 *>   192.168.1.0       0.0.0.0       0                    0       ?

 *                      192.168.1.1   0                    0       100?

 *>   192.168.1.1/32    0.0.0.0       0                    0       ?

 *>   192.168.1.2/32    0.0.0.0       0                    0       ?

 *                      192.168.1.1   0                    0       100?

 *>   192.168.3.0       0.0.0.0       0                    0       ?

 * i                    192.168.3.2   0          100       0       ?

 *>   192.168.3.1/32    0.0.0.0       0                    0       ?

 *>   192.168.3.2/32    0.0.0.0       0                    0       ?

 * i                    192.168.3.2   0          100       0       ?

When the multicast source in PIM-SM 1 (Source 1) and the multicast source in PIM-SM 2 (Source 2) send multicast information, receivers in PIM-SM 1 and PIM-SM 3 can receive the multicast data. You can use the display msdp brief command to view the brief information of MSDP peering relationships between the switches. For example:

# View the brief information about MSDP peering relationships on Switch B.

[SwitchB] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  1            1            0            0            0            0

 

  Peer's Address     State     Up/Down time    AS     SA Count   Reset Count

  192.168.1.2        Up        00:12:27       200    13         0

# View the brief information about MSDP peering relationships on Switch C.

[SwitchC] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  2            2            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  192.168.3.2       Up       00:15:32        200    8          0

  192.168.1.1       Up       00:06:39        100    13         0

# View the brief information about MSDP peering relationships on Switch E.

[SwitchE] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  1            1            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  192.168.3.1       Up       01:07:08        200    8          0

# View the detailed MSDP peer information on Switch B.

[SwitchB] display msdp peer-status

  MSDP Peer 192.168.1.2, AS 200

  Description:

  Information about connection status:

    State: Up

    Up/down time: 00:15:47

    Resets: 0

    Connection interface: Vlan-interface101 (192.168.1.1)

    Number of sent/received messages: 16/16

    Number of discarded output messages: 0

    Elapsed time since last connection or counters clear: 00:17:51

  Information about (Source, Group)-based SA filtering policy:

    Import policy: none

    Export policy: none

  Information about SA-Requests:

    Policy to accept SA-Request messages: none

    Sending SA-Requests status: disable

  Minimum TTL to forward SA with encapsulated data: 0

  SAs learned from this peer: 0, SA-cache maximum for the peer: none

  Input queue size: 0, Output queue size: 0

  Counters for MSDP message:

    Count of RPF check failure: 0

    Incoming/outgoing SA messages: 0/0

    Incoming/outgoing SA requests: 0/0

    Incoming/outgoing SA responses: 0/0

    Incoming/outgoing data packets: 0/0

8.7.2  Inter-AS Multicast Configuration Leveraging Static RPF Peers

I. Network requirements

l           There are two ASs in the network, AS 100 and AS 200 respectively. OSPF is running within each AS, and no BGP is running between the two ASs.

l           PIM-SM 1 belongs to AS 100, while PIM-SM 2 and PIM-SM 3 belong to AS 200.

l           Each PIM-SM domain has zero or one multicast source and receiver. OSPF runs within each domain to provide unicast routes.

l           PIM-SM 2 and PIM-SM 3 are both stub domains, and BGP or MBGP is not required between these two domains and PIM-SM 1. Instead, static RPF peers are configured to avoid RPF check on SA messages.

l           It is required that the respective loopback 0 of Switch B, Switch C and Switch E be configured as the C-BSR and C-RP of the respective PIM-SM domains.

l           It is required that Switch C and Switch E be configured as static RPF peers of Switch B, and that Switch B be configured as the only static RPF peer of Switch C and Switch E, so that each switch accepts only those SA messages that come from its static RPF peer(s) and are permitted by the corresponding filtering policy.

II. Network diagram

Device     Interface     IP address        Device     Interface     IP address
Switch A   Vlan-int103   10.110.1.2/24     Switch D   Vlan-int104   10.110.4.2/24
           Vlan-int100   10.110.2.1/24                Vlan-int300   10.110.5.1/24
           Vlan-int200   10.110.3.1/24     Switch E   Vlan-int105   10.110.6.1/24
Switch B   Vlan-int103   10.110.1.1/24                Vlan-int102   192.168.3.2/24
           Vlan-int101   192.168.1.1/24               Loop0         3.3.3.3/32
           Vlan-int102   192.168.3.1/24    Switch F   Vlan-int105   10.110.6.2/24
           Loop0         1.1.1.1/32                   Vlan-int400   10.110.7.1/24
Switch C   Vlan-int101   192.168.1.2/24    Source 1   -             10.110.2.100/24
           Vlan-int104   10.110.4.1/24     Source 2   -             10.110.5.100/24
           Loop0         2.2.2.2/32

 

 

 

Figure 8-6 Network diagram for inter-AS multicast configuration leveraging static RPF peers

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 8-6. Detailed configuration steps are omitted here.

Configure OSPF for interconnection between the switches. Ensure the network-layer interoperation in each AS, and ensure the dynamic update of routing information among the switches through a unicast routing protocol. Detailed configuration steps are omitted here.

2)         Enable IP multicast routing, enable PIM-SM and IGMP, and configure a PIM-SM domain border

# Enable IP multicast routing on Switch A, enable PIM-SM on each interface, and enable IGMP on the host-side interface VLAN-interface 200.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim sm

[SwitchA-Vlan-interface103] quit

[SwitchA] interface vlan-interface 100

[SwitchA-Vlan-interface100] pim sm

[SwitchA-Vlan-interface100] quit

[SwitchA] interface vlan-interface 200

[SwitchA-Vlan-interface200] igmp enable

[SwitchA-Vlan-interface200] pim sm

[SwitchA-Vlan-interface200] quit

The configuration on Switch B, Switch C, Switch D, Switch E, and Switch F is similar to the configuration on Switch A.

# Configure PIM domain borders on Switch B.

[SwitchB] interface vlan-interface 102

[SwitchB-Vlan-interface102] pim bsr-boundary

[SwitchB-Vlan-interface102] quit

[SwitchB] interface vlan-interface 101

[SwitchB-Vlan-interface101] pim bsr-boundary

[SwitchB-Vlan-interface101] quit

The configuration on Switch C and Switch E is similar to the configuration on Switch B.

3)         Configure C-BSRs and C-RPs

# Configure Loopback 0 as a C-BSR and a C-RP on Switch B.

[SwitchB] pim

[SwitchB-pim] c-bsr loopback 0

[SwitchB-pim] c-rp loopback 0

[SwitchB-pim] quit

The configuration on Switch C and Switch E is similar to the configuration on Switch B.

4)         Configure a static RPF peer

# Configure Switch C and Switch E as static RPF peers of Switch B.

[SwitchB] ip ip-prefix list-df permit 192.168.0.0 16 greater-equal 16 less-equal 32

[SwitchB] msdp

[SwitchB-msdp] peer 192.168.3.1 connect-interface vlan-interface 102

[SwitchB-msdp] peer 192.168.1.2 connect-interface vlan-interface 101

[SwitchB-msdp] static-rpf-peer 192.168.3.1 rp-policy list-df

[SwitchB-msdp] static-rpf-peer 192.168.1.2 rp-policy list-df

[SwitchB-msdp] quit

# Configure Switch B as a static RPF peer of Switch C.

[SwitchC] ip ip-prefix list-c permit 192.168.0.0 16 greater-equal 16 less-equal 32

[SwitchC] msdp

[SwitchC-msdp] peer 192.168.3.2 connect-interface vlan-interface 102

[SwitchC-msdp] static-rpf-peer 192.168.3.2 rp-policy list-c

[SwitchC-msdp] quit

# Configure Switch B as a static RPF peer of Switch E.

[SwitchE] ip ip-prefix list-c permit 192.168.0.0 16 greater-equal 16 less-equal 32

[SwitchE] msdp

[SwitchE-msdp] peer 192.168.3.2 connect-interface vlan-interface 102

[SwitchE-msdp] static-rpf-peer 192.168.3.2 rp-policy list-c

[SwitchE-msdp] quit

5)         Verify the configuration

Carry out the display bgp peer command to view BGP peering relationships between the switches. Because BGP is not configured in this example, the command gives no output, indicating that no BGP peering relationship has been established between the switches.

When the multicast source in PIM-SM 1 (Source 1) and the multicast source in PIM-SM 2 (Source 2) send multicast information, receivers in PIM-SM 1 and PIM-SM 3 can receive the multicast data. You can use the display msdp brief command to view the brief information of MSDP peering relationships between the switches. For example:

# View the brief MSDP peer information on Switch B.

[SwitchB] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  2            2            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  192.168.3.2       Up       01:07:08        ?      8          0

  192.168.1.2       Up       00:16:39        ?      13         0

# View the brief MSDP peer information on Switch C.

[SwitchC] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  1            1            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  192.168.1.1       Up       01:07:09        ?      8          0

# View the brief MSDP peer information on Switch E.

[SwitchE] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  1            1            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  192.168.3.1       Up       00:16:40        ?      13         0

8.7.3  Anycast RP Configuration

I. Network requirements

l           The PIM-SM domain has multiple multicast sources and receivers. OSPF runs within the domain to provide unicast routes.

l           It is required to configure the anycast RP application so that the receiver-side DRs and the source-side DRs can initiate a Join message to their respective RPs that are the topologically nearest to them.

l           On Switch B and Switch D, configure the interface Loopback 10 as a C-BSR, and Loopback 20 as a C-RP.

l           The router ID of Switch B is 1.1.1.1, while the router ID of Switch D is 2.2.2.2. Set up an MSDP peering relationship between Switch B and Switch D.

II. Network diagram

Device     Interface     IP address        Device     Interface     IP address
Source 1   -             10.110.5.100/24   Switch C   Vlan-int101   192.168.1.2/24
Source 2   -             10.110.6.100/24              Vlan-int102   192.168.2.2/24
Switch A   Vlan-int300   10.110.5.1/24     Switch D   Vlan-int200   10.110.3.1/24
           Vlan-int103   10.110.2.2/24                Vlan-int104   10.110.4.1/24
Switch B   Vlan-int100   10.110.1.1/24                Vlan-int102   192.168.2.1/24
           Vlan-int103   10.110.2.1/24                Loop0         2.2.2.2/32
           Vlan-int101   192.168.1.1/24               Loop10        4.4.4.4/32
           Loop0         1.1.1.1/32                   Loop20        10.1.1.1/32
           Loop10        3.3.3.3/32        Switch E   Vlan-int400   10.110.6.1/24
           Loop20        10.1.1.1/32                  Vlan-int104   10.110.4.2/24

Figure 8-7 Network diagram for anycast RP configuration

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 8-7. Detailed configuration steps are omitted here.

Configure OSPF for interconnection between the switches. Ensure the network-layer interoperation among the switches, and ensure the dynamic update of routing information between the switches through a unicast routing protocol. Detailed configuration steps are omitted here.

2)         Enable IP multicast routing, and enable PIM-SM and IGMP

# Enable IP multicast routing on Switch B, enable PIM-SM on each interface, and enable IGMP on the host-side interface VLAN-interface100.

<SwitchB> system-view

[SwitchB] multicast routing-enable

[SwitchB] interface vlan-interface 100

[SwitchB-Vlan-interface100] igmp enable

[SwitchB-Vlan-interface100] pim sm

[SwitchB-Vlan-interface100] quit

[SwitchB] interface vlan-interface 103

[SwitchB-Vlan-interface103] pim sm

[SwitchB-Vlan-interface103] quit

[SwitchB] interface Vlan-interface 101

[SwitchB-Vlan-interface101] pim sm

[SwitchB-Vlan-interface101] quit

[SwitchB] interface loopback 0

[SwitchB-LoopBack0] pim sm

[SwitchB-LoopBack0] quit

[SwitchB] interface loopback 10

[SwitchB-LoopBack10] pim sm

[SwitchB-LoopBack10] quit

[SwitchB] interface loopback 20

[SwitchB-LoopBack20] pim sm

[SwitchB-LoopBack20] quit

The configuration on Switch A, Switch C, Switch D, and Switch E is similar to the configuration on Switch B.

3)         Configure C-BSRs and C-RPs

# Configure Loopback 10 as a C-BSR and Loopback 20 as a C-RP on Switch B.

[SwitchB] pim

[SwitchB-pim] c-bsr loopback 10

[SwitchB-pim] c-rp loopback 20

[SwitchB-pim] quit

The configuration on Switch D is similar to the configuration on Switch B.

4)         Configure MSDP peers

# Configure an MSDP peer on Loopback 0 of Switch B.

[SwitchB] msdp

[SwitchB-msdp] originating-rp loopback 0

[SwitchB-msdp] peer 2.2.2.2 connect-interface loopback 0

[SwitchB-msdp] quit

# Configure an MSDP peer on Loopback 0 of Switch D.

[SwitchD] msdp

[SwitchD-msdp] originating-rp loopback 0

[SwitchD-msdp] peer 1.1.1.1 connect-interface loopback 0

[SwitchD-msdp] quit

5)         Verify the configuration

You can use the display msdp brief command to view the brief information of MSDP peering relationships between the switches.

# View the brief MSDP peer information on Switch B.

[SwitchB] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  1            1            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  2.2.2.2           Up       00:10:17        ?      0          0

# View the brief MSDP peer information on Switch D.

[SwitchD] display msdp brief

MSDP Peer Brief Information

  Configured   Up           Listen       Connect      Shutdown     Down

  1            1            0            0            0            0

 

  Peer's Address    State    Up/Down time    AS     SA Count   Reset Count

  1.1.1.1           Up       00:10:18        ?      0          0

To view the PIM routing information on the switches, use the display pim routing-table command. When Source 1 (10.110.5.100/24) sends multicast data to multicast group G (225.1.1.1), Receiver 1 joins multicast group G. By comparing the PIM routing information displayed on Switch B with that displayed on Switch D, you can see that Switch B now acts as the RP for Source 1 and Receiver 1.

# View the PIM routing information on Switch B.

[SwitchB] display pim routing-table

 Total 1 (*, G) entry; 1 (S, G) entry

 

 (*, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: WC

     UpTime: 00:15:04

     Upstream interface: Register

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: igmp, UpTime: 00:15:04, Expires: -

 

 (10.110.5.100, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: SPT 2MSDP ACT

     UpTime: 00:46:28

     Upstream interface: Vlan-interface103

         Upstream neighbor: 10.110.2.2

         RPF prime neighbor: 10.110.2.2

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface100

             Protocol: pim-sm, UpTime:  - , Expires:  -

# View the PIM routing information on Switch D.

[SwitchD] display pim routing-table

No information is output on Switch D.

Receiver 1 has left multicast group G, and Source 1 has stopped sending multicast data to multicast group G. When Source 2 (10.110.6.100/24) sends multicast data to G, Receiver 2 joins G. By comparing the PIM routing information displayed on Switch B with that displayed on Switch D, you can see that Switch D now acts as the RP for Source 2 and Receiver 2.

# View the PIM routing information on Switch B.

[SwitchB] display pim routing-table

No information is output on Switch B.

# View the PIM routing information on Switch D.

[SwitchD] display pim routing-table

 Total 1 (*, G) entry; 1 (S, G) entry

 

 (*, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: WC

     UpTime: 00:12:07

     Upstream interface: Register

         Upstream neighbor: NULL

         RPF prime neighbor: NULL

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface200

             Protocol: igmp, UpTime: 00:12:07, Expires: -

 

 (10.110.6.100, 225.1.1.1)

     RP: 10.1.1.1 (local)

     Protocol: pim-sm, Flag: SPT 2MSDP ACT

     UpTime: 00:40:22

     Upstream interface: Vlan-interface104

         Upstream neighbor: 10.110.4.2

         RPF prime neighbor: 10.110.4.2

     Downstream interface(s) information:

     Total number of downstreams: 1

         1: Vlan-interface200

             Protocol: pim-sm, UpTime:  - , Expires:  -

8.8  Troubleshooting MSDP

8.8.1  MSDP Peers Stay in Down State

I. Symptom

The configured MSDP peers stay in the down state.

II. Analysis

l           After the configuration, an MSDP peering relationship is established over a TCP connection between the local interface address and the MSDP peer address.

l           The TCP connection setup will fail if there is an inconsistency between the local interface address and the MSDP peer address configured on the remote router.

l           If no route is available between the MSDP peers, the TCP connection setup will also fail.

III. Solution

1)         Check that a route is available between the routers. Carry out the display ip routing-table command to check whether the unicast route between the routers is correct.

2)         Check that a unicast route is available between the two routers that will become MSDP peers to each other.

3)         Verify the interface address consistency between the MSDP peers. Use the display current-configuration command to verify that the local interface address and the MSDP peer address of the remote router are the same.

8.8.2  No SA Entries in the Router’s SA Cache

I. Symptom

MSDP fails to send (S, G) entries through SA messages.

II. Analysis

l           The import-source command is used to control the (S, G) entries sent to MSDP peers through SA messages. If this command is executed without the acl-number argument, all the (S, G) entries will be filtered out; namely, no (S, G) entries of the local domain will be advertised.

l           If the import-source command is not executed, the system will advertise all the (S, G) entries of the local domain. If MSDP fails to send (S, G) entries through SA messages, check whether the import-source command has been correctly configured.

III. Solution

1)         Check that a route is available between the routers. Carry out the display ip routing-table command to check whether the unicast route between the routers is correct.

2)         Check that a unicast route is available between the two routers that will become MSDP peers to each other.

3)         Check the configuration of the import-source command and its acl-number argument, and make sure the ACL rule filters the appropriate (S, G) entries.
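
For reference, an ACL-based creation rule might look like the following sketch. The ACL number, source subnet, and group range here are hypothetical, and the exact ACL type and rule syntax accepted by import-source may vary with the software version; verify against your command reference.

```
[Sysname] acl number 3101
[Sysname-acl-adv-3101] rule permit ip source 10.110.5.0 0.0.0.255 destination 225.1.1.0 0.0.0.255
[Sysname-acl-adv-3101] quit
[Sysname] msdp
[Sysname-msdp] import-source acl 3101
```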

8.8.3  Inter-RP Communication Faults in Anycast RP Application

I. Symptom

RPs fail to exchange their locally registered (S, G) entries with one another in the Anycast RP application.

II. Analysis

l           In the Anycast RP application, RPs in the same PIM-SM domain are configured to be MSDP peers to achieve load balancing among the RPs.

l           An MSDP peer address must be different from the anycast RP address, and the C-BSR and C-RP must be configured on different devices or interfaces.

l           If the originating-rp command is executed, MSDP will replace the RP address in the SA messages with the address of the interface specified in the command.

l           When an MSDP peer receives an SA message, it performs RPF check on the message. If the MSDP peer finds that the remote RP address is the same as the local RP address, it will discard the SA message.

III. Solution

1)         Check that a route is available between the routers. Carry out the display ip routing-table command to check whether the unicast route between the routers is correct.

2)         Check that a unicast route is available between the two routers that will become MSDP peers to each other.

3)         Check the configuration of the originating-rp command. In the Anycast RP application environment, be sure to use the originating-rp command to configure the RP address in the SA messages, which must be the local interface address.

4)         Verify that the C-BSR address is different from the anycast RP address.

 


Chapter 9  Multicast Routing and Forwarding Configuration

When configuring multicast routing and forwarding, go to these sections for information you are interested in:

l           Multicast Routing and Forwarding Overview

l           Configuring Multicast Routing and Forwarding

l           Displaying and Maintaining Multicast Routing and Forwarding

l           Configuration Examples

l           Troubleshooting Multicast Routing and Forwarding

 

&  Note:

The term "router" in this document refers to a router in a generic sense or a Layer 3 switch running an IP routing protocol.

 

9.1  Multicast Routing and Forwarding Overview

9.1.1  Introduction to Multicast Routing and Forwarding

In multicast implementations, multicast routing and forwarding are implemented by three types of tables:

l           Each multicast routing protocol has its own multicast routing table, such as the PIM routing table.

l           The information of different multicast routing protocols forms a general multicast routing table.

l           The multicast forwarding table is directly used to control the forwarding of multicast packets.

A multicast forwarding table consists of a set of (S, G) entries, each indicating the routing information for delivering multicast data from a multicast source to a multicast group. If a router supports multiple multicast protocols, its multicast routing table will include routes generated by multiple protocols. The router chooses the optimal route from the multicast routing table based on the configured multicast routing and forwarding policy and installs the route entry into its multicast forwarding table.

9.1.2  RPF Mechanism

When creating multicast routing table entries, a multicast routing protocol uses the reverse path forwarding (RPF) mechanism to ensure multicast data delivery along the correct path.

The RPF mechanism enables routers to correctly forward multicast packets based on the multicast route configuration. In addition, the RPF mechanism also helps avoid data loops.

I. Implementation of the RPF mechanism

Upon receiving a multicast packet that a multicast source S sends to a multicast group G, the router first searches its multicast forwarding table:

1)         If the corresponding (S, G) entry exists, and the interface on which the packet actually arrived is the incoming interface in the multicast forwarding table, the router forwards the packet to all the outgoing interfaces.

2)         If the corresponding (S, G) entry exists, but the interface on which the packet actually arrived is not the incoming interface in the multicast forwarding table, the multicast packet is subject to an RPF check.

l           If the result of the RPF check shows that the RPF interface is the incoming interface of the existing (S, G) entry, this means that the (S, G) entry is correct but the packet arrived from a wrong path. The packet is to be discarded.

l           If the result of the RPF check shows that the RPF interface is not the incoming interface of the existing (S, G) entry, this means that the (S, G) entry is no longer valid. The router replaces the incoming interface of the (S, G) entry with the interface on which the packet actually arrived and forwards the packet to all the outgoing interfaces.

3)         If no corresponding (S, G) entry exists in the multicast forwarding table, the packet is also subject to an RPF check. The router creates an (S, G) entry based on the relevant routing information and using the RPF interface as the incoming interface, and installs the entry into the multicast forwarding table.

l           If the interface on which the packet actually arrived is the RPF interface, the RPF check is successful and the router forwards the packet to all the outgoing interfaces.

l           If the interface on which the packet actually arrived is not the RPF interface, the RPF check fails and the router discards the packet.

II. RPF check

The basis for an RPF check is a unicast route or a multicast static route. A unicast routing table contains the shortest path to each destination subnet, while a multicast static routing table lists the RPF routing information defined by the user through static configuration. A multicast routing protocol does not independently maintain any type of unicast route; instead, it relies on the existing unicast routing information or multicast static routes in creating multicast routing entries.

When performing an RPF check, a router searches its unicast routing table and multicast static routing table at the same time. The specific process is as follows:

1)         The router first chooses an optimal route from the unicast routing table and multicast static routing table:

l           The router automatically chooses an optimal unicast route by searching its unicast routing table, using the IP address of the “packet source” as the destination address. The outgoing interface in the corresponding routing entry is the RPF interface and the next hop is the RPF neighbor. The router considers the path along which the packet from the RPF neighbor arrived on the RPF interface to be the shortest path that leads back to the source.

l           The router automatically chooses an optimal multicast static route by searching its multicast static routing table, using the IP address of the “packet source” as the destination address. The corresponding routing entry explicitly defines the RPF interface and the RPF neighbor.

2)         Then, the router selects one from these two optimal routes as the RPF route. The selection is as follows:

l           If configured to use the longest match principle, the router selects the longest match route of the two; if the two routes have the same mask, the router selects the route with the higher priority; if the two routes have the same priority, the router selects the multicast static route.

l           If not configured to use the longest match principle, the router selects the route with a higher priority; if the two routes have the same priority, the router selects the multicast static route.

 

&  Note:

The above-mentioned “packet source” can mean different things in different situations:

l      For a packet traveling along the shortest path tree (SPT) from the multicast source to the receivers or the source-based tree from the multicast source to the rendezvous point (RP), “packet source” means the multicast source.

l      For a packet traveling along the rendezvous point tree (RPT) from the RP to the receivers, “packet source” means the RP.

l      For a bootstrap message from the bootstrap router (BSR), “packet source” means the BSR.

For details about the concepts of SPT, RPT and BSR, refer to PIM Configuration.

 

Assume that unicast routes exist in the network and no multicast static routes have been configured on Switch C, as shown in Figure 9-1. Multicast packets travel along the SPT from the multicast source to the receivers.

Figure 9-1 RPF check process

l           A multicast packet from Source arrives on VLAN-interface 1 of Switch C, and the corresponding forwarding entry does not exist in the multicast forwarding table of Switch C. Switch C performs an RPF check, and finds in its unicast routing table that the outgoing interface to 192.168.0.0/24 is VLAN-interface 2. This means that the interface on which the packet actually arrived is not the RPF interface. The RPF check fails and the packet is discarded.

l           A multicast packet from Source arrives on VLAN-interface 2 of Switch C, and the corresponding forwarding entry does not exist in the multicast forwarding table of Switch C. The switch performs an RPF check, and finds in its unicast routing table that the outgoing interface to 192.168.0.0/24 is the interface on which the packet actually arrived. The RPF check succeeds and the packet is forwarded.
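The RPF check result for a given source can be verified directly on the switch with the display multicast rpf-info command. For example, assuming the hypothetical source address 192.168.0.2 on the 192.168.0.0/24 subnet of Figure 9-1, you could check Switch C as follows:

[SwitchC] display multicast rpf-info 192.168.0.2

The RPF interface and RPF neighbor fields in the output identify the interface and upstream device on which arriving packets from that source will pass the RPF check; in the scenario of Figure 9-1, the RPF interface would be VLAN-interface 2.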

9.1.3  Multicast Static Routes

If the topology structure of a multicast network is the same as that of a unicast network, receivers can receive multicast data via unicast routes. However, the topology structure of a multicast network may differ from that of a unicast network, and some routers may support only unicast but not multicast. In this case, you can configure multicast static routes to provide multicast transmission paths that are different from those for unicast traffic. Note the following two points:

l           A multicast static route affects only RPF checks and does not guide multicast forwarding, so it is also called an RPF static route.

l           A multicast static route is effective on the multicast router on which it is configured, and will not be broadcast throughout the network or injected to other routers.

A multicast static route is an important basis for RPF checks. With a multicast static route configured on a router, the router searches the unicast routing table and the multicast static routing table simultaneously in an RPF check, chooses the optimal unicast RPF route and the optimal multicast static route respectively from the routing tables, and uses one of them as the RPF route after comparison.

Figure 9-2 Multicast static route

As shown in Figure 9-2, when no multicast static route is configured, Switch C’s RPF neighbor on the path back to Source is Switch A and the multicast information from Source travels along the path from Switch A to Switch C, which is the unicast route between the two switches; with a static route configured on Switch C and Switch B as Switch C’s RPF neighbor on the path back to Source, the multicast information from Source travels from Switch A to Switch B and then to Switch C.

9.1.4  Multicast Traceroute

The multicast traceroute utility is used to trace the path along which a multicast stream flows from the multicast source to the last-hop router.

I. Concepts in multicast traceroute

1)         Last-hop router: If a router has one of its interfaces connected to the subnet that the given destination address is on, and the router is able to forward multicast streams from the given multicast source onto that subnet, that router is called the last-hop router.

2)         First-hop router: the router that directly connects to the multicast source.

3)         Querier: the router requesting the multicast traceroute.

II. Introduction to multicast traceroute packets

A multicast traceroute packet is a special IGMP packet, which differs from common IGMP packets in that its IGMP Type field is set to 0x1F or 0x1E and that its destination IP address is a unicast address. There are three types of multicast traceroute packets:

l           Query, with the IGMP Type field set to 0x1F,

l           Request, with the IGMP Type field set to 0x1F, and

l           Response, with the IGMP Type field set to 0x1E.

III. Process of multicast traceroute

1)         The querier sends a query to the last-hop router.

2)         Upon receiving the query, the last-hop router turns the query packet into a request packet by adding a response data block containing its interface addresses and packet statistics to the end of the packet, and forwards the request packet via unicast to the previous hop for the given multicast source and group.

3)         From the last-hop router to the multicast source, each hop adds a response data block to the end of the request packet and unicasts it to the previous hop.

4)         When the first-hop router receives the request packet, it changes the packet type to indicate a response packet, and then sends the completed packet via unicast to the multicast traceroute querier.

9.2  Configuration Task List

Complete these tasks to configure multicast routing and forwarding:

Task

Remarks

Enabling IP Multicast Routing

Required

Configuring Multicast Static Routes

Optional

Configuring a Multicast Route Match Rule

Optional

Configuring Multicast Load Splitting

Optional

Configuring a Multicast Forwarding Range

Optional

Configuring the Multicast Forwarding Table Size

Optional

Tracing a Multicast Path

Optional

 

9.3  Configuring Multicast Routing and Forwarding

9.3.1  Configuration Prerequisites

Before configuring multicast routing and forwarding, complete the following tasks:

l           Configure a unicast routing protocol so that all devices in the domain are interoperable at the network layer.

l           Enable PIM (PIM-DM or PIM-SM).

Before configuring multicast routing and forwarding, prepare the following data:

l           The minimum TTL value required for a multicast packet to be forwarded

l           The maximum number of downstream nodes for a single route in a multicast forwarding table

l           The maximum number of routing entries in a multicast forwarding table

9.3.2  Enabling IP Multicast Routing

Before configuring any Layer 3 multicast functionality, you must enable IP multicast routing.

Follow these steps to enable IP multicast routing:

To do...

Use the command...

Remarks

Enter system view

system-view

Enable IP multicast routing

multicast routing-enable

Required

Disable by default

 

  Caution:

IP multicast does not support secondary IP address segments. That is, multicast data can be routed and forwarded only through primary IP addresses, not secondary addresses, even if secondary addresses are configured on interfaces.

For details about primary and secondary IP addresses, refer to IP Addressing and Performance Configuration.
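As a minimal sketch, IP multicast routing is enabled in system view (the device name Sysname is a placeholder):

<Sysname> system-view

[Sysname] multicast routing-enable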

 

9.3.3  Configuring Multicast Static Routes

Based on the application environment, a multicast static route has the following two functions:

l           Changing an RPF route. If the multicast topology structure is the same as the unicast topology in a network, the delivery path of multicast traffic is the same as in unicast. By configuring a multicast static route, you can change the RPF route so as to create a transmission path that is different from the unicast traffic transmission path.

l           Creating an RPF route. When a unicast route is interrupted, multicast traffic forwarding is stopped due to lack of an RPF route. By configuring a multicast static route, you can create an RPF route so that a multicast routing entry is created to guide multicast traffic forwarding.

Follow these steps to configure a multicast static route:

To do...

Use the command...

Remarks

Enter system view

system-view

Configure a multicast static route

ip rpf-route-static source-address { mask | mask-length } [ protocol [ process-id ] ] [ route-policy policy-name ] { rpf-nbr-address | interface-type interface-number } [ preference preference ] [ order order-number ]

Required

No multicast static route configured by default.

 

  Caution:

When configuring a multicast static route, you cannot designate an RPF neighbor by specifying an interface (by means of the interface-type interface-number argument combination) if that interface is a Loopback interface or VLAN interface; in this case, you can designate an RPF neighbor only by specifying an address (rpf-nbr-address).
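For example, the following hypothetical configuration designates 10.1.1.2 as the RPF neighbor for the multicast source network 10.2.0.0/16, with a preference of 20 (all addresses and the preference value are assumptions for illustration):

<Sysname> system-view

[Sysname] ip rpf-route-static 10.2.0.0 16 10.1.1.2 preference 20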

 

9.3.4  Configuring a Multicast Route Match Rule

If more than one route exists to the same subnet, the router by default chooses a route based on the sequence of route configuration. Alternatively, you can configure the router to select the route with the longest prefix match.

Follow these steps to configure a multicast route match rule:

To do...

Use the command...

Remarks

Enter system view

system-view

Configure the device to select a route based on the longest match

multicast longest-match

Required

In order of routing table entries by default
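For example, to have the switch prefer the route with the longest prefix match when selecting among multiple matching routes (the device name Sysname is a placeholder):

<Sysname> system-view

[Sysname] multicast longest-match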

 

9.3.5  Configuring Multicast Load Splitting

With the load splitting feature enabled, multicast traffic will be evenly distributed among different routes.

Follow these steps to configure multicast load splitting:

To do...

Use the command...

Remarks

Enter system view

system-view

Configure multicast load splitting

multicast load-splitting { source | source-group }

Required

Disabled by default
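For example, to split multicast traffic on a per-source basis (the device name Sysname is a placeholder; specify source-group instead to split traffic per source-and-group pair):

<Sysname> system-view

[Sysname] multicast load-splitting source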

 

9.3.6  Configuring a Multicast Forwarding Range

Multicast packets do not travel without a boundary in a network. The multicast data corresponding to each multicast group must be transmitted within a definite scope.

You can configure a forwarding boundary specific to a particular multicast group on all interfaces that support multicast forwarding. A multicast forwarding boundary sets the boundary condition for the multicast groups in the specified range. If the destination address of a multicast packet matches the set boundary condition, the packet will not be forwarded. Once a multicast boundary is configured on an interface, this interface can no longer forward multicast packets (including packets sent from the local device) or receive multicast packets.

Follow these steps to configure a multicast forwarding range:

To do...

Use the command...

Remarks

Enter system view

system-view

Enter interface view

interface interface-type interface-number

Configure a multicast forwarding boundary

multicast boundary group-address { mask | mask-length }

Required

No forwarding boundary by default
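For example, the following hypothetical configuration keeps multicast packets for groups in the administratively scoped range 239.0.0.0/8 from crossing VLAN-interface 100 (the interface and group range are assumptions for illustration):

<Sysname> system-view

[Sysname] interface vlan-interface 100

[Sysname-Vlan-interface100] multicast boundary 239.0.0.0 8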

 

9.3.7  Configuring the Multicast Forwarding Table Size

Too many multicast routing entries can exhaust the router’s memory and thus result in lower router performance. Therefore, the number of multicast routing entries should be limited. You can set a limit on the number of entries in the multicast routing table based on the actual networking situation and the performance requirements. In any case, the number of route entries must not exceed the maximum number allowed by the system. This maximum value varies with different device models.

If the configured maximum number of downstream nodes (namely, the maximum number of outgoing interfaces) for a routing entry in the multicast forwarding table is smaller than the current number, the downstream nodes in excess of the configured limit will not be deleted immediately; instead, they must be deleted by the multicast routing protocol. In addition, new downstream nodes cannot be added to such a routing entry in the forwarding table.

If the configured maximum number of routing entries in the multicast forwarding table is smaller than the current number, the routes in excess of the configured limit will not be deleted immediately; instead, they must be deleted by the multicast routing protocol. In addition, new routing entries cannot be installed into the forwarding table.

Follow these steps to configure the multicast forwarding table size:

To do...

Use the command...

Remarks

Enter system view

system-view

Configure the maximum number of downstream nodes for a single route in the multicast forwarding table

multicast forwarding-table downstream-limit limit

Optional

The default is 128.

Configure the maximum number of routing entries in the multicast forwarding table

multicast forwarding-table route-limit limit

Optional

The default is 1024.
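For example, to lower the limits to 64 downstream nodes per route and 512 routing entries (both values are arbitrary illustrations; choose values within the maximum allowed by your device model):

<Sysname> system-view

[Sysname] multicast forwarding-table downstream-limit 64

[Sysname] multicast forwarding-table route-limit 512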

 

9.3.8  Tracing a Multicast Path

You can run the mtracert command to trace the path along which multicast traffic flows from a given multicast source to the last-hop router, for troubleshooting purposes.

To do…

Use the command…

Remarks

Trace a multicast path

mtracert source-address [ [ last-hop-router-address ] group-address ]

Required

Available in any view
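For example, to trace the multicast path from a hypothetical source 10.1.1.100 for the hypothetical group 225.1.1.1 (both addresses are assumptions for illustration):

<Sysname> mtracert 10.1.1.100 225.1.1.1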

 

9.4  Displaying and Maintaining Multicast Routing and Forwarding

To do...

Use the command...

Remarks

View the multicast boundary information

display multicast boundary [ group-address [ mask | mask-length ] ] [ interface interface-type interface-number ]

Available in any view

View the multicast forwarding table information

display multicast forwarding-table [ source-address [ mask { mask | mask-length } ] | group-address [ mask { mask | mask-length } ] | incoming-interface { interface-type interface-number | register } | outgoing-interface { { exclude | include | match } { interface-type interface-number | register } } | statistics ] * [ port-info ]

Available in any view

View the multicast routing table information

display multicast routing-table [ source-address [ mask { mask | mask-length } ] | group-address [ mask { mask | mask-length } ] | incoming-interface { interface-type interface-number | register } | outgoing-interface { { exclude | include | match } { interface-type interface-number | register } } ] *

Available in any view

View the information of the multicast static routing table

display multicast routing-table static [ config ] [ source-address { mask-length | mask } ]

Available in any view

View the RPF route information of the specified multicast source

display multicast rpf-info source-address [ group-address ]

Available in any view

Clear forwarding entries from the multicast forwarding table

reset multicast forwarding-table { { source-address [ mask { mask | mask-length } ] | group-address [ mask { mask | mask-length } ] | incoming-interface { interface-type interface-number | register } } * | all }

Available in user view

Clear routing entries from the multicast routing table

reset multicast routing-table { { source-address [ mask { mask | mask-length } ] | group-address [ mask { mask | mask-length } ] | incoming-interface { interface-type interface-number | register } } * | all }

Available in user view

 

  Caution:

l      The reset command clears the information in the multicast routing table or the multicast forwarding table, and thus may cause failure of multicast transmission.

l      When a routing entry is deleted from the multicast routing table, the corresponding forwarding entry will also be deleted from the multicast forwarding table.

l      When a forwarding entry is deleted from the multicast forwarding table, the corresponding route entry will also be deleted from the multicast routing table.

 

9.5  Configuration Examples

9.5.1  Changing an RPF Route

I. Network requirements

l           PIM-DM runs in the network. All switches in the network support multicast.

l           Switch A, Switch B and Switch C run OSPF.

l           Typically, Receiver can receive the multicast data from Source through the path Switch A – Switch B, which is the same as the unicast route.

l           Perform the following configuration so that Receiver can receive the multicast data from Source through the path Switch A – Switch C – Switch B, which is different from the unicast route.

II. Network diagram

Figure 9-3 Network diagram for RPF route alternation configuration

III. Configuration procedure

1)         Configure the interface IP addresses and enable unicast routing on each switch

Configure the IP address and subnet mask for each interface as per Figure 9-3. The detailed configuration steps are omitted here.

Enable OSPF on the switches in the PIM-DM domain. Ensure the network-layer interoperation among the switches in the PIM-DM domain. Ensure that the switches can dynamically update their routing information by leveraging the unicast routing protocol. The specific configuration steps are omitted here.

2)         Enable IP multicast routing, and enable PIM-DM and IGMP

# Enable IP multicast routing on Switch B, enable PIM-DM on each interface, and enable IGMP on the host-side interface VLAN-interface 100.

<SwitchB> system-view

[SwitchB] multicast routing-enable

[SwitchB] interface vlan-interface 100

[SwitchB-Vlan-interface100] igmp enable

[SwitchB-Vlan-interface100] pim dm

[SwitchB-Vlan-interface100] quit

[SwitchB] interface vlan-interface 101

[SwitchB-Vlan-interface101] pim dm

[SwitchB-Vlan-interface101] quit

[SwitchB] interface vlan-interface 102

[SwitchB-Vlan-interface102] pim dm

[SwitchB-Vlan-interface102] quit

# Enable IP multicast routing on Switch A, and enable PIM-DM on each interface.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 200

[SwitchA-Vlan-interface200] pim dm

[SwitchA-Vlan-interface200] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim dm

[SwitchA-Vlan-interface102] quit

[SwitchA] interface vlan-interface 103

[SwitchA-Vlan-interface103] pim dm

[SwitchA-Vlan-interface103] quit

The configuration on Switch C is similar to the configuration on Switch A. The specific configuration steps are omitted here.

# Use the display multicast rpf-info command to view the RPF route to Source on Switch B.

[SwitchB] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface102, RPF neighbor: 30.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: igp

     Route selection rule: preference-preferred

     Load splitting rule: disable

As shown above, the current RPF route on Switch B is contributed by a unicast routing protocol and the RPF neighbor is Switch A.

3)         Configure a multicast static route

# Configure a multicast static route on Switch B, specifying Switch C as its RPF neighbor on the route to Source.

[SwitchB] ip rpf-route-static 50.1.1.100 24 20.1.1.2

4)         Verify the configuration

# Use the display multicast rpf-info command to view the information about the RPF route to Source on Switch B.

[SwitchB] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface101, RPF neighbor: 20.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: multicast static

     Route selection rule: preference-preferred

     Load splitting rule: disable

As shown above, the RPF route on Switch B has changed. It is now the configured multicast static route, and the RPF neighbor is now Switch C.

9.5.2  Creating an RPF Route

I. Network requirements

l           PIM-DM runs in the network and all switches in the network support IP multicast.

l           Switch B and Switch C run OSPF, and have no unicast routes to Switch A.

l           Typically, Receiver can receive the multicast data from Source 1 in the OSPF domain.

l           Perform the following configuration so that Receiver can receive multicast data from Source 2, which is outside the OSPF domain.

II. Network diagram

Figure 9-4 Network diagram for creating an RPF route

III. Configuration procedure

1)         Configure the interface IP addresses and unicast routing protocol for each switch

Configure the IP address and subnet mask for each interface as per Figure 9-4. The detailed configuration steps are omitted here.

Enable OSPF on Switch B and Switch C. Ensure the network-layer interoperation between Switch B and Switch C, and ensure that the switches can dynamically update their routing information by leveraging the unicast routing protocol. The specific configuration steps are omitted here.

2)         Enable IP multicast routing, and enable PIM-DM and IGMP

# Enable IP multicast routing on Switch C, enable PIM-DM on each interface, and enable IGMP on the host-side interface VLAN-interface 100.

<SwitchC> system-view

[SwitchC] multicast routing-enable

[SwitchC] interface vlan-interface 100

[SwitchC-Vlan-interface100] igmp enable

[SwitchC-Vlan-interface100] pim dm

[SwitchC-Vlan-interface100] quit

[SwitchC] interface vlan-interface 101

[SwitchC-Vlan-interface101] pim dm

[SwitchC-Vlan-interface101] quit

# Enable IP multicast routing on Switch A and enable PIM-DM on each interface.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] interface vlan-interface 300

[SwitchA-Vlan-interface300] pim dm

[SwitchA-Vlan-interface300] quit

[SwitchA] interface vlan-interface 102

[SwitchA-Vlan-interface102] pim dm

[SwitchA-Vlan-interface102] quit

The configuration on Switch B is similar to that on Switch A. The specific configuration steps are omitted here.

# Use the display multicast rpf-info command to view the RPF routes to Source 2 on Switch B and Switch C.

[SwitchB] display multicast rpf-info 50.1.1.100

[SwitchC] display multicast rpf-info 50.1.1.100

No information is displayed. This means that no RPF route to Source 2 exists on Switch B and Switch C.

3)         Configure a multicast static route

# Configure a multicast static route on Switch B, specifying Switch A as its RPF neighbor on the route to Source 2.

[SwitchB] ip rpf-route-static 50.1.1.100 24 30.1.1.2

# Configure a multicast static route on Switch C, specifying Switch B as its RPF neighbor on the route to Source 2.

[SwitchC] ip rpf-route-static 50.1.1.100 24 20.1.1.2

4)         Verify the configuration

# Use the display multicast rpf-info command to view the RPF routes to Source 2 on Switch B and Switch C.

[SwitchB] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface102, RPF neighbor: 30.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: multicast static

     Route selection rule: preference-preferred

     Load splitting rule: disable

[SwitchC] display multicast rpf-info 50.1.1.100

 RPF information about source 50.1.1.100:

     RPF interface: Vlan-interface101, RPF neighbor: 20.1.1.2

     Referenced route/mask: 50.1.1.0/24

     Referenced route type: multicast static

     Route selection rule: preference-preferred

     Load splitting rule: disable

As shown above, RPF routes to Source 2 now exist on Switch B and Switch C, and both are the configured multicast static routes.

9.6  Troubleshooting Multicast Routing and Forwarding

9.6.1  Multicast Static Route Failure

I. Symptom

No dynamic routing protocol is enabled on the routers, and the physical status and link layer status of interfaces are both up, but the multicast static route fails.

II. Analysis

l           If the multicast static route is not configured or updated correctly to match the current network conditions, the route entry will not exist in the multicast static route configuration or in the multicast routing table.

l           Even if a multicast static route is configured, it may fail to take effect if another route is selected as the optimal RPF route.

III. Solution

1)         Use the display multicast routing-table static config command to view the detailed configuration information of multicast static routes, and verify that the multicast static route has been correctly configured and the route entry exists.

2)         Use the display multicast routing-table static command to view the information of multicast static routes, and verify that the route entry exists in the multicast routing table.

3)         Check the next hop interface type of the multicast static route. If the interface is not a point-to-point interface, be sure to specify the next hop address, rather than the outgoing interface, when you configure the multicast static route.

4)         Check that the multicast static route matches the specified routing protocol. If a protocol was specified in multicast static route configuration, enter the display ip routing-table command to check if an identical route was added by the protocol.

5)         Check that the multicast static route matches the specified routing policy. If a routing policy was specified when the multicast static route was configured, enter the display route-policy command to check the configured routing policy.
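The checks in steps 1) and 2) above can be sketched as follows, assuming a hypothetical source network 10.1.1.0/24:

<Sysname> display multicast routing-table static config 10.1.1.0 24

<Sysname> display multicast routing-table static 10.1.1.0 24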

9.6.2  Multicast Data Fails to Reach Receivers

I. Symptom

The multicast data can reach some routers but fails to reach the last hop router.

II. Analysis

If a multicast forwarding boundary has been configured through the multicast boundary command, multicast packets for groups in the specified range will be kept from crossing the boundary.

III. Solution

1)         Use the display pim routing-table command to check whether the corresponding (S, G) entries exist on the router. If so, the router has received the multicast data; otherwise, the router has not received the data.

2)         Use the display multicast boundary command to view the multicast boundary information on the interfaces. Use the multicast boundary command to change the multicast forwarding boundary setting.

3)         In the case of PIM-SM, use the display current-configuration command to check the BSR and RP information.

 
