Released: 2024-07-26
MRP Technology White Paper
Copyright © 2024 New H3C Technologies Co., Ltd. All rights reserved.
No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd.
Except for the trademarks of New H3C Technologies Co., Ltd., any trademarks that may be mentioned in this document are the property of their respective owners.
The content in this article is generic technical information. Some information may not be applicable to the product you have purchased.
Overview
Technical background
Factories around the world now rely on Ethernet to meet industrial application requirements. Ethernet's simple deployment and high performance give it broad applicability in industrial automation scenarios. Industrial Ethernet departs from the star topology of traditional commercial Ethernet and adopts a ring topology to provide redundancy and fast recovery, satisfying the two key requirements of automated industrial production: reliability and real-time performance.
Traditional Ethernet loop-prevention protocols such as the Spanning Tree Protocol (STP) often take seconds to converge, and the convergence time grows with the network radius. In industrial networks, where the number of devices is typically large, slow-converging STP is therefore no longer suitable. The IEC has standardized the Media Redundancy Protocol (MRP, IEC 62439-2) for industrial Ethernet. MRP eliminates loops and prevents broadcast storms in a ring network, and it also provides redundancy for nodes and links. When a single point of failure occurs on a device or on a link between devices in the ring, MRP quickly restores network connectivity to meet the real-time and reliability requirements of industrial scenarios.
Benefits
· High Availability (HA)
MRP provides redundancy for nodes and links. When a single point of failure occurs in the ring network, MRP automatically adjusts device states to restore network availability.
· Fast convergence
According to the protocol standard, the convergence time of MRP after a ring network failure is almost independent of the number of devices in the ring. With the support of Ethernet device hardware, the maximum recovery time after a ring failure can be as low as 10 milliseconds.
· Simple configuration
Users only need to configure a few simple parameters on the devices in the ring network to start MRP. Moreover, to change the convergence time of an MRP ring, users do not need to calculate and configure complex parameters; a single command selects one of the maximum recovery times defined by the protocol standard.
· Good compatibility.
MRP follows the IEC 62439-2 standard defined by the IEC, so our company's devices can interoperate with devices from other vendors that also support the IEC 62439 standard.
MRP implementation
Concepts
MRP domains
MRP Redundancy Domain
In a network running MRP, each MRP-capable device has exactly two ports connecting it to other devices, and these connections ultimately form a ring. Such a ring is referred to as an MRP ring.
Figure 1 MRP Redundancy Domain Schematic Diagram
As shown in Figure 1, a device can belong to multiple ring topologies through its physical links, as Device A does. Within the same ring topology, multiple MRP rings can be formed over different sets of ports, such as the red ring and the blue ring in the figure. MRP distinguishes these rings through the concept of a redundancy domain: each MRP redundancy domain identifies one specific MRP ring. Different MRP redundancy domains can use different VLANs to carry MRP protocol frames, and a device can take different roles in different redundancy domains, enabling flexible planning of the MRP network.
On a device, a redundancy domain is identified in three ways: by redundancy domain ID, by redundancy domain name, and by redundancy domain UUID. The domain ID and domain name have only local significance; the UUID uniquely identifies the MRP redundancy domain across the network.
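As an illustration only, the three identifiers could be modeled as follows; the class and field names are hypothetical, not taken from IEC 62439-2:

```python
import uuid
from dataclasses import dataclass

@dataclass
class RedundancyDomain:
    """Hypothetical sketch of an MRP redundancy domain identity."""
    domain_id: int          # locally significant only
    name: str               # locally significant only
    domain_uuid: uuid.UUID  # unique identifier of the domain in the network

# Every device participating in the same MRP ring must be configured with
# the same UUID, while IDs and names may differ from device to device.
ring_a = RedundancyDomain(1, "ring-A", uuid.UUID("00000000-0000-0000-0000-0000000000a1"))
```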
MRP Interconnect Domain
In actual network deployments, multiple MRP rings may be connected together. In a network running MRP, two links are used to provide redundancy for the interconnection between MRP rings.
Figure 2 Schematic Diagram of MRP Interconnection Domain
As illustrated in Figure 2, the interconnection creates a new loop between the two MRP rings. To manage this loop, MRP introduces the concept of an interconnection domain, whose scope covers the four devices that form the loop between the two rings. As with redundancy domains, a device can belong to multiple MRP interconnection topologies (for example, Device A in Figure 3), and within the same interconnection topology, devices can establish multiple interconnection domains over different physical ports (for example, interconnection domains 1 and 2 in Figure 3). Different interconnection domains can use different VLANs to carry MRP protocol frames, and a device can take different roles in different interconnection domains, enabling flexible planning of the MRP network.
Figure 3 Schematic Diagram of Multiple MRP Interconnection Domains
On a device, an MRP interconnection domain is identified in three ways: by domain ID, by domain name, and by InID. The domain ID and domain name have only local significance; the InID uniquely identifies the MRP interconnection domain across the network.
MRP roles
In an MRP network, the network administrator manually assigns roles to the devices that support MRP. This section introduces the roles a device can take in the different types of MRP domains.
Roles in an MRP redundancy domain
In an MRP redundancy domain, a device can take one of the following roles:
· MRM (Media Redundancy Manager): The MRM monitors the ring and controls its links. In response to loop closure or link faults, the MRM either blocks one of its own ring ports or unblocks a blocked port, eliminating the loop when the ring is closed and restoring the communication path between nodes when a link in the ring fails.
· MRC (Media Redundancy Client): Any MRP device on the MRP ring other than the MRM. An MRC monitors the link state of its own ring ports and announces link changes to the MRM, which then reacts accordingly.
· MRA (Media Redundancy Automanager): In an MRP redundancy domain, any MRP-capable device can act as MRM or MRC, but at any given time exactly one device must operate as the MRM. If the MRM fails, no device manages the ring network, which undermines the reliability of MRP. To mitigate this, MRP introduces a transitional role, the MRA, which uses an election mechanism to provide redundancy for the manager role. After system initialization, all MRAs in the same MRP redundancy domain automatically contend until a unique MRM is elected, and the remaining MRAs operate as MRCs. If the elected MRM fails, the unaffected MRAs in the domain automatically re-contend to elect a new MRM. This improves the reliability of MRP.
In an MRP redundancy domain, the network administrator can configure MRP-capable devices as MRM or MRC. However, at any given time, exactly one device must operate as the MRM.
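A small sketch of the single-manager rule, assuming a hypothetical Role enumeration:

```python
from enum import Enum

class Role(Enum):
    MRM = "manager"       # Media Redundancy Manager
    MRC = "client"        # Media Redundancy Client
    MRA = "automanager"   # contends for MRM, otherwise acts as MRC

def check_single_manager(ring_roles: list[Role]) -> None:
    """Enforce the rule that exactly one device operates as MRM at a time."""
    managers = sum(1 for role in ring_roles if role is Role.MRM)
    if managers != 1:
        raise ValueError(f"expected exactly one operating MRM, found {managers}")

check_single_manager([Role.MRM, Role.MRC, Role.MRC, Role.MRC])  # passes
```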
Roles in an MRP interconnection domain
In the MRP interconnection domain, a device can assume one of the following two roles:
· MIM (Media Redundancy Interconnection Manager): The MIM monitors and controls the loop and links in the MRP interconnection domain. In response to loop closure or link faults, the MIM either blocks or unblocks its interconnection port, eliminating the loop when it is closed and restoring the communication path between nodes when a link in the interconnection domain fails.
· MIC (Media Redundancy Interconnection Client): Any device in the MRP interconnection domain other than the MIM. A MIC monitors the link state of the interconnection port on its own device and announces changes to the MIM, which then reacts accordingly.
In an MRP interconnection domain, the network administrator can configure MRP-capable devices as MIM or MIC. However, at any given time, exactly one device must operate as the MIM.
Ports of the MRP device
MRP ring port
The ports of a device running MRP that connect to the MRP ring are called MRP ring ports. As illustrated in Figure 4, within the same MRP redundancy domain, each device has exactly two ring ports. Ports other than the ring ports do not participate in the MRP protocol and are used only for purposes such as connecting user terminals.
Figure 4 Schematic diagram of the ring port in the MRP redundancy domain
A ring port supports the following two states:
· Blocked: In this state, the ring port discards all frames except MRP protocol frames and the frames defined by the IEEE 802.1D standard.
· Forwarding: In this state, the ring port forwards all frames.
In an MRP redundancy domain, the MRM ring port whose link comes up first is called the primary port; the ring port whose link comes up later is called the secondary port.
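The filtering rule for the two ring-port states could be sketched like this; the frame-classification flags are placeholders for whatever the forwarding plane actually provides:

```python
from dataclasses import dataclass
from enum import Enum

class PortState(Enum):
    BLOCKED = "blocked"
    FORWARDING = "forwarding"

@dataclass
class Frame:
    is_mrp_protocol: bool = False  # placeholder classification flags
    is_ieee_802_1d: bool = False

def ring_port_accepts(state: PortState, frame: Frame) -> bool:
    """A Forwarding ring port passes everything; a Blocked ring port passes
    only MRP protocol frames and frames defined by IEEE 802.1D."""
    if state is PortState.FORWARDING:
        return True
    return frame.is_mrp_protocol or frame.is_ieee_802_1d

assert ring_port_accepts(PortState.BLOCKED, Frame(is_mrp_protocol=True))
assert not ring_port_accepts(PortState.BLOCKED, Frame())  # data frame dropped
```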
MRP interconnection port
On a device running MRP, the port that connects to another MRP ring is called an MRP interconnection port. Within the same MRP interconnection domain, each device has exactly one interconnection port, as shown in Figure 5.
Figure 5 Schematic diagram of the interconnected ports in the MRP interconnection domain
An interconnection port supports the following two states:
· Blocked: In this state, the interconnection port discards all frames except MRP protocol frames, the frames defined by the IEEE 802.1D standard, and the link detection frames defined by the IEEE 802.1Q standard.
· Forwarding: In this state, the interconnection port forwards all frames.
MRP frames
Protocol frames of the MRP redundancy domain
In the MRP redundancy domain, the protocol frames that affect the operation of MRP and their functions are shown in Table 1.
Table 1 Protocol frames of the MRP redundancy domain
| Frame | Description |
| --- | --- |
| MRP_Test | Detection frame of the MRP redundancy domain. Generated by the MRM and used to determine whether the MRP redundancy domain forms a closed loop. |
| MRP_LinkChange | Generated by an MRC to announce detected changes of its own links to the MRM. Two subtypes exist: MRP_LinkUp, sent when the MRC detects that its link fault has recovered, and MRP_LinkDown, sent when the MRC detects a fault on its own link. |
| MRP_TopologyChange | Generated by the MRM to announce topology changes within the domain to all MRCs in the MRP redundancy domain. Upon receiving an MRP_TopologyChange frame, an MRC clears its Filtering Database (FDB) to relearn MAC addresses after the topology change. |
| MRP_TestMgrNAck | Negative response frame, generated by an MRA. Used during the manager election to notify another MRA that the sender's privilege level is higher than the receiver's. |
| MRP_TestPropagate | Generated by an MRA during the manager election to announce the information it has recorded about higher-privilege MRAs. |
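For reference in the sketches below, Table 1's frame types can be collected in an enumeration; the member names follow the frames, but the values are illustrative rather than on-wire type codes:

```python
from enum import Enum, auto

class RingFrameType(Enum):
    """Protocol frames of the MRP redundancy domain (Table 1); values are
    illustrative only, not the type codes defined by IEC 62439-2."""
    MRP_TEST = auto()             # MRM probe: is the ring closed?
    MRP_LINK_UP = auto()          # MRC: own link fault has recovered
    MRP_LINK_DOWN = auto()        # MRC: fault detected on own link
    MRP_TOPOLOGY_CHANGE = auto()  # MRM: tell MRCs to flush their FDBs
    MRP_TEST_MGR_NACK = auto()    # MRA: negative response during election
    MRP_TEST_PROPAGATE = auto()   # MRA: propagate recorded higher-privilege MRA
```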
Protocol frames of the MRP interconnection domain
In the MRP interconnection domain, the protocol frames that affect the operation of MRP and their functions are shown in Table 2.
Table 2 Protocol frames of the MRP interconnection domain
| Frame | Description |
| --- | --- |
| MRP_InTest | Detection frame of the MRP interconnection domain. Generated by the MIM and used to determine whether the MRP interconnection domain forms a closed loop. |
| MRP_InLinkChange | Generated by a MIC to announce detected link changes to the MIM. Two subtypes exist: MRP_InLinkUp, sent when the MIC detects that its link fault in the interconnection domain has recovered, and MRP_InLinkDown, sent when the MIC detects a fault on its own link in the interconnection domain. |
| MRP_InTopologyChange | Generated by the MIM to announce topology changes in the MRP interconnection domain to the MICs in the interconnection domain, as well as to the MRM and all MRCs in the connected MRP redundancy domains. Devices receiving an MRP_InTopologyChange frame clear their Filtering Database (FDB) to relearn MAC addresses after the topology change. |
Working Mechanism
Introduction to MRP Mechanism
The overall operating concept of MRP is as follows:
· Under the control of the MRM: when all links in the MRP redundancy domain are healthy, the MRM proactively blocks one of its ring ports to eliminate the loop, as shown in Figure 6. When a single point of failure occurs in the MRP redundancy domain, the ring port where the fault occurred goes physically down or becomes Blocked; the MRM then quickly unblocks its blocked ring port to keep the ring network connected, as shown in Figure 7.
· Under the control of the MIM: when all links in the MRP interconnection domain are healthy, the MIM proactively blocks its interconnection port to eliminate the loop, as shown in Figure 6. When a single point of failure occurs in the MRP interconnection domain, the interconnection port where the fault occurred goes physically down or becomes Blocked; the MIM then quickly unblocks the blocked interconnection port to keep the network connected, as shown in Figure 7.
Figure 6 MRP Network Link State Diagram
Figure 7 MRP Network Link Fault Schematic Diagram
Working mechanism of each role in the MRP network
Working mechanism of MRM/MRC in the MRP redundancy domain
As shown in Figure 8, after MRP starts running in the ring network, the MRM works as follows:
1. The MRM periodically transmits MRP_Test frames through its two ring ports and sets the primary port to the Forwarding state.
2. If the MRM receives its own MRP_Test frame on a ring port, the MRP ring is closed; the MRM sets the secondary port to the Blocked state to avoid a broadcast storm. If the MRM does not receive its own MRP_Test frame within the specified time, the MRP ring is open, meaning the loop is broken; the MRM then sets the secondary port to the Forwarding state so that the communication path on the MRP ring is not interrupted. A minimal sketch of this ring check follows the figure.
Figure 8 Schematic Diagram of the MRM Running Process
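The sketch below illustrates the ring check in steps 1 and 2, assuming hypothetical port helpers send_test(), block(), and forward(), and placeholder timing constants rather than the values defined by IEC 62439-2:

```python
import time

class MrmSketch:
    """Illustrative MRM ring-check loop; the port helpers are hypothetical."""

    TEST_INTERVAL = 0.02  # how often tick() would be scheduled (placeholder)
    TEST_TIMEOUT = 0.06   # placeholder, not an IEC 62439-2 default

    def __init__(self, primary_port, secondary_port):
        self.primary_port = primary_port      # set to Forwarding in step 1
        self.secondary_port = secondary_port
        self.last_own_test = time.monotonic()

    def tick(self):
        # Step 1: periodically emit MRP_Test on both ring ports.
        self.primary_port.send_test()
        self.secondary_port.send_test()
        if time.monotonic() - self.last_own_test > self.TEST_TIMEOUT:
            # Step 2, open ring: our probe never returned, so the secondary
            # port must forward to keep the communication path intact.
            self.secondary_port.forward()

    def on_own_test_received(self):
        # Step 2, closed ring: our probe returned, so block the secondary
        # port to break the loop and avoid a broadcast storm.
        self.last_own_test = time.monotonic()
        self.secondary_port.block()
```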
The working mechanism of MRC is as follows:
1. The MRC forwards received MRP protocol frames between its two ring ports: a frame received on one ring port is forwarded out of the other ring port, as shown in Figure 9. This mechanism lets MRP protocol frames circulate around the MRP redundancy domain, so that the MRM can receive the MRP_Test frames it sent and determine the state of the ring network.
Figure 9 Schematic diagram of MRP protocol frame forwarding by MRC
2. When the MRC detects a link fault on a ring port, it blocks the faulty ring port and transmits an MRP_LinkDown frame through its ring ports. When the MRC detects that the link fault on the ring port has recovered, it temporarily keeps the recovered ring port blocked to avoid creating a loop, while transmitting an MRP_LinkUp frame through its ring ports, as shown in Figure 10. Note that if a link segment in the MRP redundancy domain fails, both devices attached to that link detect the ring port link fault.
Figure 10 MRC Link State Detection Schematic Diagram
3. When the MRM receives an MRP_LinkChange frame:
¡ As shown in Figure 11, for the MRP_LinkDown frame:
- The MRM can choose not to process MRP_LinkDown frames directly and instead shorten the interval for transmitting MRP_Test frames to rapidly re-verify the link state. If the MRM does not receive its own MRP_Test frame within a certain time, it transmits MRP_TopologyChange frames through both ring ports and sets the originally blocked ring port to the Forwarding state. If it does receive its own MRP_Test frame, the MRP ring is still closed and no link fault exists, so the MRM's ring ports remain unchanged. The advantage of this method is that the MRM re-verifies the ring state for itself when a link fault is reported, preventing it from misjudging the ring state because of an erroneous MRP_LinkDown frame.
- The MRM can also choose to process the MRP_LinkDown frame immediately. In this method, the MRM directly sets the originally blocked ring port to the Forwarding state and transmits MRP_TopologyChange frames through both ring ports. The advantage of this method is that the MRM reacts quickly to link state changes in the ring, shortening the convergence time after a ring failure.
Figure 11 MRM Processing Schematic of Receiving MRP_LinkDown Frame
¡ As shown in Figure 12, for the MRP_LinkUp frame:
- The MRM can choose not to process MRP_LinkUp frames. In this mode, it ignores MRP_LinkUp frames until it receives its own MRP_Test frame; only then does it set the secondary port to the Blocked state and transmit MRP_TopologyChange frames through both ring ports. Otherwise, it keeps the ring port states unchanged. The advantage of this mode is that the MRM repeatedly verifies the ring state for itself, ensuring that the links in the ring have indeed recovered and preventing the MRM from misjudging the ring state because of an erroneous MRP_LinkUp frame.
- The MRM can also choose to process the MRP_LinkUp frame immediately. In this method, the MRM directly blocks its own secondary port and transmits MRP_TopologyChange frames through both ring ports. The benefit of this method is that the MRM reacts quickly to link state changes in the ring, shortening the convergence time after a ring failure.
Figure 12 MRM receives and processes MRP_LinkUp frame diagram
4. Upon receiving an MRP_TopologyChange frame, an MRC unblocks its temporarily blocked ports and clears its local FDB (Filtering Database) to relearn MAC addresses after the topology change.
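The two MRP_LinkDown handling options in step 3 differ only in whether the MRM trusts the frame immediately or re-verifies the ring first. A sketch under that framing, with hypothetical helper methods and a hypothetical react_immediately knob:

```python
class MrmLinkDownHandling:
    """Sketch of step 3's two options for MRP_LinkDown; all helper
    methods and the react_immediately flag are placeholders."""

    def __init__(self, mrm, react_immediately: bool):
        self.mrm = mrm
        self.react_immediately = react_immediately

    def on_link_down(self):
        if self.react_immediately:
            # Fast variant: unblock at once and announce the change,
            # minimizing convergence time after the failure.
            self.mrm.secondary_port.forward()
            self.mrm.send_topology_change_on_both_ring_ports()
        else:
            # Cautious variant: shorten the MRP_Test interval and let the
            # ring check of step 2 decide, which filters out spurious
            # MRP_LinkDown frames before any port state changes.
            self.mrm.shorten_test_interval()
```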
Election mechanism of MRM in the MRP redundancy domain
After the network administrator assigns the MRA role to MRP-capable devices, the MRAs operate as follows:
1. Initially, every MRA on the MRP ring acts as a temporary MRM and sends MRP_Test frames carrying its own privilege level through both of its ring ports. When an MRA receives an MRP_Test frame from another MRA, it only forwards the frame between its ring ports and compares the privilege level in the frame with its local one. If the received MRP_Test frame carries a lower privilege level than its own, the MRA sends a negative response frame, MRP_TestMgrNAck, carrying the MAC address of the lower-priority MRA. If the received MRP_Test frame carries a higher privilege level than its own, the MRA takes no immediate action.
2. An MRA then responds according to whether it receives a negative response frame carrying its own MAC address.
¡ If it receives a negative response frame carrying its own MAC address, the MRA records the MAC address and privilege level of the higher-priority MRA and changes its role to MRC. At the same time, it transmits MRP_TestPropagate frames through both ring ports.
¡ If the MRA receives no negative response frame carrying its own MAC address within a certain period, it is the highest-priority device. It then formally assumes the MRM role and begins managing the MRP ring.
3. When the elected MRM leaves the MRP ring or fails, the MRA election restarts: all MRAs repeat the above steps and elect a new MRM. A sketch of the comparison in step 1 follows.
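This sketch assumes that a lower numeric value means a higher privilege level with the MAC address as tie-breaker (an assumption about the encoding, not a statement of the standard), and its helper methods are hypothetical:

```python
def handle_mra_test_frame(device, frame):
    """Step 1 of the election, as seen by one MRA receiving another
    MRA's MRP_Test frame."""
    device.forward_between_ring_ports(frame)  # MRP_Test keeps circulating
    local = (device.priority, device.mac)
    remote = (frame.priority, frame.mac)
    if remote > local:
        # Assumed encoding: a larger tuple means a LOWER privilege level.
        # Answer with a negative response naming the lower-priority MRA.
        device.send_test_mgr_nack(loser_mac=frame.mac)
    # If the remote privilege level is higher, take no immediate action:
    # this MRA steps down to MRC only when a negative response frame
    # carrying its own MAC address arrives (step 2).
```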
For example, in the network shown in Figure 13, Device A and Device D are MRAs. Assume that after the election, Device A becomes the MRM and Device D becomes the MRC. The frame forwarding during the election and the resulting role changes are shown in Figure 14.
Figure 13 MRP network diagram with multiple MRAs
Figure 14 Frame forwarding and role changes during the election
In the above process, the MRP_TestMgrNAck and MRP_TestPropagate frames serve the following purposes:
· The MRP_TestMgrNAck frame prevents MRP_Test frames from other MRP rings from influencing the role election in the current ring. As shown in Figure 15, due to a configuration mistake, Device B is configured in the same MRP redundancy domain as Device A, and one of Device B's ring ports is directly connected to Device A. In this network, Device A can receive MRP_Test frames from Device B, but Device B cannot receive MRP_Test frames from Device A. If an MRA switched to MRC immediately upon receiving a higher-privilege MRP_Test frame during the election, then in the network in Figure 15, Device A could fail to win the MRM role in the normal MRP ring because of the overly high privilege level in Device B's MRP_Test frames, leaving the ring network unmanaged.
The MRP_TestMgrNAck frame solves this problem. An MRA sends a negative response frame only after receiving an MRP_Test frame with a lower privilege level than its own, and a lower-privilege MRA becomes an MRC only after receiving a negative response frame. In Figure 15, although Device A receives the higher-privilege MRP_Test frames sent by Device B, Device A does not send its own MRP_Test frames to Device B (MRP_Test frames are transmitted only on ring ports). Device B therefore never sends a negative response frame to Device A, and Device A's role election as an MRA proceeds unaffected by frames from other MRP rings.
Figure 15 The Purpose Illustration of MRP_TestMgrNAck Frame
· The MRP_TestPropagate frame is used to propagate the information an MRA has recorded about the higher-privilege device. Upon receiving an MRP_TestPropagate frame, an MRC compares its content with the privilege level of the higher-privilege MRA it has recorded locally. If the privilege level carried in the MRP_TestPropagate frame is higher, the MRC updates its local record of the privilege level and MAC address.
¡ The purpose of recording this information is to subsequently monitor the state of the MRM with that MAC address. If, over a period of time, an MRA that has become an MRC receives no MRP_Test frame from the monitored MAC address, the current MRM is no longer working; the MRA then reinitiates the role election to keep the MRP network operating normally.
¡ The purpose of transmitting the MRP_TestPropagate frame is to ensure that all MRAs acting as MRCs in the MRP redundancy domain monitor the highest-privilege MRA. This keeps their local records up to date and prevents stale records from mistakenly triggering re-elections. A sketch of this monitoring follows.
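The monitoring described above can be sketched as follows; the timeout constant and the restart_election() hook are placeholders, not standard values or interfaces:

```python
import time

class DemotedMra:
    """Sketch of an MRA that lost the election: as an MRC it keeps
    monitoring the recorded highest-privilege MRA by MAC address."""

    MRM_TIMEOUT = 0.2  # placeholder, not an IEC 62439-2 value

    def __init__(self, monitored_priority, monitored_mac):
        self.monitored = (monitored_priority, monitored_mac)
        self.last_seen = time.monotonic()

    def on_test_frame(self, frame):
        if frame.mac == self.monitored[1]:
            self.last_seen = time.monotonic()  # the elected MRM is alive

    def on_test_propagate(self, frame):
        # Update the local record if the frame announces an even
        # higher-privilege MRA (lower tuple = higher privilege, assumed).
        if (frame.priority, frame.mac) < self.monitored:
            self.monitored = (frame.priority, frame.mac)
            self.last_seen = time.monotonic()

    def tick(self):
        if time.monotonic() - self.last_seen > self.MRM_TIMEOUT:
            self.restart_election()  # the monitored MRM has gone silent

    def restart_election(self):
        """Hypothetical hook: re-enter the contention of steps 1 and 2."""
```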
Working mechanism of MIM/MIC in the MRP interconnection domain
Loop detection in an MRP interconnection domain supports two modes:
· LC-mode (LinkCheck mode): In this mode, each device in the MRP interconnection domain checks the state of its directly connected interconnection link and reports the result to the MIM for aggregation. The MIM then controls the MRP interconnection domain according to the collected link state information.
· RC-mode (RingCheck mode): In this mode, MRP does not judge the state of individual links in the MRP interconnection domain. The MIM directly checks whether the interconnection domain forms a closed loop and controls the MRP interconnection domain according to the ring state.
Working mechanism of each role in LinkCheck mode
After MRP starts running in the MRP interconnection domain, the MIM and MICs work as follows:
1. Once the link of the MIM's interconnection port comes up, the MIM sets the interconnection port to the Blocked state and transmits MRP_InLinkStatusPoll frames from both of its ring ports. The MRP_InLinkStatusPoll frame asks the MICs to report the results of their link state checks, allowing the MIM to collect the link states of the interconnection topology.
2. A MIC forwards the various MRP interconnection domain frames it receives, as shown in Figure 16, with different forwarding behaviors for different frames:
¡ MRP_InLinkStatusPoll frame: The MIC forwards MRP_InLinkStatusPoll frames received on a ring port to the interconnection port, but does not forward MRP_InLinkStatusPoll frames received on the interconnection port.
¡ MRP_InLinkChange frame: The MIC forwards MRP_InLinkChange frames received on a ring port to the interconnection port, but does not forward MRP_InLinkChange frames received on the interconnection port.
¡ MRP_InTopologyChange frame: The MIC forwards MRP_InTopologyChange frames received on the interconnection port to both ring ports, but does not forward MRP_InTopologyChange frames received on a ring port to the interconnection port.
Figure 16 Schematic Diagram of MIC Frame Forwarding Behavior
3. Upon receiving an MRP_InLinkStatusPoll frame, the MIC checks the link state of its own interconnection port and transmits the corresponding MRP_InLinkChange frame from its two ring ports, as shown in Figure 17.
¡ If the link of the MIC's interconnection port is up, the MIC transmits an MRP_InLinkUp frame announcing that the link is healthy. The MIC's interconnection port remains Blocked.
¡ If the link of the MIC's interconnection port is down, the MIC transmits an MRP_InLinkDown frame announcing a link fault. The MIC's interconnection port remains Blocked.
Figure 17 Schematic diagram of MIC transmitting MRP_InLinkChange frame
4. The MIM manages the MRP interconnection domain according to the received MRP_InLinkChange frames, as shown in Figure 18.
¡ If the MIM receives no MRP_InLinkDown frame, all interconnection links in the MRP interconnection domain are healthy. The MIM keeps its own interconnection port in the Blocked state and transmits MRP_InTopologyChange frames from both ring ports and the interconnection port.
¡ If the MIM receives an MRP_InLinkDown frame, a link fault exists in the MRP interconnection domain. The MIM sets its interconnection port to the Forwarding state and transmits MRP_InTopologyChange frames from both ring ports and the interconnection port.
Figure 18 Schematic diagram of MIM responding to MRP_InLinkChange frame
5. Upon receiving an MRP_InTopologyChange frame, the devices in the MRP interconnection domain and in the MRP redundancy domains connected to the MIM clear their FDBs to relearn MAC addresses after the topology change in the interconnection domain. If a MIC receives an MRP_InTopologyChange frame while the link of its interconnection port is up, it sets the interconnection port to the Forwarding state, as shown in Figure 19.
Figure 19 Schematic of MIC responding to MRP_InTopologyChange frame
6. Whenever a MIC subsequently detects a change in the link state of its interconnection port, it still transmits the corresponding MRP_InLinkChange frame. The resulting port state changes and frame forwarding in the MRP interconnection domain are the same as in steps 4 and 5 and are not repeated here.
In the above process, neither the MIM nor the MRM forwards MRP protocol frames between ring ports and interconnection ports, which prevents MRP protocol frames from circulating indefinitely. A sketch of the MIM side of this mode follows.
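This sketch covers steps 1 and 4 of LinkCheck mode. The port helpers are hypothetical, and a single MRP_InLinkDown is treated as sufficient evidence of a fault, which simplifies the standard's bookkeeping:

```python
class MimLinkCheckSketch:
    """Illustrative LC-mode MIM; port objects expose a hypothetical
    send(), block(), and forward() interface."""

    def __init__(self, ring_ports, interconnection_port):
        self.ring_ports = ring_ports
        self.interconnection_port = interconnection_port

    def start(self):
        # Step 1: block own interconnection port, then poll the MICs.
        self.interconnection_port.block()
        for port in self.ring_ports:
            port.send("MRP_InLinkStatusPoll")

    def on_in_link_change(self, kind):
        # Step 4: control the domain from the aggregated reports.
        if kind == "MRP_InLinkDown":
            self.interconnection_port.forward()  # fault: restore the path
        else:  # "MRP_InLinkUp": links healthy, keep the loop broken
            self.interconnection_port.block()
        self.announce_topology_change()

    def announce_topology_change(self):
        # MRP_InTopologyChange goes out both ring ports and the
        # interconnection port (steps 4 and 5).
        for port in (*self.ring_ports, self.interconnection_port):
            port.send("MRP_InTopologyChange")
```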
Working mechanism of each role in RingCheck mode
After MRP starts running in the MRP interconnection domain, the MIM and MICs work as follows:
1. After the link of the MIM's interconnection port comes up, the MIM sets the interconnection port to the Blocked state and begins periodically transmitting MRP_InTest frames on its two ring ports.
2. The MIC can forward various types of MRP interconnection domain frames it receives, as shown in Figure 20. The MIC has different forwarding behaviors for different frames.
¡ MRP_InTest frame: The MIC forwards MRP_InTest frames received on a ring port to the interconnection port and the other ring port, and forwards MRP_InTest frames received on the interconnection port to both ring ports.
¡ MRP_InLinkDown frame: The MIC forwards MRP_InLinkDown frames received on a ring port to the interconnection port, but does not forward MRP_InLinkDown frames received on the interconnection port.
¡ MRP_InTopologyChange frame: The MIC forwards MRP_InTopologyChange frames received on the interconnection port to both ring ports, but does not forward MRP_InTopologyChange frames received on a ring port to the interconnection port.
Figure 20 Schematic diagram of MIC frame forwarding behavior
3. After the link of a MIC's interconnection port comes up, the MIC sets the interconnection port to the Blocked state. Whenever the MIC detects a fault on the link of its own interconnection port, it transmits an MRP_InLinkDown frame on its two ring ports.
4. Within the period of time stipulated by the protocol:
¡ If the MIM receives the MRP_InTest frame it sent, the MRP interconnection domain is in a closed-loop state. The MIM keeps its own interconnection port in the Blocked state and transmits MRP_InTopologyChange frames from both of its ring ports and the interconnection port, as shown in Figure 21.
Figure 21 Schematic Diagram of Closed Loop State MIM Processing
¡ If the MIM does not receive its own MRP_InTest frame on its interconnection port, or if it receives an MRP_InLinkDown frame sent by a MIC, a link fault exists in the MRP interconnection domain and the loop is open. The MIM then sets its interconnection port to the Forwarding state and transmits MRP_InTopologyChange frames from both of its ring ports and the interconnection port, as shown in Figure 22.
Figure 22 Schematic Diagram of Open Loop State MIM Processing
5. Upon receiving an MRP_InTopologyChange frame, the devices in the MRP interconnection domain and in the MRP redundancy domains connected to the MIM clear their FDBs to relearn MAC addresses after the topology change in the interconnection domain. If a MIC receives an MRP_InTopologyChange frame while the link of its interconnection port is up, it sets the interconnection port to the Forwarding state, as shown in Figure 23.
Figure 23 Schematic Diagram of MIC's Response to MRP_InTopologyChange Frame
In the above process, neither the MRM nor the MIM forwards MRP protocol frames between ring ports and interconnection ports, which prevents MRP protocol frames from circulating indefinitely. A sketch of the RC-mode MIM follows.
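RingCheck mode can be sketched symmetrically to the LC-mode sketch above: the MIM decides from whether its own MRP_InTest probe returns. The timeout constant and port helpers are again placeholders:

```python
import time

class MimRingCheckSketch:
    """Illustrative RC-mode MIM (steps 1 and 4); port helpers and the
    timeout constant are hypothetical."""

    IN_TEST_TIMEOUT = 0.06  # placeholder, not an IEC 62439-2 default

    def __init__(self, ring_ports, interconnection_port):
        self.ring_ports = ring_ports
        self.interconnection_port = interconnection_port
        self.last_own_test = time.monotonic()

    def tick(self):
        # Step 1: periodically probe the interconnection loop.
        for port in self.ring_ports:
            port.send("MRP_InTest")
        if time.monotonic() - self.last_own_test > self.IN_TEST_TIMEOUT:
            self.open_loop()  # probe lost: treat the loop as open

    def on_own_in_test(self):
        # Step 4, closed loop: our probe returned, keep the loop broken.
        self.last_own_test = time.monotonic()
        self.interconnection_port.block()
        self.announce_topology_change()

    def on_in_link_down(self):
        self.open_loop()  # a MIC reported a link fault (also step 4)

    def open_loop(self):
        self.interconnection_port.forward()
        self.announce_topology_change()

    def announce_topology_change(self):
        for port in (*self.ring_ports, self.interconnection_port):
            port.send("MRP_InTopologyChange")
```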
Restrictions and guidelines
· Limited by device performance, the fastest convergence our company's devices currently achieve with MRP after a ring network failure is about 200 ms.
· MRP is suitable only for single-ring topologies and for topologies in which two devices connect one ring to other rings. Compared with ring network protocols such as ERPS and RRPP, which support multiple topology types, its application scenarios are relatively limited.
· After MRP is deployed in a ring network, no other ring network protocol can be deployed on its ring ports or interconnection ports.
· MRP provides high availability (HA) only for the links and device nodes in the ring network; unlike RPR, it cannot provide QoS services for traffic.
Typical Network Application
In a typical MRP application scenario, two MRP redundancy domains are connected through an MRP interconnection domain. When no fault exists anywhere in the network, the port states on each device are as shown in Figure 24.
Figure 24 Industrial ring network protocol MRP network diagram (network without faults)
When a single point failure occurs only in the MRP redundancy domain, the state of ports on each device is as shown in Figure 25.
When a single point failure occurs only in the MRP interconnected domain, the state of the ports on each device is as shown in Figure 26.
When a single point failure occurs in both the MRP redundancy domain and the MRP interconnection domain, the state of ports on each device is as shown in Figure 27.
Related documentation
IEC 62439-2:2016