Contents
Configuring PBR
· About PBR
· Restrictions and guidelines: PBR configuration
· PBR tasks at a glance
· Configuring a policy
¡ Creating a node
¡ Setting match criteria for a node
¡ Configuring actions for a node
· Specifying a policy for PBR
¡ Specifying a policy for local PBR
¡ Specifying a policy for interface PBR
· Enabling SNMP notifications for PBR
· Display and maintenance commands for PBR
About PBR
Policy-based routing (PBR) uses user-defined policies to route packets. A policy can specify parameters for packets that match specific criteria such as ACLs or that have specific lengths. The parameters include the next hop, output interface, default next hop, and default output interface.
Packet forwarding process
When the device receives a packet, the device searches the PBR policy for a matching node to forward that packet.
· If a matching node is found and its match mode is permit, the device performs the following operations:
a. Uses the next hops or output interfaces specified on the node to forward the packet.
b. Searches the routing table for a route (except the default route) to forward the packet if one of the following conditions exists:
- No next hops or output interfaces are specified on the node.
- Forwarding failed based on the next hops or output interfaces.
c. Uses the default next hops or default output interfaces specified on the node to forward the packet if one of the following conditions exists:
- No matching route was found in the routing table.
- The routing table-based forwarding failed.
d. Uses the default route to forward the packet if one of the following conditions exists:
- No default next hops or default output interfaces are specified on the node.
- The forwarding failed based on the default next hops or default output interfaces.
· The device performs a routing table lookup to forward the packet if either of the following conditions exists:
¡ No matching node is found.
¡ A matching node is found, but its match mode is deny.
PBR types
PBR includes the following types:
· Local PBR—Guides the forwarding of locally generated packets, such as ICMP packets generated by using the ping command.
· Interface PBR—Guides the forwarding of packets received on an interface.
Policy
A policy includes match criteria and actions to be taken on the matching packets. A policy can have one or multiple nodes as follows:
· Each node is identified by a node number. A smaller node number has a higher priority.
· A node contains if-match and apply clauses. An if-match clause specifies a match criterion, and an apply clause specifies an action.
· A node has a match mode of permit or deny.
A policy compares packets with nodes in priority order. If a packet matches the criteria on a node, it is processed by the action on the node. If the packet does not match any criteria on the node, it goes to the next node for a match. If the packet does not match the criteria on any node, the device performs a routing table lookup.
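For example, the following minimal sketch (with a hypothetical policy name, node numbers, next hop address, and ACL numbers, and assuming ACLs 3001 and 3002 already exist) creates a two-node policy. Packets matching ACL 3001 hit permit-mode node 5 and are forwarded to next hop 192.168.1.2. Packets matching ACL 3002 hit deny-mode node 10 and are forwarded by a routing table lookup instead:
system-view
policy-based-route aaa permit node 5
if-match acl 3001
apply next-hop 192.168.1.2
quit
policy-based-route aaa deny node 10
if-match acl 3002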
Relationship between if-match clauses
On a node, you can specify multiple types of if-match clauses but only one if-match clause for each type.
To match a node, a packet must match all types of if-match clauses on the node. Within each type, the packet needs to match only one of the specified criteria.
Relationship between apply clauses
You can specify multiple apply clauses for a node, but some of them might not be executed. For more information about the relationship between apply clauses, see "Configuring actions for a node."
Relationship between the match mode and clauses on the node
| Does a packet match all the if-match clauses on the node? | Permit mode | Deny mode |
|---|---|---|
| Yes | If the node contains apply clauses, PBR executes the apply clauses on the node. If PBR-based forwarding succeeds, PBR does not compare the packet with the next node. If PBR-based forwarding fails and the apply continue clause is not configured, PBR does not compare the packet with the next node. If PBR-based forwarding fails and the apply continue clause is configured, PBR compares the packet with the next node. If the node does not contain apply clauses, the device performs a routing table lookup for the packet. | The device performs a routing table lookup for the packet. |
| No | PBR compares the packet with the next node. | PBR compares the packet with the next node. |

NOTE: A node that has no if-match clauses matches any packet.
PBR and Track
PBR can work with the Track feature to dynamically adapt the availability status of an apply clause to the link status of a tracked object. The tracked object can be a next hop, output interface, default next hop, or default output interface.
· When the track entry associated with an object changes to Negative, the apply clause is invalid.
· When the track entry changes to Positive or NotReady, the apply clause is valid.
For more information about Track-PBR collaboration, see Network Management and Monitoring Configuration Guide.
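As a brief sketch (the policy name, node number, and next hop address are hypothetical, and track entry 1 is assumed to have already been created to monitor the next hop as described in Network Management and Monitoring Configuration Guide), you might associate a next hop with a track entry as follows:
system-view
policy-based-route aaa permit node 5
apply next-hop 192.168.1.2 track 1
When track entry 1 changes to Negative, the apply next-hop clause becomes invalid and the packet is forwarded as described in "Packet forwarding process."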
Restrictions and guidelines: PBR configuration
If the device performs forwarding in software, PBR does not process IP packets destined for the local device.
If the device performs forwarding in hardware and a packet destined for it matches a PBR policy, PBR will execute the apply clauses in the policy, including the clause for forwarding. When you configure a PBR policy, be careful to avoid this situation.
The device that supports fast forwarding uses the high-speed cache to process packets, and identifies a data flow with one or multiple packet fields. If the first packet of a data flow is successfully forwarded through PBR, the associated forwarding information is generated in the high-speed cache. Subsequent packets of the data flow can then be forwarded through the PBR fast forwarding table. This greatly reduces packet forwarding time and improves packet forwarding rate. For more information about fast forwarding, see fast forwarding configuration in Layer 3—IP Services Configuration Guide.
If a permit-mode PBR policy node successfully forwards a packet, the device cannot generate a fast forwarding entry when any of the following conditions exists:
· Match rules other than the packet 5-tuple are configured for the policy node, for example, the IP packet length match rule configured with the if-match packet-length clause.
· One or more of the apply access-vpn, apply next-hop (with the inbound-vpn keyword), and apply output-interface (with a broadcast or NBMA output interface specified) clauses are configured on the policy node. The device checks the following conditions in descending order of priority, and skips a condition if the corresponding clause is not configured:
¡ Multiple VPN instances are specified with the apply access-vpn clause for the policy node, but the first VPN instance is unavailable.
¡ A next hop with the inbound-vpn keyword is specified in the apply next-hop clause for the policy node, but the next hop is unreachable. In load sharing mode, a minimum of one next hop with the inbound-vpn keyword is required. In non-load-sharing mode, no available next hop without the inbound-vpn keyword can be configured before the next hop with the inbound-vpn keyword.
¡ A broadcast or NBMA output interface is specified with the apply output-interface clause on the policy node, but the output interface is unavailable. In load sharing mode, a minimum of one broadcast or NBMA output interface is required. In non-load-sharing mode, no available non-broadcast, non-NBMA output interface can be specified before the broadcast or NBMA output interface.
· None of the apply access-vpn vpn-instance, apply next-hop, apply output-interface, and apply srv6-policy clauses is configured, but one or more of the following clauses are configured on the policy node:
¡ apply default-next-hop
¡ apply default-output-interface
¡ apply default-srv6-policy
· The apply ip-df or apply precedence clause is configured on the policy node.
If a permit-mode PBR policy node fails to forward a packet and the apply continue clause is configured on that node, the device cannot generate a fast forwarding entry.
PBR tasks at a glance
To configure PBR, perform the following tasks:
1. Configuring a policy
a. Creating a node
b. Setting match criteria for a node
c. Configuring actions for a node
2. Specifying a policy for PBR
Choose the following tasks as needed:
¡ Specifying a policy for local PBR
¡ Specifying a policy for interface PBR
3. (Optional.) Enabling SNMP notifications for PBR
Configuring a policy
Creating a node
1. Enter system view.
system-view
2. Create a node for a policy, and enter its view.
policy-based-route policy-name [ deny | permit ] node node-number
Setting match criteria for a node
1. Enter system view.
system-view
2. Enter policy node view.
policy-based-route policy-name [ deny | permit ] node node-number
3. Set match criteria.
¡ Set an ACL match criterion.
if-match acl { acl-number | name acl-name }
By default, no ACL match criterion is set.
The ACL match criterion cannot match Layer 2 information.
¡ Set a packet length match criterion.
if-match packet-length min-len max-len
By default, no packet length match criterion is set.
¡ Set application group match criteria.
if-match app-group app-group-name&<1-6>
By default, no application group match criteria are set.
Application group match criteria apply only to interface PBR.
For more information about application groups, see APR configuration in Security Configuration Guide.
¡ Set service object group match criteria.
if-match object-group service object-group-name&<1-6>
By default, no service object group match criteria are set.
For more information about service object groups, see object group configuration in Security Configuration Guide.
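As an illustration of the procedure above (the policy name, node number, ACL number, and length range are hypothetical, and ACL 3001 is assumed to already exist), the following sketch combines two criterion types on one node:
system-view
policy-based-route aaa permit node 5
if-match acl 3001
if-match packet-length 64 1500
A packet matches node 5 only if it matches ACL 3001 and its length is in the range of 64 to 1500 bytes.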
Configuring actions for a node
About this task
The apply clauses allow you to specify the actions to be taken on matching packets on a node.
The following apply clauses determine the packet forwarding path, in descending order of priority:
· apply access-vpn
· apply next-hop
· apply output-interface
· apply default-next-hop
· apply default-output-interface
PBR supports the apply clauses in Table 1.
Table 1 Apply clauses supported in PBR

| Clause | Meaning | Remarks |
|---|---|---|
| apply precedence | Sets an IP precedence. | This clause is always executed. |
| apply ip-df df-value | Sets the Don't Fragment (DF) bit in the IP header. | This clause is always executed. |
| apply loadshare { default-next-hop \| default-output-interface \| next-hop \| output-interface } | Enables load sharing among multiple next hops, output interfaces, default next hops, or default output interfaces. | Multiple next hop, output interface, default next hop, or default output interface options operate in either primary/backup or load sharing mode. Using multiple next hops as an example: In primary/backup mode, a next hop is selected from all next hops in configuration order for packet forwarding, with the remaining next hops as backups. When the selected next hop fails, the next available next hop takes over. In load sharing mode, matching traffic is distributed across the available next hops. Traffic that does not match a fast forwarding entry is load shared per packet; traffic that matches a fast forwarding entry is load shared per flow. By default, primary/backup mode applies. For load sharing mode to take effect, make sure multiple next hops, output interfaces, default next hops, or default output interfaces are set in the policy. |
| apply access-vpn | Specifies the forwarding tables that can be used for the matching packets. | Use this clause only in special scenarios that require sending packets received from one network to another network, for example, from a VPN to the public network or from one VPN to another VPN. If a packet matches the forwarding table for a specified VPN instance, it is forwarded in that VPN. |
| apply remark-vpn | Enables the VPN remark action. | The VPN remark action marks the matching packets as belonging to the VPN instance to which they are forwarded based on the apply access-vpn vpn-instance command. All subsequent service modules of PBR handle the packets as belonging to the re-marked VPN instance. If the VPN remark action is not enabled, the forwarded matching packets are marked as belonging to the VPN instance or the public network from which they were received. The VPN remark action applies only to packets that have been successfully forwarded based on the apply access-vpn vpn-instance command. |
| apply next-hop and apply output-interface | Set next hops and output interfaces. | Only the apply clause with the highest priority is executed. |
| apply default-next-hop and apply default-output-interface | Set default next hops and default output interfaces. | Only the apply clause with the highest priority is executed. The clauses take effect only in the following cases: No next hops or output interfaces are set, or the next hops or output interfaces are invalid. The packet does not match any route in the routing table. |
| apply continue | Compares packets with the next node upon forwarding failure on the current node. | The apply continue clause applies when either of the following conditions exists: None of the clauses that set the packet forwarding path is configured. A clause that sets the packet forwarding path is configured but has become invalid, and a routing table lookup also fails for the matching packet. A clause might become invalid because the specified next hop is unreachable, packets cannot be forwarded in the specified VPN instance, or the specified output interface is down. |
Restrictions and guidelines
For outbound PBR, you can specify only one next hop and the next hop must be directly connected.
If you specify a next hop or default next hop, PBR periodically performs a lookup in the FIB table to determine its availability. Temporary service interruption might occur if PBR does not update the route immediately after its availability status changes.
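The following minimal sketch (hypothetical policy name, ACL, and addresses; ACL 3001 is assumed to already exist) illustrates the clause priority described in this section. Node 5 forwards matching packets to next hop 192.168.1.2. The default next hop 10.1.1.2 is used only if forwarding based on the next hop fails and a routing table lookup also fails to forward the packet:
system-view
policy-based-route aaa permit node 5
if-match acl 3001
apply next-hop 192.168.1.2
apply default-next-hop 10.1.1.2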
Configuring actions to modify packet fields
1. Enter system view.
system-view
2. Enter policy node view.
policy-based-route policy-name [ deny | permit ] node node-number
3. Configure actions.
¡ Set an IP precedence.
apply precedence { type | value }
By default, no IP precedence is specified.
¡ Set the DF bit in the IP header.
apply ip-df df-value
By default, the DF bit in the IP header is not set.
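A minimal sketch of this procedure (the policy name, node number, and ACL are hypothetical; the IP precedence value 5 and DF value 1 are examples only, see the command reference for the supported values):
system-view
policy-based-route aaa permit node 5
if-match acl 3001
apply precedence 5
apply ip-df 1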
Configuring actions to direct packet forwarding
1. Enter system view.
system-view
2. Enter policy node view.
policy-based-route policy-name [ deny | permit ] node node-number
3. Configure actions.
¡ Specify the forwarding tables that can be used for the matching packets.
apply access-vpn { public | vpn-instance vpn-instance-name&<1-4> }
By default, the device forwards matching packets by using the forwarding table for the network from which the packets are received.
You can repeat this command to specify the forwarding tables for the public network and VPN instances. The device forwards the matching packets by using the first available forwarding table selected in the order in which they are specified.
¡ Enable VPN remark action to mark the matching packets as belonging to the VPN instance to which they are forwarded based on the apply access-vpn vpn-instance command.
apply remark-vpn
By default, VPN remark action is not configured.
¡ Set next hops.
apply next-hop [ vpn-instance vpn-instance-name | inbound-vpn ] { ip-address [ direct ] [ track track-entry-number ] [ weight weight-value ] }&<1-4>
By default, no next hops are specified.
On a node, you can specify a maximum of four next hops for backup or load sharing in one command line or by executing this command multiple times.
If multiple next hops on the same subnet are specified for backup, the device first uses the subnet route for the next hops to forward packets when the primary next hop fails. If the subnet route is not available, the device selects a backup next hop.
¡ Enable load sharing among multiple next hops.
apply loadshare next-hop
By default, the next hops operate in primary/backup mode.
¡ Set output interfaces.
apply output-interface { interface-type interface-number [ track track-entry-number ] }&<1-4>
By default, no output interfaces are specified.
On a node, you can specify a maximum of four output interfaces for backup or load sharing in one command line or by executing this command multiple times.
¡ Enable load sharing among multiple output interfaces.
apply loadshare output-interface
By default, the output interfaces operate in primary/backup mode.
¡ Set default next hops.
apply default-next-hop [ vpn-instance vpn-instance-name | inbound-vpn ] { ip-address [ direct ] [ track track-entry-number ] }&<1-4>
By default, no default next hops are specified.
On a node, you can specify a maximum of four default next hops for backup or load sharing in one command line or by executing this command multiple times.
¡ Enable load sharing among multiple default next hops.
apply loadshare default-next-hop
By default, the default next hops operate in primary/backup mode.
¡ Set default output interfaces.
apply default-output-interface { interface-type interface-number [ track track-entry-number ] }&<1-4>
By default, no default output interfaces are specified.
On a node, you can specify a maximum of four default output interfaces for backup or load sharing in one command line or by executing this command multiple times.
¡ Enable load sharing among multiple default output interfaces.
apply loadshare default-output-interface
By default, the default output interfaces operate in primary/backup mode.
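For example, the following sketch (hypothetical policy name, ACL, and addresses) sets two next hops on one node and enables load sharing between them. Without the apply loadshare next-hop command, the two next hops would operate in primary/backup mode:
system-view
policy-based-route aaa permit node 5
if-match acl 3001
apply next-hop 192.168.1.2 192.168.1.3
apply loadshare next-hop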
Comparing packets with the next node upon forwarding failure on the current node
1. Enter system view.
system-view
2. Enter policy node view.
policy-based-route policy-name [ deny | permit ] node node-number
3. Compare packets with the next node upon forwarding failure on the current node.
apply continue
By default, PBR does not compare packets with the next node upon forwarding failure on the current node.
This command takes effect only when the match mode of the node is permit.
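A hedged sketch (hypothetical policy name, ACLs, and addresses): if node 5 fails to forward a matching packet based on next hop 192.168.1.2 and a routing table lookup also fails, the apply continue clause makes PBR compare the packet with node 10, which tries next hop 10.1.1.2:
system-view
policy-based-route aaa permit node 5
if-match acl 3001
apply next-hop 192.168.1.2
apply continue
quit
policy-based-route aaa permit node 10
if-match acl 3001
apply next-hop 10.1.1.2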
Specifying a policy for PBR
Specifying a policy for local PBR
About this task
Perform this task to specify a policy for local PBR to guide the forwarding of locally generated packets.
Restrictions and guidelines
You can specify only one policy for local PBR and must make sure the specified policy already exists. Before you apply a new policy, you must first remove the current policy.
Local PBR might affect local services such as ping and Telnet. When you use local PBR, make sure you fully understand its impact on local services of the device.
Procedure
1. Enter system view.
system-view
2. Specify a policy for local PBR.
ip local policy-based-route policy-name
By default, local PBR is not enabled.
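A minimal sketch (the policy name is hypothetical, and policy aaa is assumed to already exist):
system-view
ip local policy-based-route aaa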
Specifying a policy for interface PBR
About this task
Perform this task to apply a policy to an interface to guide the forwarding of packets received on the interface.
Restrictions and guidelines
You can apply only one policy to an interface and must make sure the specified policy already exists. Before you can apply a new interface PBR policy to an interface, you must first remove the current policy from the interface.
You can apply a policy to multiple interfaces.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Specify a policy for interface PBR.
ip policy-based-route policy-name
By default, no policy is applied to an interface.
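A minimal sketch (the interface and policy name are hypothetical, and policy aaa is assumed to already exist):
system-view
interface gigabitethernet 1/0/1
ip policy-based-route aaa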
Enabling SNMP notifications for PBR
About this task
Perform this task to enable SNMP notifications for PBR. PBR can generate notifications and send them to the SNMP module when the next hop becomes invalid. For the PBR notifications to be sent correctly, you must also configure SNMP on the device. For more information about configuring SNMP, see the network management and monitoring configuration guide for the device.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications for PBR.
snmp-agent trap enable policy-based-route
By default, SNMP notifications are enabled for PBR.
Display and maintenance commands for PBR
Execute display commands in any view and reset commands in user view.
| Task | Command |
|---|---|
| Display PBR policy information. | display ip policy-based-route [ policy policy-name ] |
| Display interface PBR configuration and statistics. | In standalone mode: display ip policy-based-route interface interface-type interface-number [ slot slot-number [ cpu cpu-number ] ]. In IRF mode: display ip policy-based-route interface interface-type interface-number [ chassis chassis-number slot slot-number [ cpu cpu-number ] ] |
| Display local PBR configuration and statistics. | In standalone mode: display ip policy-based-route local [ slot slot-number [ cpu cpu-number ] ]. In IRF mode: display ip policy-based-route local [ chassis chassis-number slot slot-number [ cpu cpu-number ] ] |
| Display PBR configuration. | display ip policy-based-route setup |
| Clear PBR statistics. | reset ip policy-based-route statistics [ policy policy-name ] |