05-QoS Volume

01-QoS Configuration


Table of Contents

1 QoS Overview

Introduction to QoS

Networks Without QoS Guarantee

QoS Requirements of New Applications

Congestion: Causes, Impacts, and Countermeasures

Causes

Impacts

Countermeasures

QoS Technology Implementations

End-to-End QoS

Traffic Classification

Packet Precedences

2 QoS Policy Configuration

QoS Policy Overview

Configuring a QoS Policy

Defining a Class

Defining a Traffic Behavior

Defining a Policy

QoS Policy Configuration Example

Applying the QoS Policy

Applying the QoS Policy to an Interface

Applying the QoS Policy to a VLAN

Applying the QoS Policy Globally

Support for QoS actions in different directions

Displaying and Maintaining QoS Policies

3 Priority Mapping Configuration

Priority Mapping Overview

Introduction to Priority Mapping

Concepts

Introduction to Priority Mapping Tables

Configuring a Priority Mapping Table

Configuration Prerequisites

Configuration Procedure

Configuration Example

Configuring the Priority for a Port

Configuration Prerequisites

Configuration Procedure

Configuration Example

Configuring the Trusted Precedence Type for a Port

Configuration Prerequisites

Configuration Procedure

Configuration Example

Displaying and Maintaining Priority Mapping

4 Traffic Policing and Traffic Shaping Configuration

Traffic Policing and Traffic Shaping Overview

Traffic Evaluation and Token Bucket

Traffic Policing

Traffic Shaping

Traffic Policing, GTS and Line Rate Configuration

Configuring Traffic Policing

Configuring GTS

Displaying and Maintaining Traffic Policing, GTS and Line Rate

Traffic Policing and GTS Configuration Examples

Traffic Policing and GTS Configuration Example

5 Aggregation CAR Configuration

Aggregation CAR Overview

Configuring an Aggregation CAR Policy

Configuration Prerequisites

Configuration Procedure

Configuration Example

Referencing Aggregation CAR in a Traffic Behavior

Configuration Prerequisites

Configuration Procedure

Configuration Example

Displaying and Maintaining Aggregation CAR

6 Congestion Management Configuration

Overview

Congestion Management Policies

Congestion Management Configuration Methods

Per-Queue Configuration Method

Configuring SP Queuing

Configuring WRR Queuing

Configuring SP+WRR Queues

7 Traffic Mirroring Configuration

Traffic Mirroring Overview

Configuring Traffic Mirroring

Mirroring Traffic to an Interface

Mirroring Traffic to the CPU

Displaying and Maintaining Traffic Mirroring

Traffic Mirroring Configuration Examples

Example for Mirroring Traffic to an Interface

Configuration Procedure

8 Port Buffer Configuration

Port Buffer Overview

Configuring the Shared Buffer

Configuring the Burst Function to Automatically Set the Shared Buffer

Configuring the Shared Buffer Manually

Displaying and Maintaining Port Buffer

Burst Configuration Example

Network Requirements

Configuration Procedure

 


QoS Overview

This chapter covers the following topics:

- Introduction to QoS

- Networks Without QoS Guarantee

- QoS Requirements of New Applications

- Congestion: Causes, Impacts, and Countermeasures

- QoS Technology Implementations

Introduction to QoS

Quality of Service (QoS) reflects the ability of a network to meet customer needs. On a network, QoS evaluates the ability of the network to forward packets of different services.

Because the network may provide various services, the evaluation can be based on different criteria. Generally, QoS is measured in terms of bandwidth, delay, jitter, and packet loss ratio during packet forwarding.

Networks Without QoS Guarantee

On traditional IP networks without QoS guarantee, devices treat all packets equally and handle them using the first in first out (FIFO) policy. All packets share the resources of the network and devices, and how many resources a packet can obtain depends completely on when it arrives. This service model is called best-effort: it delivers packets to their destinations as well as it can, without any guarantee of delay, jitter, packet loss ratio, and so on.

This service policy is suitable only for applications insensitive to bandwidth and delay, such as the World Wide Web (WWW) and email.

QoS Requirements of New Applications

The Internet has been growing along with the fast development of networking technologies.

Besides traditional applications such as WWW, E-Mail and FTP, network users are experiencing new services, such as tele-education, telemedicine, video telephone, videoconference and Video-on-Demand (VoD). Enterprise users expect to connect their regional branches together with VPN technologies to carry out operational applications, for instance, to access the database of the company or to monitor remote devices through Telnet.

These new applications have one thing in common: they all have special requirements for bandwidth, delay, and jitter. For example, videoconferencing and VoD require high bandwidth, low delay, and low jitter. Mission-critical applications, such as transactions and Telnet, may not require high bandwidth but do require low delay and preferential service during congestion.

The emerging applications demand higher service performance from IP networks. Better network services are required during packet forwarding, such as dedicated bandwidth, reduced packet loss ratio, congestion management and avoidance, and traffic regulation. To meet these requirements, networks must provide improved services.

Congestion: Causes, Impacts, and Countermeasures

Network congestion is a major factor contributing to service quality degradation on a traditional network. Congestion is a situation where the forwarding rate decreases because of insufficient resources, resulting in extra delay.

Causes

Congestion easily occurs in complex packet switching environments on the Internet. The following figure shows two common cases:

Figure 1-1 Traffic congestion causes

 

- The traffic enters a device from a high-speed link and is forwarded over a low-speed link.

- Packet flows enter a device from several incoming interfaces and are forwarded out of one outgoing interface, whose rate is smaller than the total rate of the incoming interfaces.

When traffic arrives at line speed, a bottleneck is created at the outgoing interface, causing congestion.

Besides bandwidth bottlenecks, congestion can be caused by resource shortage in various forms such as insufficient processor time, buffer, and memory, and by network resource exhaustion resulting from excessive arriving traffic in certain periods.

Impacts

Congestion may bring these negative results:

- Increased delay and jitter during packet transmission

- Decreased network throughput and resource use efficiency

- Network resource (memory in particular) exhaustion and even system breakdown

It is obvious that congestion hinders resource assignment for traffic and thus degrades service performance. Congestion is unavoidable in switched networks and multi-user application environments. To improve the service performance of your network, you must address the congestion issues.

Countermeasures

A simple solution to congestion is to increase network bandwidth. However, this cannot solve all congestion problems, because network bandwidth cannot be increased infinitely.

A more effective solution is to provide differentiated services for different applications through traffic control and resource allocation. In this way, resources can be used more appropriately. During resource allocation and traffic control, the direct or indirect factors that might cause network congestion should be controlled to reduce the probability of congestion. Once congestion occurs, resources should be allocated according to the characteristics and demands of applications to minimize its effects.

QoS Technology Implementations

End-to-End QoS

Figure 1-2 End-to-end QoS model

 

As shown in Figure 1-2, traffic classification, traffic policing, traffic shaping, congestion management, and congestion avoidance are the foundations for a network to provide differentiated services. Mainly they implement the following functions:

- Traffic classification uses certain match criteria to organize packets with different characteristics into different classes. Traffic classification is usually applied in the inbound direction of a port.

- Traffic policing polices particular flows entering or leaving a device according to configured specifications and can be applied in both inbound and outbound directions of a port. When a flow exceeds the specification, some restriction or punishment measures can be taken to prevent overconsumption of network resources.

- Traffic shaping proactively adjusts the output rate of traffic to adapt traffic to the network resources of the downstream device and avoid unnecessary packet drop and congestion. Traffic shaping is usually applied in the outbound direction of a port.

- Congestion management provides a resource scheduling policy to arrange the forwarding sequence of packets when congestion occurs. Congestion management is usually applied in the outbound direction of a port.

- Congestion avoidance monitors the usage status of network resources and is usually applied in the outbound direction of a port. As congestion becomes worse, it actively reduces the amount of traffic by dropping packets.

Among these QoS technologies, traffic classification is the basis for providing differentiated services. Traffic policing, traffic shaping, congestion management, and congestion avoidance manage network traffic and resources in different ways to realize differentiated services.

This section focuses on traffic classification; the subsequent sections introduce the other technologies in detail.

Traffic Classification

When defining match criteria for classifying traffic, you can use IP precedence bits in the type of service (ToS) field of the IP packet header, or other header information such as IP addresses, MAC addresses, the IP protocol field, and port numbers. You can define a class for packets with the same quintuple (for example, source address, source port number, protocol number, destination address, and destination port number), or for all packets destined for a certain network segment.

When packets are classified on the network boundary, the precedence bits in the ToS field of the IP packet header are generally reset. In this way, IP precedence can be directly adopted to classify the packets in the network. IP precedence can also be used in queuing to prioritize traffic. The downstream network can either adopt the classification results from its upstream network or classify the packets again according to its own criteria.

To provide differentiated services, traffic classes must be associated with certain traffic control actions or resource allocation actions. What traffic control actions to adopt depends on the current phase and the resources of the network. For example, CAR is adopted to police packets when they enter the network; GTS is performed on packets when they flow out of the node; queue scheduling is performed when congestion happens; congestion avoidance measures are taken when the congestion deteriorates.
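As a concrete sketch of that sequence on a switch of this kind, traffic entering the network edge might be policed with CAR while traffic leaving toward a slow downstream link is shaped with GTS. The command forms below follow the Traffic Policing, GTS and Line Rate chapter of this manual, but the interface and rate values are illustrative assumptions:

```
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
# Police traffic received on the port to 512 kbps (CAR on entry).
[Sysname-GigabitEthernet1/0/1] qos car inbound any cir 512
# Shape traffic sent out of the port to 512 kbps (GTS on exit).
[Sysname-GigabitEthernet1/0/1] qos gts any cir 512
```

Queue scheduling (congestion management) and congestion avoidance then act only when the outgoing interface actually congests.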

Packet Precedences

This section introduces IP precedence, ToS precedence, differentiated services codepoint (DSCP) values, and 802.1p precedence.

1)        IP precedence, ToS precedence, and DSCP values

Figure 1-3 DS field and ToS bytes

 

As shown in Figure 1-3, the ToS field of the IP header contains eight bits: the first three bits (0 to 2) represent IP precedence from 0 to 7; the subsequent four bits (3 to 6) represent a ToS value from 0 to 15. According to RFC 2474, the ToS field of the IP header is redefined as the differentiated services (DS) field, where a DSCP value is represented by the first six bits (0 to 5) and is in the range 0 to 63. The remaining two bits (6 and 7) are reserved.

Table 1-1 Description on IP precedence

IP precedence (decimal)   IP precedence (binary)   Description
0                         000                      routine
1                         001                      priority
2                         010                      immediate
3                         011                      flash
4                         100                      flash-override
5                         101                      critical
6                         110                      internet
7                         111                      network

 

On a network using the Differentiated Services (DiffServ) model, traffic is grouped into the following four classes, and packets are processed according to their DSCP values.

- Expedited Forwarding (EF) class: Packets in this class are forwarded regardless of the link share of other traffic. This class is suitable for preferential services requiring low delay, low packet loss, low jitter, and high bandwidth.

- Assured Forwarding (AF) class: This class is divided into four subclasses (AF1 to AF4), each containing three drop priorities for more granular classification. The QoS level of the AF class is lower than that of the EF class.

- Class Selector (CS) class: This class is derived from the IP ToS field and includes eight subclasses.

- Best Effort (BE) class: This class is a special CS class that does not provide any assurance. AF traffic exceeding its limit is degraded to the BE class. Currently, all IP network traffic belongs to this class by default.

Table 1-2 Description on DSCP values

DSCP value (decimal)   DSCP value (binary)   Description
46                     101110                ef
10                     001010                af11
12                     001100                af12
14                     001110                af13
18                     010010                af21
20                     010100                af22
22                     010110                af23
26                     011010                af31
28                     011100                af32
30                     011110                af33
34                     100010                af41
36                     100100                af42
38                     100110                af43
8                      001000                cs1
16                     010000                cs2
24                     011000                cs3
32                     100000                cs4
40                     101000                cs5
48                     110000                cs6
56                     111000                cs7
0                      000000                be (default)

 

2)        802.1p precedence

802.1p precedence lies in Layer 2 packet headers and is applicable to occasions where Layer 3 header analysis is not needed and QoS must be assured at Layer 2.

Figure 1-4 An Ethernet frame with an 802.1Q tag header

 

As shown in Figure 1-4, the 4-byte 802.1Q tag header consists of the tag protocol identifier (TPID, two bytes in length), whose value is 0x8100, and the tag control information (TCI, two bytes in length). Figure 1-5 presents the format of the 802.1Q tag header.

Figure 1-5 802.1Q tag header

 

The priority in the 802.1Q tag header is called 802.1p precedence, because its use is defined in IEEE 802.1p. Table 1-3 presents the values for 802.1p precedence.

Table 1-3 Description on 802.1p precedence

802.1p precedence (decimal)   802.1p precedence (binary)   Description
0                             000                          best-effort
1                             001                          background
2                             010                          spare
3                             011                          excellent-effort
4                             100                          controlled-load
5                             101                          video
6                             110                          voice
7                             111                          network-management

 

 


QoS Policy Configuration

When configuring a QoS policy, go to these sections for information you are interested in:

- QoS Policy Overview

- Configuring a QoS Policy

- Applying the QoS Policy

- Displaying and Maintaining QoS Policies

QoS Policy Overview

A QoS policy involves three components: class, traffic behavior, and policy. You can associate a class with a traffic behavior using a QoS policy.

Class

Classes are used to identify traffic.

A class is identified by a class name and contains some match criteria.

You can define a set of match criteria to classify packets. The relationship between criteria can be and or or:

- and: The device considers a packet to belong to a class only when the packet matches all the criteria in the class.

- or: The device considers a packet to belong to a class as long as the packet matches any one of the criteria in the class.
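For example, using the commands described in Defining a Class below (the class name and precedence values here are illustrative), a class whose criteria are combined with or matches a packet carrying either value:

```
<Sysname> system-view
[Sysname] traffic classifier ef_traffic operator or
# A packet matches this class if its DSCP is 46 OR its IP precedence is 5.
[Sysname-classifier-ef_traffic] if-match dscp 46
[Sysname-classifier-ef_traffic] if-match ip-precedence 5
```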

Traffic behavior

A traffic behavior defines a set of QoS actions for packets.

Policy

A policy associates a class with a traffic behavior.

You can configure multiple class-to-traffic behavior associations in a policy.

Configuring a QoS Policy

Follow these steps to configure a QoS policy:

1)        Create a class and define a set of match criteria in class view.

2)        Create a traffic behavior and define a set of QoS actions in traffic behavior view.

3)        Create a policy and associate the traffic behavior with the class in policy view.

Defining a Class

To define a class, you need to specify a name for it and then configure match criteria in class view.

Follow these steps to define a class:

1) Enter system view
   Command: system-view

2) Create a class and enter class view
   Command: traffic classifier tcl-name [ operator { and | or } ]
   Remarks: Required. By default, the relationship between match criteria is and.

3) Define a match criterion
   Command: if-match match-criteria
   Remarks: Required.

4) Display class information
   Command: display traffic classifier { system-defined | user-defined } [ tcl-name ]
   Remarks: Optional; available in any view.

 

match-criteria: Matching rules to be defined for a class. Table 2-1 describes the available forms of this argument.

Table 2-1 Forms of the match-criteria argument

acl { access-list-number | name acl-name }
  Matches an IPv4 ACL specified by its number (in the range 2000 to 4999) or by its name. In a class configured with the operator and, the logical relationship between rules in the referenced IPv4 ACL is or.

acl ipv6 { access-list-number | name acl-name }
  Matches an IPv6 ACL specified by its number (in the range 2000 to 3999) or by its name. In a class configured with the operator and, the logical relationship between rules in the referenced IPv6 ACL is or.

any
  Matches all packets.

customer-dot1p 8021p-list
  Matches packets by the 802.1p precedence of the customer network. The 8021p-list argument is a list of CoS values in the range 0 to 7.

customer-vlan-id vlan-id-list
  Matches packets of the specified customer network VLANs. The vlan-id-list argument is a list of VLAN IDs, in the form vlan-id to vlan-id or up to eight space-separated VLAN IDs. VLAN IDs range from 1 to 4094.

destination-mac mac-address
  Matches packets with the specified destination MAC address.

dscp dscp-list
  Matches packets by DSCP precedence. The dscp-list argument is a list of DSCP values in the range 0 to 63.

ip-precedence ip-precedence-list
  Matches packets by IP precedence. The ip-precedence-list argument is a list of IP precedence values in the range 0 to 7.

protocol protocol-name
  Matches packets of the specified protocol. The protocol-name argument can be ip.

service-dot1p 8021p-list
  Matches packets by the 802.1p precedence of the service provider network. The 8021p-list argument is a list of CoS values in the range 0 to 7.

service-vlan-id vlan-id-list
  Matches packets of the specified service provider network VLANs. The vlan-id-list argument is a list of VLAN IDs, in the form vlan-id to vlan-id or up to eight space-separated VLAN IDs. VLAN IDs range from 1 to 4094.

source-mac mac-address
  Matches packets with the specified source MAC address.

 

Each of the match criteria listed below must be unique in a class whose operator is and. Although the CLI lets you define multiple if-match clauses for these criteria, or enter multiple values for a list argument (such as 8021p-list), in such a class, avoid doing so; otherwise, the QoS policy referencing the class cannot be applied to interfaces successfully.

- customer-dot1p 8021p-list

- customer-vlan-id vlan-id-list

- destination-mac mac-address

- dscp dscp-list

- ip-precedence ip-precedence-list

- service-dot1p 8021p-list

- service-vlan-id vlan-id-list

- source-mac mac-address

To create multiple if-match clauses or specify multiple values for a list argument for any of these criteria, make sure the operator of the class is or.
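To illustrate this rule (the class names are hypothetical), a policy referencing the first class below can be applied successfully because its operator is or, while a policy referencing the second cannot, because multiple values are given for dscp-list while the operator is the default and:

```
# Valid: operator "or" permits multiple values for a list argument.
[Sysname] traffic classifier multi_dscp operator or
[Sysname-classifier-multi_dscp] if-match dscp 10 12 14
[Sysname-classifier-multi_dscp] quit

# Problematic: default operator "and" with multiple DSCP values; a QoS
# policy referencing this class cannot be applied to an interface.
[Sysname] traffic classifier and_dscp
[Sysname-classifier-and_dscp] if-match dscp 10 12 14
[Sysname-classifier-and_dscp] quit
```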

 

Defining a Traffic Behavior

A traffic behavior is a set of QoS actions. To define a traffic behavior, you must first create it and then configure actions for the behavior as required in traffic behavior view.

Follow these steps to define a traffic behavior:

1) Enter system view
   Command: system-view

2) Create a traffic behavior and enter traffic behavior view
   Command: traffic behavior behavior-name
   Remarks: Required.

3) Enable traffic accounting
   Command: accounting
   Remarks: Optional.

4) Configure a CAR policy
   Command: car cir committed-information-rate [ cbs committed-burst-size [ ebs excess-burst-size ] ] [ pir peak-information-rate ] [ green action ] [ yellow action ] [ red action ]
   Remarks: Optional. For details about CAR, see Traffic Policing and Traffic Shaping Configuration.

5) Reference an aggregation CAR policy
   Command: car name car-name
   Remarks: Optional. For details about aggregation CAR, see Aggregation CAR Configuration.

6) Drop or permit packets
   Command: filter { deny | permit }
   Remarks: Optional. The deny keyword drops packets; the permit keyword lets packets pass.

7) Mirror packets to the CPU or an interface
   Command: mirror-to { cpu | interface interface-type interface-number }
   Remarks: Optional. For details about traffic mirroring, see Traffic Mirroring Configuration.

8) Insert a VLAN tag
   Command: nest top-most vlan-id vlan-id-value
   Remarks: Optional.

9) Redirect traffic to a specified target
   Command: redirect { cpu | interface interface-type interface-number | next-hop { ipv4-add [ ipv4-add ] | ipv6-add [ interface-type interface-number ] [ ipv6-add [ interface-type interface-number ] ] } }
   Remarks: Optional.

10) Set the DSCP value for packets
    Command: remark dscp dscp-value
    Remarks: Optional.

11) Set the 802.1p precedence for packets
    Command: remark dot1p 8021p
    Remarks: Optional.

12) Set the IP precedence for packets
    Command: remark ip-precedence ip-precedence-value
    Remarks: Optional.

13) Set the local precedence for packets
    Command: remark local-precedence local-precedence
    Remarks: Optional.

14) Set the service provider network VLAN ID for packets
    Command: remark service-vlan-id vlan-id-value
    Remarks: Optional.

15) Display traffic behavior configuration information
    Command: display traffic behavior user-defined [ behavior-name ]
    Remarks: Optional; available in any view.

 

If both a QoS policy referencing CAR and the qos car command are configured on an interface, the QoS policy takes effect.

To ensure that a policy can be applied successfully, follow these guidelines when configuring a traffic behavior:

- Do not configure the redirect to CPU, redirect to interface, and redirect to next hop actions in the same traffic behavior, because they conflict.

- Do not configure the filter deny action or an aggregation CAR action together with the accounting action in the same traffic behavior, because they conflict.
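Observing these guidelines, a behavior can safely combine non-conflicting actions, for example re-marking plus policing. The car action keywords below (green pass, red discard) are illustrative assumptions based on the car syntax shown in the table above:

```
<Sysname> system-view
[Sysname] traffic behavior mark_and_police
# Re-mark matching traffic as EF (DSCP 46).
[Sysname-behavior-mark_and_police] remark dscp 46
# Police to 2000 kbps: forward conforming (green) packets, drop red ones.
[Sysname-behavior-mark_and_police] car cir 2000 green pass red discard
```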

 

Defining a Policy

A policy defines the mapping between a class and a traffic behavior.

In a policy, multiple class-to-traffic-behavior mappings can be configured, and these mappings are executed according to the order configured.

Follow these steps to define a policy:

1) Enter system view
   Command: system-view

2) Create a policy and enter policy view
   Command: qos policy policy-name
   Remarks: Required.

3) Specify the traffic behavior for a class in the policy
   Command: classifier tcl-name behavior behavior-name
   Remarks: Required.

4) Display the specified class and its associated traffic behavior in the QoS policy
   Command: display qos policy user-defined [ policy-name [ classifier tcl-name ] ]
   Remarks: Optional; available in any view.

 

 

QoS Policy Configuration Example

Network requirements

Configure a QoS policy test_policy to limit the rate of packets with IP precedence 6 to 100 kbps.

Configuration procedure

# Create a class test_class to match the packets with IP precedence 6.

<Sysname> system-view

[Sysname] traffic classifier test_class

[Sysname-classifier-test_class] if-match ip-precedence 6

[Sysname-classifier-test_class] quit

# Create a traffic behavior test_behavior and configure the action of limiting the traffic rate to 100 kbps for it.

[Sysname] traffic behavior test_behavior

[Sysname-behavior-test_behavior] car cir 100

[Sysname-behavior-test_behavior] quit

# Create a QoS policy test_policy and associate the traffic behavior with the class.

[Sysname] qos policy test_policy

[Sysname-qospolicy-test_policy] classifier test_class behavior test_behavior

Applying the QoS Policy

You can apply a QoS policy in different ways:

- Applied to an interface, the policy takes effect on the traffic sent or received on the interface.

- Applied to a VLAN, the policy takes effect on the traffic sent or received on all ports in the VLAN.

- Applied globally, the policy takes effect on the traffic sent or received on all ports.

 

You can modify the classification rules, traffic behaviors, and classifier-behavior associations of a QoS policy already applied.

 

Applying the QoS Policy to an Interface

A policy can be applied to multiple interfaces, but only one policy can be applied in each direction (inbound or outbound) of an interface.

Configuration procedure

Follow these steps to apply the QoS policy to an interface:

1) Enter system view
   Command: system-view

2) Enter interface view or port group view
   Command: interface interface-type interface-number (interface view), or port-group manual port-group-name (port group view)
   Remarks: Use either command. Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

3) Apply the policy to the interface or port group
   Command: qos apply policy policy-name { inbound | outbound }
   Remarks: Required.

 

If a QoS policy is applied in the outbound direction of an interface, it does not affect local packets. Local packets are important protocol packets that maintain the normal operation of the device; QoS must not process them, to avoid dropping them. Commonly used local packets include link maintenance, IS-IS, OSPF, RIP, BGP, LDP, RSVP, and SSH packets.

 

Configuration example

# Apply QoS policy test_policy to the inbound direction of GigabitEthernet 1/0/1.

<Sysname> system-view

[Sysname] interface gigabitethernet 1/0/1

[Sysname-GigabitEthernet1/0/1] qos apply policy test_policy inbound

Applying the QoS Policy to a VLAN

You can apply a QoS policy to a VLAN to regulate traffic of the VLAN. Only one QoS policy can be applied to a VLAN.

Configuration procedure

Follow these steps to apply the QoS policy to a VLAN:

1) Enter system view
   Command: system-view

2) Apply the QoS policy to the specified VLAN(s)
   Command: qos vlan-policy policy-name vlan vlan-id-list { inbound | outbound }
   Remarks: Required.

 

QoS policies cannot be applied to dynamic VLANs, for example, VLANs created by GVRP.

 

Configuration example

# Apply QoS policy test_policy to the inbound direction of VLAN 200, VLAN 300, VLAN 400, and VLAN 500.

<Sysname> system-view

[Sysname] qos vlan-policy test_policy vlan 200 300 400 500 inbound

Applying the QoS Policy Globally

You can apply the QoS policy globally to the inbound or outbound direction of all ports.

Configuration procedure

Follow these steps to apply a QoS policy globally:

1) Enter system view
   Command: system-view

2) Apply a QoS policy globally
   Command: qos apply policy policy-name global { inbound | outbound }
   Remarks: Required.

 

Configuration example

# Apply QoS policy test_policy to the inbound direction globally.

<Sysname> system-view

[Sysname] qos apply policy test_policy global inbound

Support for QoS actions in different directions

Before creating and applying a QoS policy, you must be aware that some QoS actions are supported only in a particular traffic direction, as shown in Table 2-2:

Table 2-2 Support for QoS actions in different traffic directions

Action                      Inbound          Outbound
Traffic accounting          Supported        Supported
Traffic policing            Supported        Supported
Aggregation CAR             Supported        Not supported
Traffic filtering           Supported        Supported
Traffic mirroring           Supported        Not supported
Inserting VLAN tags         Supported        Not supported
Traffic redirecting         Supported        Not supported
Marking 802.1p priority     Supported        Supported
Marking DSCP precedence     Supported        Supported
Marking IP precedence       Supported        Not supported
Marking local precedence    Supported        Not supported
Marking service VLAN IDs    Supported        Supported

 

Follow these rules when configuring a behavior; otherwise, the corresponding QoS policy cannot be applied successfully:

- The nest action is mutually exclusive with the remark service-vlan-id action.

- The filter deny action is mutually exclusive with any other action.

 

Displaying and Maintaining QoS Policies

Task: Display traffic class information
Command: display traffic classifier user-defined [ tcl-name ]
Remarks: Available in any view.

Task: Display traffic behavior configuration information
Command: display traffic behavior user-defined [ behavior-name ]
Remarks: Available in any view.

Task: Display the configuration of user-defined QoS policies
Command: display qos policy user-defined [ policy-name [ classifier tcl-name ] ]
Remarks: Available in any view.

Task: Display QoS policy configuration on the specified or all interfaces
Command: display qos policy interface [ interface-type interface-number ] [ inbound | outbound ]
Remarks: Available in any view.

Task: Display VLAN QoS policy information
Command: display qos vlan-policy { name policy-name | vlan vlan-id } [ inbound | outbound ]
Remarks: Available in any view.

Task: Display information about global QoS policies
Command: display qos policy global [ inbound | outbound ]
Remarks: Available in any view.

Task: Clear VLAN QoS policy statistics
Command: reset qos vlan-policy [ vlan vlan-id ] [ inbound | outbound ]
Remarks: Available in user view.

Task: Clear statistics of a global QoS policy
Command: reset qos policy global [ inbound | outbound ]
Remarks: Available in user view.

 


Priority Mapping Configuration

When configuring priority mapping, go to these sections for information you are interested in:

- Priority Mapping Overview

- Configuring a Priority Mapping Table

- Configuring the Priority for a Port

- Configuring the Trusted Precedence Type for a Port

- Displaying and Maintaining Priority Mapping

Priority Mapping Overview

Introduction to Priority Mapping

When a packet enters a network, it will be marked with a certain value, which indicates the scheduling weight or forwarding priority of the packet. Then, the intermediate nodes in the network process the packet according to the priority.

When a packet enters a device, the device assigns a set of predefined parameters to the packet (including the 802.1p priority, DSCP value, IP precedence, local precedence, and drop precedence).

Concepts

For more information about 802.1p precedence, DSCP values, and IP precedence values, refer to Packet Precedences.

The local precedence and drop precedence are defined as follows:

- Local precedence is a locally significant precedence that the device assigns to a packet. A local precedence value corresponds to an output queue. Packets with the highest local precedence are processed preferentially.

- Drop precedence is a parameter used for packet drop decisions. The value 2 corresponds to red packets, the value 1 to yellow packets, and the value 0 to green packets. Packets with the highest drop precedence are dropped preferentially.

Depending on whether a received packet is 802.1q-tagged, the switch marks it with priority as follows:

1)        For an 802.1q-untagged packet

When a packet carrying no 802.1q tag reaches a port, the switch uses the port priority as the 802.1p precedence of the packet, looks up the port priority of the receiving port in the 802.1p-precedence-to-local-precedence (dot1p-lp) mapping table, assigns the resulting local precedence to the packet, and enqueues the packet according to that local precedence.

2)        For an 802.1q-tagged packet

When an 802.1q-tagged packet reaches a port, you can configure the port to trust either the port priority or the packet priority.

- Trusting packet priority

In this mode, the switch looks up the trusted priority type of the packet (802.1p precedence or DSCP precedence) in the corresponding priority mapping tables and assigns the resulting set of precedence values to the packet.

- Trusting port priority

In this mode, the switch replaces the 802.1p priority of the received packet with the port priority, looks up the port priority of the receiving port in the dot1p-lp mapping table, assigns the resulting local precedence to the packet, and enqueues the packet according to that local precedence.

You can configure the priority trust mode of a port as required. The priority mapping process on a switch is as shown in Figure 3-1.

Figure 3-1 Priority mapping process on a device that supports trusting port priority

 

An S5810 series switch can trust either of the following priority types:

- The DSCP precedence of received packets. In this mode, the switch looks up the DSCP value of the received packet in the dscp-dot1p/dp/dscp mapping tables to obtain the 802.1p precedence, drop precedence, and DSCP precedence used to mark the packet. The switch then looks up the marked DSCP value in the dscp-lp mapping table and marks the packet with the resulting local precedence.

- The 802.1p precedence of received packets. In this mode, the switch looks up the 802.1p precedence in the tag in the dot1p-dp/lp mapping tables to obtain the drop precedence and local precedence for the packet.

Introduction to Priority Mapping Tables

The device provides the following types of priority mapping tables:

- dot1p-dscp: 802.1p-precedence-to-DSCP mapping table.

- dot1p-lp: 802.1p-precedence-to-local-precedence mapping table.

- dscp-dot1p: DSCP-to-802.1p-precedence mapping table, applicable only to IP packets.

- dscp-lp: DSCP-to-local-precedence mapping table, applicable only to IP packets.

 

Table 3-1 and Table 3-2 list the default priority mapping tables.

Table 3-1 The default dot1p-lp and dot1p-dscp mappings

802.1p precedence (dot1p)    Local precedence (lp)    DSCP value (dscp)
0                            2                        0
1                            0                        8
2                            1                        16
3                            3                        24
4                            4                        32
5                            5                        40
6                            6                        48
7                            7                        56

 

Table 3-2 The default dscp-lp and dscp-dot1p mappings

DSCP value (dscp)    Local precedence (lp)    802.1p precedence (dot1p)
0 to 7               0                        0
8 to 15              1                        1
16 to 23             2                        2
24 to 31             3                        3
32 to 39             4                        4
40 to 47             5                        5
48 to 55             6                        6
56 to 63             7                        7
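To make the default mappings concrete, the tables above can be modeled as simple lookups. The following Python sketch is purely illustrative (the names are ours, not part of the switch software):

```python
# Illustrative model of the default mapping tables (Table 3-1 and Table 3-2).
DOT1P_TO_LP = {0: 2, 1: 0, 2: 1, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7}   # dot1p-lp defaults
DOT1P_TO_DSCP = {p: p * 8 for p in range(8)}                      # dot1p-dscp: 0, 8, ..., 56

def dscp_to_lp(dscp: int) -> int:
    """dscp-lp default: each block of eight DSCP values maps to one local precedence."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be in 0-63")
    return dscp // 8

# The default dscp-dot1p table follows the same block-of-eight rule.
dscp_to_dot1p = dscp_to_lp
```

Note that the only irregular entries are in the default dot1p-lp table, where 802.1p values 0 and 1 swap places (0 maps to local precedence 2, and 1 maps to 0).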

 

Configuring a Priority Mapping Table

You can modify the priority mapping tables of a device as needed.

Configuration Prerequisites

You need to decide on the new mapping values.

Configuration Procedure

Follow these steps to configure a priority mapping table:

1. Enter system view:
   system-view

2. Enter priority mapping table view (required):
   qos map-table { dot1p-dscp | dot1p-lp | dscp-dot1p | dscp-lp }
   Enter the view of the priority mapping table that you want to modify.

3. Configure the priority mapping table (required):
   import import-value-list export export-value
   Newly configured mappings overwrite the previous ones.

4. Display the configuration of the priority mapping table (optional; available in any view):
   display qos map-table [ dot1p-dscp | dot1p-lp | dscp-dot1p | dscp-lp ]

 

Configuration Example

Network requirements

Configure a dot1p-lp mapping table as shown below.

Table 3-3  dot1p-lp mappings

802.1p precedence    Local precedence
0                    0
1                    0
2                    1
3                    1
4                    2
5                    2
6                    3
7                    3

 

Configuration procedure

# Enter system view.

<Sysname> system-view

# Enter the dot1p-lp priority mapping table view.

[Sysname] qos map-table dot1p-lp

# Modify dot1p-lp priority mapping parameters.

[Sysname-maptbl-dot1p-lp] import 0 1 export 0

[Sysname-maptbl-dot1p-lp] import 2 3 export 1

[Sysname-maptbl-dot1p-lp] import 4 5 export 2

[Sysname-maptbl-dot1p-lp] import 6 7 export 3
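As a sanity check on the example above, the import/export semantics (newly configured mappings overwrite the previous ones) can be modeled as follows; the helper name is ours, for illustration only:

```python
# Model of "import <values> export <value>": new mappings overwrite old entries.
dot1p_lp = {0: 2, 1: 0, 2: 1, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7}  # defaults (Table 3-1)

def import_export(table, import_values, export_value):
    """Map every listed input value to the single output value."""
    for v in import_values:
        table[v] = export_value

import_export(dot1p_lp, [0, 1], 0)   # import 0 1 export 0
import_export(dot1p_lp, [2, 3], 1)   # import 2 3 export 1
import_export(dot1p_lp, [4, 5], 2)   # import 4 5 export 2
import_export(dot1p_lp, [6, 7], 3)   # import 6 7 export 3
```

After the four commands, the table holds exactly the mappings listed in Table 3-3.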

Configuring the Priority for a Port

Port priority is in the range of 0 to 7. You can set the port priority as needed.

Configuration Prerequisites

 You need to decide on a priority for the port.

Configuration Procedure

Follow these steps to configure port priority:

1. Enter system view:
   system-view

2. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   Settings in interface view (Ethernet or WLAN-ESS) take effect on the current interface; settings in port group view take effect on all ports in the port group.

3. Configure a priority for the port (required):
   qos priority priority-value
   The default port priority is 0.

 

Configuration Example

Network requirements

Set the port priority of port GigabitEthernet 1/0/1 to 7.

Configuration procedure

# Enter system view.

<Sysname> system-view

# Set the priority of GigabitEthernet 1/0/1 to 7.

[Sysname] interface gigabitethernet 1/0/1

[Sysname-GigabitEthernet1/0/1] qos priority 7

Configuring the Trusted Precedence Type for a Port

You can configure whether a port trusts the priority carried in packets. On a device that supports configuring the trusted precedence type on ports, the priority mapping process for packets is as described in Priority Mapping Overview.

You can configure one of the following trusted precedence types for a port:

- dot1p: Trusts the 802.1p precedence of received packets and uses it for priority mapping.

- dscp: Trusts the DSCP values of received IP packets and uses them for priority mapping.

Configuration Prerequisites

- Decide to trust packet priority rather than port priority.

- Determine the trusted precedence type for the port.

- Configure the priority mapping table corresponding to the trusted precedence type. For the detailed configuration procedure, refer to Configuring a Priority Mapping Table.

Configuration Procedure

Follow these steps to configure the trusted precedence type:

1. Enter system view:
   system-view

2. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   Settings in interface view (Ethernet or WLAN-ESS) take effect on the current interface; settings in port group view take effect on all ports in the port group.

3. Configure the trusted precedence type (required):
   qos trust { dot1p | dscp }
   By default, port priority is trusted.

4. Display the trusted precedence type configuration (optional; available in any view):
   display qos trust interface [ interface-type interface-number ]

 

Configuration Example

Network requirements

Configure port GigabitEthernet 1/0/1 to trust the 802.1p precedence of received packets.

Configuration procedure

# Enter system view.

<Sysname> system-view

# Enter port view.

[Sysname] interface gigabitethernet 1/0/1

# Configure port GigabitEthernet 1/0/1 to trust the 802.1p precedence of received packets.

[Sysname-GigabitEthernet1/0/1] qos trust dot1p

Displaying and Maintaining Priority Mapping

- Display priority mapping table configuration information (available in any view):
  display qos map-table [ dot1p-dscp | dot1p-lp | dscp-dot1p | dscp-lp ]

- Display the trusted precedence type on a port (available in any view):
  display qos trust interface [ interface-type interface-number ]

 

 


 

When configuring traffic classification, traffic policing, and traffic shaping, go to these sections for information you are interested in:

- Traffic Policing and Traffic Shaping Overview

- Traffic Policing, GTS and Line Rate Configuration

- Displaying and Maintaining Traffic Policing, GTS and Line Rate

- Traffic Policing and GTS Configuration Examples

Traffic Policing and Traffic Shaping Overview

If user traffic is not limited, bursty traffic can aggravate network congestion. It is therefore necessary to limit user traffic in order to make better use of network resources and provide better services to more users. For example, you can limit a flow to the resources committed to it over a time interval, avoiding network congestion caused by bursts.

Traffic policing and generic traffic shaping (GTS) limit traffic rate and resource usage according to traffic specifications. The prerequisite for traffic policing or GTS is to know whether a traffic flow has exceeded the specification. If yes, proper traffic control policies are applied. Generally, token buckets are used to evaluate traffic specifications.

Traffic Evaluation and Token Bucket

Token bucket features

A token bucket can be considered as a container holding a certain number of tokens. The system puts tokens into the bucket at a set rate. When the token bucket is full, the extra tokens will overflow.

Evaluating traffic with the token bucket

The evaluation for the traffic specification is based on whether the number of tokens in the bucket can meet the need of packet forwarding. If the number of tokens in the bucket is enough to forward the packets (generally, one token is associated with a 1-bit forwarding authority), the traffic conforms to the specification, and the traffic is called conforming traffic; otherwise, the traffic does not conform to the specification, and the traffic is called excess traffic.

A token bucket has the following configurable parameters:

- Mean rate: the rate at which tokens are put into the bucket, namely, the permitted average rate of traffic. It is usually set to the committed information rate (CIR).

- Burst size: the capacity of the token bucket, namely, the maximum traffic size permitted in each burst. It is usually set to the committed burst size (CBS). The configured burst size must be greater than the maximum packet size.

One evaluation is performed on each arriving packet. In each evaluation, if the number of tokens in the bucket is enough, the traffic conforms to the specification and the corresponding tokens for forwarding the packet are taken away; if the number of tokens in the bucket is not enough, it means that too many tokens have been used and the traffic is excessive.
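The per-packet evaluation described above can be sketched as a minimal single-rate token bucket. This is an illustrative model, not the switch's implementation; rates are in bits per second, and the caller supplies timestamps:

```python
class TokenBucket:
    """Minimal single-rate token bucket (illustrative model, not device code)."""
    def __init__(self, cir_bps: float, cbs_bits: float):
        self.rate = cir_bps        # token fill rate (CIR), bits per second
        self.capacity = cbs_bits   # bucket depth (CBS), bits
        self.tokens = cbs_bits     # the bucket starts full
        self.last = 0.0            # time of the previous evaluation

    def conforms(self, packet_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed interval, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits   # conforming traffic: spend tokens
            return True
        return False                     # excess traffic: not enough tokens
```

For example, with a CIR of 1000 bit/s and a CBS of 1500 bits, a 1500-bit packet conforms immediately (the bucket starts full), a second packet arriving at the same instant is excess, and the bucket is full again 1.5 seconds later.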

Complicated evaluation

You can set two token buckets (referred to as the C bucket and E bucket respectively) in order to evaluate more complicated conditions and implement more flexible regulation policies. For example, traffic policing uses four parameters:

- CIR: Rate at which tokens are put into the C bucket, that is, the average packet transmission or forwarding rate allowed by the C bucket.

- CBS: Size of the C bucket, that is, the transient burst of traffic that the C bucket can forward.

- Peak information rate (PIR): Rate at which tokens are put into the E bucket, that is, the average packet transmission or forwarding rate allowed by the E bucket.

- Excess burst size (EBS): Size of the E bucket, that is, the transient burst of traffic that the E bucket can forward.

Figure 4-1 A two-bucket system

 

Figure 4-1 shows a two-bucket system, where the size of C bucket is CBS and that of the E bucket is EBS.

In each evaluation, packets are measured against the buckets:

- If the C bucket has enough tokens, packets are colored green.

- If the C bucket does not have enough tokens but the E bucket has enough tokens, packets are colored yellow.

- If neither the C bucket nor the E bucket has sufficient tokens, packets are colored red.
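The three coloring rules above can be condensed into one small function. This sketch assumes the token counts are refilled elsewhere; the names are ours, for illustration only:

```python
def color(packet_bits: int, c_tokens: int, e_tokens: int):
    """Color a packet against the C and E buckets, returning the color and the
    remaining token counts (illustrative sketch of the two-bucket rules)."""
    if c_tokens >= packet_bits:                       # enough C tokens: green
        return "green", c_tokens - packet_bits, e_tokens
    if e_tokens >= packet_bits:                       # C short, E sufficient: yellow
        return "yellow", c_tokens, e_tokens - packet_bits
    return "red", c_tokens, e_tokens                  # neither bucket suffices: red
```

Note that only the bucket that admits the packet is debited; a red packet consumes no tokens.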

Traffic Policing

The typical application of traffic policing is to supervise the specification of certain traffic entering a network and limit it within a reasonable range, or to "discipline" the extra traffic. In this way, the network resources and the interests of the carrier are protected. For example, you can limit bandwidth consumption of HTTP packets to less than 50% of the total. If the traffic of a certain session exceeds the limit, traffic policing can drop the packets or reset the IP precedence of the packets.

Figure 4-2 Schematic diagram for traffic policing

 

Traffic policing is widely used in policing traffic entering the networks of internet service providers (ISPs). It can classify the policed traffic and perform pre-defined policing actions based on different evaluation results. These actions include:

- Forwarding the packets whose evaluation result is "conforming".

- Dropping the packets whose evaluation result is "excess".

Traffic Shaping

Traffic shaping provides measures to adjust the rate of outbound traffic actively. A typical traffic shaping application is to limit the local traffic output rate according to the downstream traffic policing parameters.

The difference between traffic policing and GTS is that GTS caches in a buffer or queue the packets that traffic policing would drop, as shown in Figure 4-3. When there are enough tokens in the token bucket, the cached packets are sent out at an even rate. Traffic shaping may therefore introduce additional delay, while traffic policing does not.

Figure 4-3 Schematic diagram for GTS

 

For example, in Figure 4-4, Switch A sends packets to Switch B. Switch B performs traffic policing on packets from Switch A and drops packets exceeding the limit.

Figure 4-4 GTS application

 

You can perform traffic shaping for the packets on the outgoing interface of Switch A to avoid unnecessary packet loss. Packets exceeding the limit are cached in Switch A. Once resources are released, traffic shaping takes out the cached packets and sends them out. In this way, all the traffic sent to Switch B conforms to the traffic specification defined in Switch B.
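The behavioral difference is easy to see in a toy model: a policer discards excess packets outright, while a shaper queues them and drains the queue as the budget is replenished. The following sketch is purely illustrative; packet sizes and budgets are in arbitrary units:

```python
from collections import deque

def police(packets, budget):
    """Policing: forward packets while the token budget lasts, drop the rest."""
    sent = []
    for size in packets:
        if budget >= size:
            budget -= size
            sent.append(size)
        # excess packets are simply dropped
    return sent

def shape(packets, budget_per_tick, ticks):
    """Shaping: buffer excess packets and drain them as the budget is refilled."""
    queue, sent = deque(packets), []
    for _ in range(ticks):
        budget = budget_per_tick      # tokens refilled each interval
        while queue and queue[0] <= budget:
            budget -= queue[0]
            sent.append(queue.popleft())
    return sent
```

With a budget of 1000 units per interval and three 500-unit packets arriving at once, the policer delivers only two packets, while the shaper delivers all three across two intervals, at the cost of delaying the third.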

Traffic Policing, GTS and Line Rate Configuration

Complete the following tasks to configure traffic policing, GTS, and line rate:

- Configuring Traffic Policing: configure an ACL, then apply CAR policies to the specified interface.

- Configuring queue-based GTS: configure GTS on interfaces.

- Configuring GTS for all traffic: configure GTS on interfaces.

 

Configuring Traffic Policing

Traffic policing configuration involves two tasks: defining the characteristics of the packets to be policed (with ACLs on the S5810 series) and defining policing policies for the matched packets.

Follow these steps to configure ACL-based traffic policing:

1. Enter system view:
   system-view

2. Configure an ACL (required):
   Refer to the ACL module.

3. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

4. Configure an ACL-based CAR policy on the interface or port group (required):
   qos car inbound acl [ ipv6 ] acl-number cir committed-information-rate [ cbs committed-burst-size [ ebs excess-burst-size ] ] [ pir peak-information-rate ] [ red action ]

5. Display CAR policy information on the specified interface or all interfaces (optional; available in any view):
   display qos car interface [ interface-type interface-number ]

 

Traffic policing configuration example

Configure traffic policing on GigabitEthernet 1/0/1 to limit the rate of incoming packets matching ACL 2000 to 1 Mbps.

# Enter system view.

<Sysname> system-view

# Enter interface view.

[Sysname] interface gigabitethernet1/0/1

# Configure a CAR policy for the interface.

[Sysname-GigabitEthernet1/0/1] qos car inbound acl 2000 cir 1000 ebs 0

Configuring GTS

Traffic shaping configuration involves:

- Queue-based GTS: configuring GTS parameters for packets of a certain queue.

- GTS for all traffic: configuring GTS parameters for all traffic.

Configuring queue-based GTS

Follow these steps to configure queue-based GTS:

1. Enter system view:
   system-view

2. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

3. Configure GTS for a queue (required):
   qos gts queue queue-number cir committed-information-rate [ cbs committed-burst-size ]

4. Display GTS configuration information (optional; available in any view):
   display qos gts interface [ interface-type interface-number ]

 

Configuring GTS for all traffic

Follow these steps to configure GTS for all traffic:

1. Enter system view:
   system-view

2. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

3. Configure GTS on the interface or port group (required):
   qos gts any cir committed-information-rate [ cbs committed-burst-size ]

4. Display GTS configuration on the interface (optional; available in any view):
   display qos gts interface [ interface-type interface-number ]

 

GTS configuration example

Configure GTS on GigabitEthernet 1/0/1, shaping the packets when the sending rate exceeds 700 kbps.

# Enter system view.

<Sysname> system-view

# Enter interface view.

[Sysname] interface gigabitethernet 1/0/1

# Configure GTS parameters.

[Sysname-GigabitEthernet1/0/1] qos gts any cir 700

Displaying and Maintaining Traffic Policing, GTS and Line Rate

- Display the CAR information on the specified interface (available in any view):
  display qos car interface [ interface-type interface-number ]

- Display interface GTS configuration information (available in any view):
  display qos gts interface [ interface-type interface-number ]

 

Traffic Policing and GTS Configuration Examples

Traffic Policing and GTS Configuration Example

Network requirements

- GigabitEthernet 1/0/3 of Switch A is connected to GigabitEthernet 1/0/1 of Switch B.

- Server, Host A, and Host B can access the Internet through Switch A and Switch B.

Perform traffic control for packets received on GigabitEthernet 1/0/1 of Switch A from Server and Host A respectively as follows:

- Limit the rate of packets from Server to 560 kbps. When the traffic rate is below 560 kbps, the traffic is forwarded normally. When the traffic rate exceeds 560 kbps, violating packets are dropped.

- Limit the rate of packets from Host A to 350 kbps. When the traffic rate is below 350 kbps, the traffic is forwarded normally. When the traffic rate exceeds 350 kbps, violating packets are dropped.

Traffic control for packets forwarded by GigabitEthernet 1/0/1 of Switch B is as follows:

- Limit the receiving rate on GigabitEthernet 1/0/1 of Switch B to 700 kbps; violating packets are dropped.

Figure 4-5 Network diagram for traffic policing and GTS configuration

 

Configuration procedure

1)        Configure Switch A

# Configure GTS on GigabitEthernet 1/0/3 of Switch A, shaping the packets when the sending rate exceeds 700 kbps to decrease the packet loss ratio of GigabitEthernet 1/0/1 of Switch B.

<SwitchA> system-view

[SwitchA] interface gigabitethernet 1/0/3

[SwitchA-GigabitEthernet1/0/3] qos gts any cir 700

[SwitchA-GigabitEthernet1/0/3] quit

# Configure ACLs to permit the packets from Server and Host A.

[SwitchA] acl number 2001

[SwitchA-acl-basic-2001] rule permit source 1.1.1.1 0

[SwitchA-acl-basic-2001] quit

[SwitchA] acl number 2002

[SwitchA-acl-basic-2002] rule permit source 1.1.1.2 0

[SwitchA-acl-basic-2002] quit

# Configure CAR policies for different flows received on GigabitEthernet 1/0/1.

[SwitchA] interface gigabitethernet 1/0/1

[SwitchA-GigabitEthernet1/0/1] qos car inbound acl 2001 cir 560 red discard

[SwitchA-GigabitEthernet1/0/1] qos car inbound acl 2002 cir 350 red discard

2)        Configure Switch B

# Configure a CAR policy on GigabitEthernet 1/0/1 to limit the receiving rate to 700 kbps. Violating packets are dropped.

# Create ACL 2001 and configure it to match all packets.

<SwitchB> system-view

[SwitchB] acl number 2001

[SwitchB-acl-basic-2001] rule permit

# Configure traffic policing on GigabitEthernet 1/0/1.

[SwitchB] interface gigabitethernet 1/0/1

[SwitchB-GigabitEthernet1/0/1] qos car inbound acl 2001 cir 700 red discard

 


When configuring aggregation CAR, go to these sections for information you are interested in:

- Aggregation CAR Overview

- Configuring an Aggregation CAR Policy

- Referencing Aggregation CAR in a Traffic Behavior

- Displaying and Maintaining Aggregation CAR

Aggregation CAR Overview

Aggregation CAR means to use the same CAR for traffic on multiple ports. If aggregation CAR is enabled for multiple ports, the total traffic on these ports must conform to the traffic policing parameters set in the aggregation CAR.
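Conceptually, aggregation CAR is one token budget shared by several ports, as this toy model shows (illustrative only; the per-interval token refill is omitted and the names are ours):

```python
class AggregateCar:
    """One shared budget policing several ports together (illustrative model)."""
    def __init__(self, cir_kbits: int):
        self.budget = cir_kbits   # shared per-interval budget, in kbits

    def admit(self, port: str, kbits: int) -> bool:
        # Every port draws from the same bucket, so the TOTAL traffic
        # across all ports is what the CIR limits, not each port alone.
        if self.budget >= kbits:
            self.budget -= kbits
            return True
        return False
```

With a 200-kbit budget, traffic admitted on one port reduces what the other ports can send in the same interval, which is exactly the aggregation property described above.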

Configuring an Aggregation CAR Policy

Configuration Prerequisites

You need to decide on:

- Aggregation CAR parameters.

- Interfaces where the aggregation CAR policy is to be applied.

- Traffic match criteria; the ACL or CAR list must be predefined. Refer to ACL configuration in the Security Volume for how to define ACL rules.

Configuration Procedure

Follow these steps to configure aggregation CAR:

1. Enter system view:
   system-view

2. Configure an aggregation CAR policy (required):
   qos car car-name aggregative cir committed-information-rate [ cbs committed-burst-size [ ebs excess-burst-size ] ] [ pir peak-information-rate ] [ red action ]
   By default, cbs is 100000 bytes, ebs is 100000 bytes, and red packets are dropped.

3. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

4. Apply the aggregation CAR policy on the interface or port group (required):
   qos car inbound acl [ ipv6 ] acl-number name car-name

5. Display aggregation CAR policy configuration information on the specified interface or all interfaces (optional; available in any view):
   display qos car interface [ interface-type interface-number ]

6. Display information about the aggregation CAR (optional; available in any view):
   display qos car name [ car-name ]

 

Configuration Example

# Specify the aggregation CAR aggcar-1 to adopt the following parameters: CIR is 200, CBS is 2,000, and red packets are dropped. Apply the aggregation CAR aggcar-1 to the packets matching ACL 2000 in the inbound direction of GigabitEthernet 1/0/1.

<Sysname> system-view

[Sysname] qos car aggcar-1 aggregative cir 200 cbs 2000 red discard

[Sysname] interface gigabitethernet 1/0/1

[Sysname-GigabitEthernet1/0/1] qos car inbound acl 2000 name aggcar-1

Referencing Aggregation CAR in a Traffic Behavior

Configuration Prerequisites

You need to decide on:

- Aggregation CAR parameters.

- The traffic behavior to reference the aggregation CAR.

Configuration Procedure

Follow these steps to reference aggregation CAR in a traffic behavior:

1. Enter system view:
   system-view

2. Configure aggregation CAR parameters (required):
   qos car car-name aggregative cir committed-information-rate [ cbs committed-burst-size [ ebs excess-burst-size ] ] [ pir peak-information-rate ] [ red action ]
   By default, cbs is 100000 bytes, ebs is 100000 bytes, and red packets are dropped.

3. Enter traffic behavior view (required):
   traffic behavior behavior-name

4. Reference the aggregation CAR (required):
   car name car-name

5. Display traffic behavior configuration information (optional; available in any view):
   display traffic behavior user-defined [ behavior-name ]

6. Display information about the aggregation CAR (optional; available in any view):
   display qos car name [ car-name ]

 

Configuration Example

# Specify the aggregation CAR aggcar-1 to adopt the following parameters: CIR is 200, CBS is 2,000, and red packets are dropped. Reference aggregation CAR aggcar-1 in traffic behavior be1.

<Sysname> system-view

[Sysname] qos car aggcar-1 aggregative cir 200 cbs 2000 red discard

[Sysname] traffic behavior be1

[Sysname-behavior-be1] car name aggcar-1

Displaying and Maintaining Aggregation CAR

- Display the statistics for the specified aggregation CAR (available in any view):
  display qos car name [ car-name ]

- Clear the statistics for the specified aggregation CAR (available in user view):
  reset qos car name [ car-name ]

 

 


When configuring hardware congestion management, go to these sections for information you are interested in:

- Overview

- Congestion Management Configuration Methods

- Per-Queue Configuration Method

Overview

Congestion occurs on an interface when traffic arrives faster than the interface can transmit it. If there is not enough buffer capacity to store the excess packets, some of them are lost, which may cause the sending device to retransmit them after a timeout, aggravating the congestion.

The key to congestion management is defining a dispatching policy for resources to decide the order of forwarding packets when congestion occurs. Congestion management involves queue creation, traffic classification, packet enqueuing, and queue scheduling.

Congestion Management Policies

In general, congestion management adopts queuing technology. The system classifies traffic into queues using a queuing algorithm and then forwards the traffic using a scheduling algorithm. Each queuing algorithm addresses a particular network traffic problem and has significant impacts on bandwidth allocation, delay, and jitter.

Queue scheduling processes packets by their priorities, preferentially forwarding high-priority packets. In the following section, Strict Priority (SP) queuing, Weighted Round Robin (WRR) queuing, and SP+WRR queuing are introduced.

SP queuing

SP queuing is specially designed for mission-critical applications, which require preferential service to reduce the response delay when congestion occurs.

Figure 6-1 Schematic diagram for SP queuing

 

As shown in Figure 6-1, SP queuing classifies eight queues on a port into eight classes, numbered 7 to 0 in descending priority order.

SP queuing schedules the eight queues strictly in descending order of priority. It sends packets in the queue with the highest priority first. When the queue with the highest priority is empty, it sends packets in the queue with the second highest priority, and so on. Thus, you can assign mission-critical packets to the high-priority queue to ensure that they are always served first, and assign common service packets to the low-priority queues, to be transmitted when the high-priority queues are empty.

The disadvantage of SP queuing is that packets in the lower-priority queues cannot be transmitted while there are packets in the higher-priority queues. This may starve lower-priority traffic indefinitely.
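The SP dequeue rule amounts to scanning the queues from 7 down to 0 and serving the first non-empty one, as in this sketch (ours, not device code):

```python
def sp_dequeue(queues):
    """Strict priority: always serve the highest-numbered non-empty queue.
    `queues` maps queue IDs (0-7) to lists of packets (illustrative model)."""
    for qid in range(7, -1, -1):    # scan from the highest priority down
        if queues.get(qid):
            return qid, queues[qid].pop(0)
    return None                     # nothing to send
```

The starvation problem is visible directly in the loop: queue 0 is reached only after every higher queue is empty.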

WRR queuing

WRR queuing schedules all the queues in turn to ensure that every queue can be served for a certain time, as shown in Figure 6-2.

Figure 6-2 Schematic diagram for WRR queuing

 

Assume there are eight output queues on a port. WRR assigns each queue a weight value (represented by w7, w6, w5, w4, w3, w2, w1, or w0) to decide the proportion of resources assigned to the queue. On a 100 Mbps port, you can configure the weight values of WRR queuing to 50, 30, 10, 10, 50, 30, 10, and 10 (corresponding to w7, w6, w5, w4, w3, w2, w1, and w0 respectively). In this way, the queue with the lowest priority is assured of 5 Mbps of bandwidth at least, thus avoiding the disadvantage of SP queuing that packets in low-priority queues may fail to be served for a long time.

Another advantage of WRR queuing is that while the queues are scheduled in turn, the service time for each queue is not fixed, that is, if a queue is empty, the next queue will be scheduled immediately. This improves bandwidth resource use efficiency.

The S5810 series use group-based WRR queuing. You can assign the output queues to WRR scheduling group 1 and WRR scheduling group 2 as required. Note that the queues in the same group must be consecutive. The device preferentially schedules the group containing the highest queue ID. Suppose queue 0, queue 1, queue 2, and queue 3 are assigned to WRR scheduling group 1, and queue 4, queue 5, queue 6, and queue 7 are assigned to WRR scheduling group 2. WRR is performed in WRR scheduling group 2 preferentially. When no packet is to be sent in WRR scheduling group 2, WRR is performed in WRR scheduling group 1.
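A plain WRR round can be sketched as follows: each queue sends up to its weight's worth of packets per round, and empty queues are skipped immediately so no bandwidth is wasted. This is an illustrative model only; the device's group handling and weight semantics may differ in detail:

```python
def wrr_round(queues, weights):
    """One WRR round: each queue sends up to `weight` packets, in the order the
    weights are listed. Empty queues are skipped immediately (illustrative)."""
    sent = []
    for qid, weight in weights.items():
        q = queues.get(qid, [])
        for _ in range(weight):
            if not q:               # empty queue: move on at once
                break
            sent.append((qid, q.pop(0)))
    return sent
```

In a round with weights {1: 2, 0: 2}, queue 1 sends first; if it runs out of packets, queue 0 is served immediately rather than the scheduler idling.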

SP+WRR queue scheduling algorithm

You can implement SP+WRR queue scheduling on a port by assigning some queues on the port to the SP scheduling group and the others to the WRR scheduling group (that is, group 1). Packets in the SP scheduling group are scheduled preferentially. When the SP scheduling group is empty, packets in the WRR scheduling group are scheduled. Queues in the SP scheduling group are scheduled by SP. Queues in the WRR scheduling group are scheduled by WRR.

 

Congestion Management Configuration Methods

To achieve congestion management, you can perform per-queue configuration, that is, configure queue scheduling for each queue in interface view or port group view.

Complete the following tasks to achieve hardware-based congestion management:

Per-Queue Configuration Method:

- Configuring SP Queuing (optional)

- Configuring WRR Queuing (optional)

- Configuring SP+WRR Queues (optional)

 

Per-Queue Configuration Method

Configuring SP Queuing

Configuration procedure

Follow these steps to configure SP queuing:

1. Enter system view:
   system-view

2. Enter interface view or port group view (use either command):
   interface interface-type interface-number
   port-group manual port-group-name
   SP queuing is only applicable to Layer 2 interfaces. Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

3. Configure SP queuing (required):
   undo qos wrr
   The default queuing algorithm on an interface is WRR queuing.

 

Configuration example

1)        Network requirements

Configure GigabitEthernet 1/0/1 to adopt SP queuing.

2)        Configuration procedure

# Enter system view

<Sysname> system-view

# Configure GigabitEthernet1/0/1 to adopt SP queuing.

[Sysname] interface gigabitethernet 1/0/1

[Sysname-GigabitEthernet1/0/1] undo qos wrr

Configuring WRR Queuing

Configuration procedure

1)        Configuring basic WRR queuing

Follow these steps to configure basic WRR queuing:

•          Enter system view: system-view

•          Enter interface view (interface interface-type interface-number) or port group view (port-group manual port-group-name). Use either command. Settings in interface view take effect on the current interface; settings in port group view take effect on all ports in the port group.

•          Configure a basic WRR queue: qos wrr queue-id group group-id weight schedule-value. Required. The default queuing algorithm on an interface is WRR queuing.

•          Display WRR queuing configuration information on an interface: display qos wrr interface [ interface-type interface-number ]. Optional; available in any view.

 

When you use the WRR queue scheduling algorithm, make sure that queues in the same scheduling group are consecutive.

 

Configuration example

1)        Network requirements

•          Enable WRR queuing on the interface.

•          Assign queue 0, queue 1, queue 2, and queue 3 to WRR group 1, with weights 10, 20, 50, and 70, respectively.

•          Assign queue 4, queue 5, queue 6, and queue 7 to WRR group 2, with weights 20, 50, 70, and 100, respectively.

2)        Configuration procedure

# Enter system view.

<Sysname> system-view

# Configure WRR queuing on GigabitEthernet 1/0/1.

[Sysname] interface gigabitethernet 1/0/1

[Sysname-GigabitEthernet1/0/1] qos wrr 0 group 1 weight 10

[Sysname-GigabitEthernet1/0/1] qos wrr 1 group 1 weight 20

[Sysname-GigabitEthernet1/0/1] qos wrr 2 group 1 weight 50

[Sysname-GigabitEthernet1/0/1] qos wrr 3 group 1 weight 70

[Sysname-GigabitEthernet1/0/1] qos wrr 4 group 2 weight 20

[Sysname-GigabitEthernet1/0/1] qos wrr 5 group 2 weight 50

[Sysname-GigabitEthernet1/0/1] qos wrr 6 group 2 weight 70

[Sysname-GigabitEthernet1/0/1] qos wrr 7 group 2 weight 100

Configuring SP+WRR Queues

Configuration procedure

Follow these steps to configure SP+WRR queues:

•          Enter system view: system-view

•          Enter interface view (interface interface-type interface-number) or port group view (port-group manual port-group-name). Use either command. The configuration performed in interface view applies to the current port only; the configuration performed in port group view applies to all the ports in the port group.

•          Configure SP queue scheduling: qos wrr queue-id group sp. Required.

•          Configure WRR queue scheduling: qos wrr queue-id group group-id weight schedule-value. Required.

 

When you use the SP+WRR queue scheduling algorithm, make sure that queues in the same scheduling group are consecutive.

 

Configuration example

1)        Network requirements

•          Configure GigabitEthernet1/0/1 to use the SP+WRR queue scheduling algorithm.

•          Assign queue 0 and queue 1 on GigabitEthernet1/0/1 to the SP queue scheduling group.

•          Assign queue 2, queue 3, and queue 4 on GigabitEthernet1/0/1 to WRR queue scheduling group 1, with weights 20, 70, and 100, respectively.

•          Assign queue 5, queue 6, and queue 7 on GigabitEthernet1/0/1 to WRR queue scheduling group 2, with weights 10, 50, and 80, respectively.

2)        Configuration procedure

# Enter system view.

<Sysname> system-view

# Enable the SP+WRR queue scheduling algorithm on GigabitEthernet1/0/1.

[Sysname] interface gigabitethernet 1/0/1

[Sysname-GigabitEthernet1/0/1] qos wrr 0 group sp

[Sysname-GigabitEthernet1/0/1] qos wrr 1 group sp

[Sysname-GigabitEthernet1/0/1] qos wrr 2 group 1 weight 20

[Sysname-GigabitEthernet1/0/1] qos wrr 3 group 1 weight 70

[Sysname-GigabitEthernet1/0/1] qos wrr 4 group 1 weight 100

[Sysname-GigabitEthernet1/0/1] qos wrr 5 group 2 weight 10

[Sysname-GigabitEthernet1/0/1] qos wrr 6 group 2 weight 50

[Sysname-GigabitEthernet1/0/1] qos wrr 7 group 2 weight 80

 


When configuring traffic mirroring, go to these sections for information you are interested in:

•          Traffic Mirroring Overview

•          Configuring Traffic Mirroring

•          Displaying and Maintaining Traffic Mirroring

•          Traffic Mirroring Configuration Examples

Traffic Mirroring Overview

Traffic mirroring refers to the process of copying the specified packets to the specified destination for packet analysis and monitoring.

You can configure mirroring traffic to an interface, to the CPU, or to a VLAN.

•          Mirroring traffic to an interface: copies the matching packets on an interface to a destination interface.

•          Mirroring traffic to the CPU: copies the matching packets on an interface to a CPU (the CPU of the board where the traffic mirroring-enabled interface resides).

•          Mirroring traffic to a VLAN: copies the matching packets on an interface to a VLAN. In this case, all the ports in the VLAN can receive the mirrored packets. Even if the VLAN does not exist, you can pre-define the action of mirroring traffic to the VLAN. After the VLAN is created and some ports join the VLAN, the action of mirroring traffic to the VLAN takes effect automatically.
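Conceptually, traffic mirroring amounts to "classify, then copy": a packet that matches the classifier is still forwarded normally, and a duplicate is handed to the configured destination. The Python sketch below illustrates this; the class, field names, and the classifier predicate are all hypothetical, not device APIs.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MirrorBehavior:
    """Illustrative model of a traffic behavior with a mirroring action."""
    match: Callable[[dict], bool]   # traffic classifier, e.g. an ACL check
    destination: str                # "interface", "cpu", or "vlan"
    copies: list = field(default_factory=list)  # stands in for the destination

    def process(self, packet):
        if self.match(packet):
            self.copies.append(dict(packet))  # mirror a copy to the destination
        return packet                         # the original is forwarded as usual

# Hypothetical rule mirroring traffic sourced from 192.168.0.1
behavior = MirrorBehavior(match=lambda p: p["src"] == "192.168.0.1",
                          destination="interface")
behavior.process({"src": "192.168.0.1", "dst": "10.0.0.5"})
behavior.process({"src": "192.168.0.2", "dst": "10.0.0.5"})
print(len(behavior.copies))  # 1: only the matching packet was mirrored
```

Note that the original packet is always returned: mirroring observes traffic without affecting forwarding.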

 

On the S5810 series Ethernet switches, traffic can be mirrored only to ports and to the CPU.

 

Configuring Traffic Mirroring

To configure traffic mirroring, you must enter the view of an existing traffic behavior.

 

In a traffic behavior, the action of mirroring traffic to an interface and the action of mirroring traffic to the CPU are mutually exclusive.

 

Mirroring Traffic to an Interface

Follow these steps to mirror traffic to an interface:

•          Enter system view: system-view

•          Enter traffic behavior view: traffic behavior behavior-name

•          Specify the destination interface for traffic mirroring: mirror-to interface interface-type interface-number. Required.

 

Mirroring Traffic to the CPU

Follow these steps to mirror traffic to the CPU:

•          Enter system view: system-view

•          Enter traffic behavior view: traffic behavior behavior-name

•          Mirror traffic to the CPU: mirror-to cpu. Required.

 

Displaying and Maintaining Traffic Mirroring

•          Display traffic behavior configuration information: display traffic behavior user-defined [ behavior-name ]. Available in any view.

•          Display QoS policy configuration information: display qos policy user-defined [ policy-name [ classifier tcl-name ] ]. Available in any view.

 

Traffic Mirroring Configuration Examples

Example for Mirroring Traffic to an Interface

Network requirements

The user's network is as described below:

•          Host A (with the IP address 192.168.0.1) and Host B are connected to GigabitEthernet1/0/1 of the switch.

•          The data monitoring device is connected to GigabitEthernet1/0/2 of the switch.

Configure the switch so that packets sent by Host A can be monitored and analyzed on the data monitoring device.

Figure 7-1 Network diagram for configuring traffic mirroring to a port

 

Configuration procedure

Configure Switch:

# Enter system view.

<Sysname> system-view

# Configure basic IPv4 ACL 2000 to match packets with the source IP address 192.168.0.1.

[Sysname] acl number 2000

[Sysname-acl-basic-2000] rule permit source 192.168.0.1 0

[Sysname-acl-basic-2000] quit

# Configure a traffic classification rule to use ACL 2000 for traffic classification.

[Sysname] traffic classifier 1

[Sysname-classifier-1] if-match acl 2000

[Sysname-classifier-1] quit

# Configure a traffic behavior and define the action of mirroring traffic to GigabitEthernet1/0/2 in the traffic behavior.

[Sysname] traffic behavior 1

[Sysname-behavior-1] mirror-to interface GigabitEthernet 1/0/2

[Sysname-behavior-1] quit

# Configure a QoS policy and associate traffic behavior 1 with classification rule 1.

[Sysname] qos policy 1

[Sysname-policy-1] classifier 1 behavior 1

[Sysname-policy-1] quit

# Apply the policy in the inbound direction of GigabitEthernet1/0/1.

[Sysname] interface GigabitEthernet 1/0/1

[Sysname-GigabitEthernet1/0/1] qos apply policy 1 inbound

After the configurations, you can monitor all packets sent from Host A on the data monitoring device.



When configuring port buffers, go to these sections for information you are interested in:

l          Port Buffer Overview

l          Configuring the Shared Buffer

l          Displaying and Maintaining Port Buffer

l          Burst Configuration Example

Port Buffer Overview

The S5810 supports transmit and receive buffering on ports to eliminate packet loss when traffic arrives faster than the physical medium can carry it or while a forwarding decision is being made.

With the buffering implementation of the switch, the buffer memory is divided into a shared buffer area and a dedicated buffer area. The shared buffer area is shared by all ports and is user configurable. The remaining buffer memory after the shared buffer area is deducted is the dedicated buffer area, which is assigned evenly among the ports. The dedicated buffer memory of a port cannot be shared by any other ports and is used prior to the shared buffer. A port uses the shared buffer only when its dedicated buffer is inadequate.
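The dedicated-first, shared-second order can be sketched as a simple admission check. This is an illustrative model only: the block granularity, and whether a packet may straddle the dedicated and shared areas, are assumptions rather than documented S5810 behavior.

```python
def admit(blocks_needed, port, dedicated_free, shared_free):
    """Try to buffer a packet of `blocks_needed` blocks arriving on `port`.

    The port's dedicated buffer is consumed first; the shared buffer is
    used only for whatever the dedicated area cannot hold.  Returns True
    if the packet is buffered, False if it must be dropped.
    """
    if dedicated_free[port] >= blocks_needed:
        dedicated_free[port] -= blocks_needed
        return True
    spill = blocks_needed - dedicated_free[port]  # overflow into shared area
    if shared_free[0] >= spill:
        dedicated_free[port] = 0
        shared_free[0] -= spill
        return True
    return False  # neither area has room: the packet is dropped

dedicated_free = {"GE1/0/1": 3}   # hypothetical per-port dedicated blocks
shared_free = [4]                 # shared pool, boxed so admit() can update it
print(admit(2, "GE1/0/1", dedicated_free, shared_free))  # True (dedicated only)
print(admit(2, "GE1/0/1", dedicated_free, shared_free))  # True (1 dedicated + 1 shared)
print(admit(5, "GE1/0/1", dedicated_free, shared_free))  # False (needs 5 shared, 3 left)
```

Because each port draws on the shared pool only after exhausting its own dedicated area, a burst on one port cannot starve another port of its dedicated buffer, only of the shared headroom.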

Two approaches are available for you to set the shared buffer:

•          Configuring the Burst Function to Automatically Set the Shared Buffer

•          Configuring the Shared Buffer Manually

When manually setting the shared buffer area, take the traffic pattern in your network into consideration. If transient large traffic bursts may occur on some ports, expand the shared buffer to accommodate the bursts to prevent traffic loss. If transient small traffic bursts often occur on the ports, decrease the shared buffer so that each port can get more dedicated buffer memory.

The following are two scenarios where a larger shared buffer may be preferred to improve forwarding performance:

•          Broadcast or multicast traffic is dense, and large transient traffic bursts are likely to occur.

•          A low-speed link is forwarding high-speed traffic, or an interface is forwarding traffic aggregated from multiple interfaces that work at the same rate as itself.

Configuring the Shared Buffer

Configuring the Burst Function to Automatically Set the Shared Buffer

With the burst function configured, the switch automatically configures the shared buffer size, and assigns the remaining buffer memory (the dedicated buffer) evenly among all ports.

Follow these steps to configure the burst function:

•          Enter system view: system-view

•          Enable the burst function: burst-mode enable. Required. Disabled by default.

 

Configuring the Shared Buffer Manually

The shared buffer is assigned to incoming traffic and outgoing traffic separately. The area that buffers incoming traffic is called the shared receive buffer, and the area that buffers outgoing traffic is called the shared transmit buffer. Tune their sizes depending on the actual traffic situation.

Follow these steps to configure the shared buffer manually:

•          Enter system view: system-view

•          Set (in blocks) the shared transmit buffer or shared receive buffer: buffer-manage { ingress | egress } share-size size-value. Optional. By default, there is no shared receive buffer and the size of the shared transmit buffer is 1776 blocks.

 

The manual shared buffer settings override the buffer settings automatically configured with the burst function (if enabled). If you perform the undo buffer-manage share-size command while the burst function is enabled, the buffer settings configured with the burst function rather than the default port buffer settings take effect.

 

Displaying and Maintaining Port Buffer

•          Display the shared buffer configuration: display buffer-manage configuration. Available in any view.

 

Burst Configuration Example

Network Requirements

In a customer network shown in Figure 8-1,

•          A server connects to the switch through a 1000 Mbps Ethernet interface. The server sends dense broadcast or multicast traffic to the hosts irregularly.

•          Each host connects to the switch through a 100 Mbps network adapter.

Configure the switch to process dense traffic from the server to guarantee that packets can reach the hosts.

Figure 8-1 Network diagram for burst configuration

 

Configuration Procedure

# Enter system view.

<Switch> system-view

# Enable the burst function.

[Switch] burst-mode enable
