11-Network Management and Monitoring Configuration Guide


Contents

Using ping, tracert, and system debugging· 1

Ping· 1

Using a ping command to test network connectivity· 1

Ping example· 1

Tracert 3

Prerequisites· 3

Using a tracert command to identify failed or all nodes in a path· 4

Tracert example· 4

System debugging· 5

Debugging information control switches· 5

Debugging a feature module· 6

Configuring NTP· 7

Overview·· 7

How NTP works· 7

NTP architecture· 8

Association modes· 9

NTP security· 10

NTP for MPLS L3VPNs· 11

Protocols and standards· 12

Configuration restrictions and guidelines· 12

Configuration task list 12

Enabling the NTP service· 13

Configuring NTP association modes· 13

Configuring NTP in client/server mode· 13

Configuring NTP in symmetric active/passive mode· 13

Configuring NTP in broadcast mode· 14

Configuring NTP in multicast mode· 15

Configuring access control rights· 15

Configuring NTP authentication· 16

Configuring NTP authentication in client/server mode· 16

Configuring NTP authentication in symmetric active/passive mode· 17

Configuring NTP authentication in broadcast mode· 19

Configuring NTP authentication in multicast mode· 21

Configuring NTP optional parameters· 23

Specifying the source interface for NTP messages· 23

Disabling an interface from processing NTP messages· 24

Configuring the maximum number of dynamic associations· 24

Setting a DSCP value for NTP packets· 25

Configuring the local clock as a reference source· 25

Displaying and maintaining NTP· 25

NTP configuration examples· 26

NTP client/server mode configuration example· 26

NTP symmetric active/passive mode configuration example· 27

NTP broadcast mode configuration example· 28

NTP multicast mode configuration example· 30

Configuration example for NTP client/server mode with authentication· 33

Configuration example for NTP broadcast mode with authentication· 35

Configuration example for MPLS VPN time synchronization in client/server mode· 37

Configuration example for MPLS VPN time synchronization in symmetric active/passive mode· 39

Configuring SNTP· 41

Configuration restrictions and guidelines· 41

Configuration task list 41

Enabling the SNTP service· 41

Specifying an NTP server for the device· 41

Configuring SNTP authentication· 42

Displaying and maintaining SNTP· 43

SNTP configuration example· 43

Configuring the information center 45

Overview·· 45

Log types· 45

Log levels· 45

Log destinations· 46

Default output rules for logs· 46

Default output rules for diagnostic logs· 46

Default output rules for hidden logs· 46

Default output rules for trace logs· 47

Log formats· 47

FIPS compliance· 49

Information center configuration task list 49

Outputting logs to the console· 50

Outputting logs to the monitor terminal 50

Outputting logs to a log host 51

Outputting logs to the log buffer 51

Saving logs to the log file· 52

Saving diagnostic logs to the diagnostic log file· 53

Configuring the maximum size of the trace log file· 54

Enabling synchronous information output 54

Enabling duplicate log suppression· 54

Disabling an interface from generating link up or link down logs· 55

Setting the minimum storage period for logs· 55

Displaying and maintaining information center 55

Information center configuration examples· 56

Configuration example for outputting logs to the console· 56

Configuration example for outputting logs to a UNIX log host 57

Configuration example for outputting logs to a Linux log host 58

Configuring SNMP· 60

Overview·· 60

FIPS compliance· 60

SNMP framework· 60

MIB and view-based MIB access control 60

SNMP operations· 61

Protocol versions· 61

Access control modes· 61

Configuring SNMP basic parameters· 62

Configuring SNMPv1 or SNMPv2c basic parameters· 62

Configuring SNMPv3 basic parameters· 64

Configuring SNMP logging· 68

Configuring SNMP notifications· 68

Enabling SNMP notifications· 68

Configuring the SNMP agent to send notifications to a host 69

Displaying the SNMP settings· 70

SNMPv1/SNMPv2c configuration example· 71

SNMPv3 in VACM mode configuration example· 72

SNMPv3 in RBAC mode configuration example· 74

Configuring samplers· 78

Creating a sampler 78

Displaying and maintaining a sampler 78

Configuring port mirroring· 79

Overview·· 79

Terminology· 79

Port mirroring classification and implementation· 80

Configuring local port mirroring· 81

Local port mirroring configuration task list 82

Creating a local mirroring group· 82

Configuring source ports for the local mirroring group· 82

Configuring source CPUs for the local mirroring group· 83

Configuring the monitor port for the local mirroring group· 83

Configure local port mirroring with multiple monitor ports· 84

Configuring Layer 2 remote port mirroring· 85

Layer 2 remote port mirroring configuration task list 86

Configuring a remote destination group on the destination device· 86

Configuring a remote source group on the source device· 88

Displaying and maintaining port mirroring· 90

Port mirroring configuration examples· 90

Local port mirroring configuration example (in source port mode) 90

Local port mirroring configuration example (in source CPU mode) 91

Local port mirroring with multiple monitor ports configuration example· 92

Layer 2 remote port mirroring configuration example· 93

Configuring flow mirroring· 97

Overview·· 97

Flow mirroring configuration task list 97

Configuring match criteria· 97

Configuring a traffic behavior 98

Configuring a QoS policy· 98

Applying a QoS policy· 98

Applying a QoS policy to an interface· 98

Applying a QoS policy to a VLAN·· 99

Applying a QoS policy globally· 99

Flow mirroring configuration example· 99

Network requirements· 99

Configuration procedure· 100

Verifying the configuration· 101

Configuring sFlow· 102

Protocols and standards· 102

sFlow configuration task list 102

Configuring the sFlow agent and sFlow collector information· 103

Configuring flow sampling· 103

Configuring counter sampling· 104

Displaying and maintaining sFlow·· 104

sFlow configuration example· 104

Network requirements· 104

Configuration procedure· 105

Verifying the configuration· 105

Troubleshooting sFlow configuration· 106

The remote sFlow collector cannot receive sFlow packets· 106

Configuring EAA· 107

Overview·· 107

EAA framework· 107

Elements in a monitor policy· 108

EAA environment variables· 109

Feature and software version compatibility· 110

Configuring a user-defined EAA environment variable· 110

Configuring a monitor policy· 111

Configuration restrictions and guidelines· 111

Configuring a monitor policy from the CLI 111

Configuring a monitor policy by using Tcl 113

Suspending monitor policies· 114

Displaying and maintaining EAA settings· 115

EAA configuration examples· 115

CLI event monitor policy configuration example· 115

Track event monitor policy configuration example· 116

CLI-defined policy with EAA environment variables configuration example· 118

Tcl-defined policy configuration example· 119

Configuring NQA· 121

Overview·· 121

NQA operation· 121

Collaboration· 122

Threshold monitoring· 122

NQA configuration task list 123

Configuring the NQA server 123

Enabling the NQA client 124

Configuring NQA operations on the NQA client 124

NQA operation configuration task list 124

Configuring the ICMP echo operation· 125

Configuring the ICMP jitter operation· 126

Configuring the DHCP operation· 127

Configuring the DNS operation· 127

Configuring the FTP operation· 128

Configuring the HTTP operation· 129

Configuring the UDP jitter operation· 130

Configuring the SNMP operation· 131

Configuring the TCP operation· 132

Configuring the UDP echo operation· 133

Configuring the UDP tracert operation· 133

Configuring the voice operation· 135

Configuring the DLSw operation· 137

Configuring the path jitter operation· 137

Configuring optional parameters for the NQA operation· 138

Configuring the collaboration feature· 139

Configuring threshold monitoring· 140

Configuring the NQA statistics collection feature· 143

Configuring the saving of NQA history records· 143

Scheduling the NQA operation on the NQA client 144

Configuring NQA templates on the NQA client 144

NQA template configuration task list 145

Configuring the ICMP template· 145

Configuring the DNS template· 146

Configuring the TCP template· 147

Configuring the TCP half open template· 148

Configuring the UDP template· 148

Configuring the HTTP template· 149

Configuring the HTTPS template· 151

Configuring the FTP template· 152

Configuring the SSL template· 153

Configuring optional parameters for the NQA template· 154

Displaying and maintaining NQA· 154

NQA configuration examples· 155

ICMP echo operation configuration example· 155

ICMP jitter operation configuration example· 157

DHCP operation configuration example· 159

DNS operation configuration example· 160

FTP operation configuration example· 161

HTTP operation configuration example· 162

UDP jitter operation configuration example· 163

SNMP operation configuration example· 166

TCP operation configuration example· 167

UDP echo operation configuration example· 168

UDP tracert operation configuration example· 169

Voice operation configuration example· 171

DLSw operation configuration example· 173

Path jitter operation configuration example· 175

NQA collaboration configuration example (on routers) 176

NQA collaboration configuration example· 178

ICMP template configuration example· 181

DNS template configuration example· 182

TCP template configuration example· 182

TCP half open template configuration example· 183

UDP template configuration example· 184

HTTP template configuration example· 185

HTTPS template configuration example· 185

FTP template configuration example· 186

SSL template configuration example· 187

Configuring NETCONF· 188

Overview·· 188

NETCONF structure· 188

NETCONF message format 189

How to use NETCONF· 190

Protocols and standards· 190

FIPS compliance· 190

NETCONF configuration task list 191

Enabling NETCONF over SOAP· 191

Enabling NETCONF over SSH·· 191

Enabling NETCONF logging· 192

Establishing a NETCONF session· 192

Setting the NETCONF session idle timeout time· 192

Entering XML view·· 193

Exchanging capabilities· 193

Subscribing to event notifications· 193

Subscription procedure· 194

Example for subscribing to event notifications· 195

Locking/unlocking the configuration· 196

Locking the configuration· 196

Unlocking the configuration· 196

Example for locking the configuration· 197

Performing service operations· 198

Performing the get/get-bulk operation· 198

Performing the get-config/get-bulk-config operation· 199

Performing the edit-config operation· 200

All-module configuration data retrieval example· 201

Syslog configuration data retrieval example· 202

Example for retrieving a data entry for the interface table· 203

Example for changing the value of a parameter 204

Saving, rolling back, and loading the configuration· 205

Saving the configuration· 205

Rolling back the configuration based on a configuration file· 206

Rolling back the configuration based on a rollback point 206

Loading the configuration· 210

Example for saving the configuration· 211

Filtering data· 211

Table-based filtering· 212

Column-based filtering· 212

Example for filtering data with regular expression match· 214

Example for filtering data by conditional match· 215

Performing CLI operations through NETCONF· 217

Configuration procedure· 217

CLI operation example· 217

Retrieving NETCONF session information· 218

Terminating another NETCONF session· 220

Configuration example· 220

Returning to the CLI 221

Appendix· 222

Appendix A Supported NETCONF operations· 222

Index· 232

 


Using ping, tracert, and system debugging

This chapter describes how to use the ping and tracert utilities and the system debugging functions.

Ping

Use the ping utility to determine if a specific address is reachable.

Ping sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving the requests, the destination device responds with ICMP echo replies (ECHO-REPLY) to the source device. The source device outputs statistics about the ping operation, including the number of packets sent, number of echo replies received, and the round-trip time. You can measure the network performance by analyzing these statistics.
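
The summary that ping prints can be reproduced from the individual round-trip times. The following Python sketch (an illustration with names of our own choosing, not part of the switch software) computes the same loss percentage and min/avg/max/std-dev fields:

```python
import math

def ping_stats(rtts_ms, sent):
    """Summarize round-trip times the way the ping output does."""
    received = len(rtts_ms)
    loss_pct = (sent - received) / sent * 100
    avg = sum(rtts_ms) / received
    # Population standard deviation, matching the std-dev field.
    std = math.sqrt(sum((r - avg) ** 2 for r in rtts_ms) / received)
    return {
        "loss_pct": loss_pct,
        "min": min(rtts_ms),
        "avg": round(avg, 3),
        "max": max(rtts_ms),
        "std_dev": round(std, 3),
    }
```

Feeding in the five RTT samples from the transcript in this section reproduces 0.0% packet loss and round-trip min/avg/max/std-dev = 1.963/2.028/2.137/0.062 ms.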

Using a ping command to test network connectivity

Execute ping commands in any view.

 

Task

Command

Determine if a specified address in an IP network is reachable.

When you configure the ping command for a low-speed network, set a larger value for the timeout timer (indicated by the -t keyword in the command).

ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-type interface-number | -m interval | -n | -p pad | -q | -r | -s packet-size | -t timeout | -tos tos | -v | -vpn-instance vpn-instance-name ] * host

 

Ping example

Network requirements

As shown in Figure 1, determine if Device A and Device C can reach each other. If they can reach each other, get detailed information about routes from Device A to Device C.

Figure 1 Network diagram

 

Configuration procedure

# Use the ping command on Device A to test connectivity to Device C.

<DeviceA> ping 1.1.2.2

Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break

56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms

56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms

56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms

56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms

56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms

 

--- Ping statistics for 1.1.2.2 ---

5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss

round-trip min/avg/max/std-dev = 1.963/2.028/2.137/0.062 ms

The output shows that:

·          Device A sends five ICMP echo requests to Device C and receives five echo replies.

·          No ICMP packet is lost.

·          The route is reachable.

# Get detailed information about routes from Device A to Device C.

<DeviceA> ping -r 1.1.2.2

Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break

56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=4.685 ms

RR:      1.1.2.1

         1.1.2.2

         1.1.1.2

         1.1.1.1

56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=4.834 ms  (same route)

56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=4.770 ms  (same route)

56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=4.812 ms  (same route)

56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=4.704 ms  (same route)

 

--- Ping statistics for 1.1.2.2 ---

5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss

round-trip min/avg/max/std-dev = 4.685/4.761/4.834/0.058 ms

The test procedure of ping -r is as shown in Figure 1:

1.        The source device (Device A) sends an ICMP echo request to the destination device (Device C) with the RR option blank.

2.        The intermediate device (Device B) adds the IP address of its outbound interface (1.1.2.1) to the RR option of the ICMP echo request, and forwards the packet.

3.        Upon receiving the request, the destination device copies the RR option in the request and adds the IP address of its outbound interface (1.1.2.2) to the RR option. Then the destination device sends an ICMP echo reply.

4.        The intermediate device adds the IP address of its outbound interface (1.1.1.2) to the RR option in the ICMP echo reply, and then forwards the reply.

5.        Upon receiving the reply, the source device adds the IP address of its inbound interface (1.1.1.1) to the RR option. The detailed information of routes from Device A to Device C is formatted as: 1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.
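
The RR accumulation in steps 1 through 5 can be modeled in a few lines. This Python sketch (a simplification we supply for illustration; real record-route processing happens inside the IP option of each packet) shows how the four addresses end up in the order the output displays:

```python
def record_route_ping(forward_hops, dest_out, reply_hops, src_in):
    """Simulate RR option accumulation for ping -r.

    forward_hops: outbound-interface IPs added by intermediate
                  devices on the request path.
    dest_out:     outbound-interface IP added by the destination.
    reply_hops:   outbound-interface IPs added by intermediate
                  devices on the reply path.
    src_in:       inbound-interface IP added by the source on receipt.
    """
    rr = []
    rr.extend(forward_hops)   # step 2: request passes Device B
    rr.append(dest_out)       # step 3: destination adds its address
    rr.extend(reply_hops)     # step 4: reply passes Device B
    rr.append(src_in)         # step 5: source adds its address
    return rr
```

With the addresses in this example, the function returns ["1.1.2.1", "1.1.2.2", "1.1.1.2", "1.1.1.1"], the order shown in the RR lines of the ping -r output.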

Tracert

Tracert (also called Traceroute) enables retrieval of the IP addresses of Layer 3 devices in the path to a specific destination. In the event of network failure, use tracert to test network connectivity and identify failed nodes.

Figure 2 Tracert operation

 

Tracert uses received ICMP error messages to get the IP addresses of devices. Tracert works as shown in Figure 2:

1.        The source device sends a UDP packet with a TTL value of 1 to the destination device. The destination UDP port is not used by any application on the destination device.

2.        The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a TTL-expired ICMP error message to the source, with its IP address (1.1.1.2) encapsulated. This way, the source device can get the address of the first Layer 3 device (1.1.1.2).

3.        The source device sends a packet with a TTL value of 2 to the destination device.

4.        The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the source device the address of the second Layer 3 device (1.1.2.2).

5.        This process continues until a packet sent by the source device reaches the ultimate destination device. Because no application uses the destination port specified in the packet, the destination device responds with a port-unreachable ICMP message to the source device, with its IP address encapsulated. This way, the source device gets the IP address of the destination device (1.1.3.2).

6.        After receiving the port-unreachable ICMP message, the source device determines that the packet has reached the destination device, and that the path to the destination device is 1.1.1.2 to 1.1.2.2 to 1.1.3.2.
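
The TTL-probing loop above can be sketched as follows. This Python model (hypothetical names of our own; the real implementation sends UDP probes and parses the returned ICMP errors) returns one address per hop:

```python
def tracert(hops, destination):
    """Model the tracert probe loop.

    hops: ordered list of Layer 3 device addresses on the path.
    Each probe with TTL = n elicits a TTL-expired ICMP error from
    hop n; the probe that reaches the destination elicits a
    port-unreachable ICMP error instead, which ends the trace.
    """
    discovered = []
    ttl = 1
    while True:
        if ttl <= len(hops):
            # TTL expires at hop `ttl`; that device reports its address.
            discovered.append(hops[ttl - 1])
        else:
            # Probe reaches the destination; the unused UDP port
            # triggers a port-unreachable message with its IP.
            discovered.append(destination)
            break
        ttl += 1
    return discovered
```

For the topology in Figure 2, tracert(["1.1.1.2", "1.1.2.2"], "1.1.3.2") yields the three hop addresses in path order.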

Prerequisites

Before you use a tracert command, perform the tasks in this section.

·          Enable sending of ICMP timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are H3C devices, execute the ip ttl-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.

·          Enable sending of ICMP destination unreachable packets on the destination device. If the destination device is an H3C device, execute the ip unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.

Using a tracert command to identify failed or all nodes in a path

Execute tracert commands in any view.

 

Task

Command

 

Display the routes from source to destination.

tracert [ -a source-ip | -f first-ttl | -m max-ttl | -p port | -q packet-number | -t tos | -vpn-instance vpn-instance-name | -w timeout ] * host

 

Tracert example

Network requirements

As shown in Figure 3, Device A failed to Telnet to Device C.

Test the network connectivity between Device A and Device C. If they cannot reach each other, locate the failed nodes in the network.

Figure 3 Network diagram

 

Configuration procedure

1.        Configure the IP addresses for devices as shown in Figure 3.

2.        Configure a static route on Device A.

<DeviceA> system-view

[DeviceA] ip route-static 0.0.0.0 0.0.0.0 1.1.1.2

[DeviceA] quit

3.        Use the ping command to test connectivity between Device A and Device C.

<DeviceA> ping 1.1.2.2

Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break

Request time out

Request time out

Request time out

Request time out

Request time out

 

--- Ping statistics for 1.1.2.2 ---

5 packet(s) transmitted, 0 packet(s) received, 100.0% packet loss

The output shows that Device A and Device C cannot reach each other.

4.        Use the tracert command to identify failed nodes:

# Enable sending of ICMP timeout packets on Device B.

<DeviceB> system-view

[DeviceB] ip ttl-expires enable

# Enable sending of ICMP destination unreachable packets on Device C.

<DeviceC> system-view

[DeviceC] ip unreachables enable

# Execute the tracert command on Device A.

<DeviceA> tracert 1.1.2.2

traceroute to 1.1.2.2(1.1.2.2) 30 hops at most,40 bytes each packet, press CTRL_C to break

 1  1.1.1.2 1 ms 2 ms 1 ms

 2  * * *

 3  * * *

 4  * * *

 5

<DeviceA>

The output shows that:

·          Device A can reach Device B but cannot reach Device C.

·          An error has occurred on the connection between Device B and Device C.

5.        Use the debugging ip icmp command on Device A and Device C to verify that they can send and receive the specific ICMP packets.

Or use the display ip routing-table command to verify that there is a route from Device A to Device C.

System debugging

The device supports debugging for the majority of protocols and features and provides debugging information to help users diagnose errors.

Debugging information control switches

The following switches control the display of debugging information:

·          Module debugging switch—Controls whether to generate the module-specific debugging information.

·          Screen output switch—Controls whether to display the debugging information on a certain screen. Use terminal monitor and terminal logging level commands to turn on the screen output switch. For more information about these two commands, see Network Management and Monitoring Command Reference.

As shown in Figure 4, assume that the device can provide debugging for the three modules 1, 2, and 3. The debugging information can be output on a terminal only when both the module debugging switch and the screen output switch are turned on.

Debugging information is typically displayed on a console. You can also send debugging information to other destinations. For more information, see "Configuring the information center."

Figure 4 Relationship between the module and screen output switch

 

Debugging a feature module

Output of debugging commands is memory intensive. To guarantee system performance, enable debugging only for modules that are in an exceptional condition. When debugging is complete, use the undo debugging all command to disable all the debugging functions.

To debug a feature module:

 

Step

Command

Remarks

1.       Enable debugging for a specified module in user view.

debugging { all [ timeout time ] | module-name [ option ] }

By default, all debugging functions are disabled.

2.       (Optional.) Display the enabled debugging in any view.

display debugging [ module-name ]

N/A

 

 


Configuring NTP

Synchronize your device with a trusted time source by using the Network Time Protocol (NTP) or by changing the system time before you run the device on a live network. Various tasks, including network management, charging, auditing, and distributed computing, depend on an accurate system time setting, because the timestamps of system messages and logs use the system time.

Overview

NTP is typically used in large networks to dynamically synchronize time among network devices. It guarantees higher clock accuracy than manual system clock setting. In a small network that does not require high clock accuracy, you can keep time synchronized among devices by changing their system clocks one by one.

NTP runs over UDP and uses UDP port 123.

How NTP works

Figure 5 shows how NTP synchronizes the system time between two devices, in this example, Device A and Device B. Assume that:

·          Prior to the time synchronization, the time of Device A is set to 10:00:00 am and that of Device B is set to 11:00:00 am.

·          Device B is used as the NTP server. Device A is to be synchronized to Device B.

·          It takes 1 second for an NTP message to travel from Device A to Device B, and from Device B to Device A.

·          It takes 1 second for Device B to process the NTP message.

Figure 5 Basic work flow

 

The synchronization process is as follows:

1.        Device A sends Device B an NTP message, which is timestamped when it leaves Device A. The time stamp is 10:00:00 am (T1).

2.        When this NTP message arrives at Device B, Device B adds a timestamp showing the time when the message arrived at Device B. The timestamp is 11:00:01 am (T2).

3.        When the NTP message leaves Device B, Device B adds a timestamp showing the time when the message left Device B. The timestamp is 11:00:02 am (T3).

4.        When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).

Up to now, Device A can calculate the following parameters based on the timestamps:

·          The roundtrip delay of the NTP message: Delay = (T4 - T1) - (T3 - T2) = 2 seconds.

·          Time difference between Device A and Device B: Offset = ((T2 - T1) + (T3 - T4)) / 2 = 1 hour.

Based on these parameters, Device A can be synchronized to Device B.
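
The two formulas can be checked numerically. The helper below (our own illustration, with the timestamps from Figure 5 expressed as seconds since midnight) reproduces the 2-second delay and 1-hour offset:

```python
def ntp_delay_offset(t1, t2, t3, t4):
    """Compute roundtrip delay and clock offset from NTP timestamps.

    t1: client transmit time   t2: server receive time
    t3: server transmit time   t4: client receive time
    """
    delay = (t4 - t1) - (t3 - t2)         # time spent on the wire
    offset = ((t2 - t1) + (t3 - t4)) / 2  # server clock minus client clock
    return delay, offset

# Timestamps from Figure 5, in seconds since midnight:
T1 = 10 * 3600        # 10:00:00 am, message leaves Device A
T2 = 11 * 3600 + 1    # 11:00:01 am, message arrives at Device B
T3 = 11 * 3600 + 2    # 11:00:02 am, message leaves Device B
T4 = 10 * 3600 + 3    # 10:00:03 am, message arrives at Device A
```

Calling ntp_delay_offset(T1, T2, T3, T4) returns a delay of 2 seconds and an offset of 3600 seconds (1 hour), so Device A adjusts its clock forward by the offset.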

This is only a rough description of the work mechanism of NTP. For more information, see the related protocols and standards.

NTP architecture

NTP uses stratums 1 to 16 to define clock accuracy, as shown in Figure 6. A lower stratum value represents higher accuracy. Clocks at stratums 1 through 15 are in synchronized state, and clocks at stratum 16 are not synchronized.

Figure 6 NTP architecture

 

Typically, a stratum 1 NTP server gets its time from an authoritative time source, such as an atomic clock, and provides time for other devices as the primary NTP server. A server's stratum indicates its accuracy: the topmost level (primary servers) is stratum 1, and each level down the hierarchy is one greater than the level above it. NTP uses the stratum to describe how many NTP "hops" away a device is from the primary time server. A stratum 2 time server receives its time from a stratum 1 time server, and so on.

To ensure time accuracy and availability, you can specify multiple NTP servers for a device. The device selects an optimal NTP server as the clock source based on parameters such as stratum. The clock that the device selects is called the reference source. For more information about clock selection, see the related protocols and standards.

If the devices in a network cannot synchronize to an authoritative time source, you can select a device that has a relatively accurate clock from the network, and use the local clock of the device as the reference clock to synchronize other devices in the network.

Association modes

NTP supports the following association modes:

·          Client/server mode

·          Symmetric active/passive mode

·          Broadcast mode

·          Multicast mode

Table 1 NTP association modes

Mode

Working process

Principle

Application scenario

Client/server

On the client, specify the IP address of the NTP server.

A client sends a clock synchronization message to the NTP servers. Upon receiving the message, the servers automatically operate in server mode and send a reply.

If the client can be synchronized to multiple time servers, it selects an optimal clock and synchronizes its local clock to the optimal reference source after receiving the replies from the servers.

A client can be synchronized to a server, but a server cannot be synchronized to a client.

As Figure 6 shows, this mode is intended for configurations where devices of a higher stratum are synchronized to devices with a lower stratum.

Symmetric active/passive

On the symmetric active peer, specify the IP address of the symmetric passive peer.

A symmetric active peer periodically sends clock synchronization messages to a symmetric passive peer. The symmetric passive peer automatically operates in symmetric passive mode and sends a reply.

If the symmetric active peer can be synchronized to multiple time servers, it selects an optimal clock and synchronizes its local clock to the optimal reference source after receiving the replies from the servers.

A symmetric active peer and a symmetric passive peer can be synchronized to each other. If both of them are synchronized, the peer with a higher stratum is synchronized to the peer with a lower stratum.

As Figure 6 shows, this mode is most often used between two or more servers with the same stratum to operate as a backup for one another. If a server fails to communicate with all the servers of a higher stratum, the server can be synchronized to the servers of the same stratum.

Broadcast

A server periodically sends clock synchronization messages to the broadcast address 255.255.255.255. Clients listen for the broadcast messages from servers and synchronize to the server according to the received messages.

When a client receives the first broadcast message, the client and the server start to exchange messages to calculate the network delay between them. Then, only the broadcast server sends clock synchronization messages.

A broadcast client can be synchronized to a broadcast server, but a broadcast server cannot be synchronized to a broadcast client.

A broadcast server sends clock synchronization messages to synchronize clients in the same subnet. As Figure 6 shows, broadcast mode is intended for configurations involving one or a few servers and a potentially large client population.

The broadcast mode has a lower time accuracy than the client/server and symmetric active/passive modes because only the broadcast servers send clock synchronization messages.

Multicast

A multicast server periodically sends clock synchronization messages to the user-configured multicast address. Clients listen to the multicast messages from servers and synchronize to the server according to the received messages.

A multicast client can be synchronized to a multicast server, but a multicast server cannot be synchronized to a multicast client.

A multicast server can provide time synchronization for clients in the same subnet or in different subnets.

The multicast mode has a lower time accuracy than the client/server and symmetric active/passive modes.

 

In this document, an "NTP server" or a "server" refers to a device that operates as an NTP server in client/server mode. Time servers refer to all the devices that can provide time synchronization, including NTP servers, NTP symmetric peers, broadcast servers, and multicast servers.

NTP security

To improve time synchronization security, NTP provides the access control and authentication functions.

NTP access control

You can control NTP access by using an ACL. The access rights are in the following order, from least restrictive to most restrictive:

·          Peer—Allows time requests and NTP control queries (such as alarms, authentication status, and time server information) and allows the local device to synchronize itself to a peer device.

·          Server—Allows time requests and NTP control queries, but does not allow the local device to synchronize itself to a peer device.

·          Synchronization—Allows only time requests from a system whose address passes the access list criteria.

·          Query—Allows only NTP control queries from a peer device to the local device.

The device processes an NTP request, as follows:

·          If no NTP access control is configured, peer is granted to the local device and peer devices.

·          If the IP address of the peer device matches a permit statement in an ACL for more than one access right, the least restrictive access right is granted to the peer device. If a deny statement or no ACL is matched, no access right is granted.

·          If no ACL is created for a specific access right, the associated access right is not granted.

·          If no ACL is created for any access right, peer is granted.
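
The matching rules above can be illustrated with a small sketch. In this Python model (our own simplification, not the switch's implementation), each access right has an ACL verdict, and the least restrictive right whose ACL permits the peer wins:

```python
# Access rights ordered from least restrictive to most restrictive.
RIGHTS = ["peer", "server", "synchronization", "query"]

def ntp_access_right(acl_verdicts):
    """Decide the access right granted to a peer device.

    acl_verdicts maps each right to "permit", "deny", or None
    (None means no ACL is created for that right).
    """
    if all(v is None for v in acl_verdicts.values()):
        # No ACL is created for any access right: peer is granted.
        return "peer"
    for right in RIGHTS:
        # The least restrictive permitted right is granted.
        if acl_verdicts.get(right) == "permit":
            return right
    return None  # deny matched or no ACL matched: no access right
```

For example, a device that matches permit statements for both peer and query is granted peer, the less restrictive of the two.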

This feature provides minimal security for a system running NTP. A more secure method is NTP authentication.

NTP authentication

Use this feature to authenticate the NTP messages for security purposes. If an NTP message passes authentication, the device can receive it and get time synchronization information. If not, the device discards the message. This function makes sure the device does not synchronize to an unauthorized time server.

Figure 7 NTP authentication

 

As shown in Figure 7, NTP authentication works as follows:

1.        The sender uses the MD5 algorithm and the key identified by a key ID to calculate a digest for the NTP message, and sends the digest together with the NTP message and key ID to the receiver.

2.        Upon receiving the message, the receiver finds the key according to the key ID in the message, uses the MD5 algorithm and the key to calculate a digest, and compares it with the digest contained in the message. If they are the same, the receiver accepts the message. Otherwise, it discards the message.

NTP for MPLS L3VPNs

In an MPLS L3VPN network, the device supports multiple VPN instances when:

·          It functions as an NTP client to synchronize time with the NTP server.

·          It functions as a symmetric active peer to synchronize time with the symmetric passive peer.

The device functions only as a CE.

Only the client/server and symmetric active/passive modes support VPN instances.

As shown in Figure 8, users in VPN 1 and VPN 2 are connected to the MPLS backbone network through provider edge (PE) devices, and services of the two VPNs are isolated. Time synchronization between PEs and devices of the two VPNs can be realized if you perform the following tasks:

·          Configure the PEs to operate in NTP client or symmetric active mode.

·          Specify the VPN to which the NTP server or NTP symmetric passive peer belongs.
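The following commands are a minimal sketch of this configuration on a PE acting as an NTP client. The device name PE1, the VPN instance name vpn1, and the server address 10.1.1.1 are assumptions for illustration, not values from this guide:

```
<PE1> system-view
[PE1] ntp-service enable
# Synchronize to an NTP server that belongs to VPN instance vpn1.
[PE1] ntp-service unicast-server 10.1.1.1 vpn-instance vpn1
```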

Figure 8 Network diagram

 

Protocols and standards

·          RFC 1305, Network Time Protocol (Version 3) Specification, Implementation and Analysis

·          RFC 5905, Network Time Protocol Version 4: Protocol and Algorithms Specification

Configuration restrictions and guidelines

When you configure NTP, follow these guidelines:

·          The NTP service and SNTP service are mutually exclusive. You can enable only one of them at a time.

·          Do not configure NTP on an aggregate member port.

·          To ensure time synchronization accuracy, do not specify different reference sources for a symmetric active peer and a symmetric passive peer if they also operate as NTP clients in client/server, broadcast, or multicast mode.

·          You can use the clock protocol command to specify the system time source. For more information about the clock protocol command, see Fundamentals Command Reference.

·          The term "interface" in this chapter collectively refers to Layer 3 interfaces, including VLAN interfaces and Layer 3 Ethernet interfaces. You can set an Ethernet port as a Layer 3 interface by using the port link-mode route command (see Layer 2—LAN Switching Configuration Guide).

Configuration task list

Tasks at a glance

(Required.) Enabling the NTP service

(Required.) Perform at least one of the following tasks:

·         Configuring NTP association modes

·         Configuring the local clock as a reference source

(Optional.) Configuring access control rights

(Optional.) Configuring NTP authentication

(Optional.) Configuring NTP optional parameters

 

Enabling the NTP service

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the NTP service.

ntp-service enable

By default, the NTP service is not enabled.

 

Configuring NTP association modes

This section describes how to configure NTP association modes.

Configuring NTP in client/server mode

When the device operates in client/server mode, specify the IP address for the server on the client.

Follow these guidelines when you configure an NTP client:

·          A server must be synchronized by other devices or use its local clock as a reference source before synchronizing an NTP client. Otherwise, the client will not be synchronized to the NTP server.

·          If the stratum level of a server is greater than or equal to that of a client, the client will not synchronize to that server.

·          You can configure multiple servers by repeating the ntp-service unicast-server command.

To configure an NTP client:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Specify an NTP server for the device.

ntp-service unicast-server { server-name | ip-address } [ vpn-instance vpn-instance-name ] [ authentication-keyid keyid | priority | source interface-type interface-number | version number ] *

By default, no NTP server is specified for the device.
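As a sketch of how the optional keywords combine, the following commands specify an NTP server and mark it as preferred. The device name, the server address 1.0.1.11, and the version value are assumptions for illustration:

```
<Device> system-view
[Device] ntp-service enable
# Use NTP version 4 and prefer this server when multiple servers are reachable.
[Device] ntp-service unicast-server 1.0.1.11 version 4 priority
```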

 

Configuring NTP in symmetric active/passive mode

When the device operates in symmetric active/passive mode, specify the IP address of the symmetric-passive peer on the symmetric-active peer.

Follow these guidelines when you configure a symmetric-active peer:

·          Execute the ntp-service enable command on a symmetric passive peer to enable NTP. Otherwise, the symmetric-passive peer will not process NTP messages from a symmetric-active peer.

·          Either the symmetric-active peer or the symmetric-passive peer (or both) must be in a synchronized state. Otherwise, their time cannot be synchronized.

·          You can configure multiple symmetric-passive peers by repeating the ntp-service unicast-peer command.

To configure a symmetric-active peer:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Specify a symmetric-passive peer for the device.

ntp-service unicast-peer { peer-name | ip-address } [ vpn-instance vpn-instance-name ] [ authentication-keyid keyid | priority | source interface-type interface-number | version number ] *

By default, no symmetric-passive peer is specified.

 

Configuring NTP in broadcast mode

A broadcast server must be synchronized by other devices or use its local clock as a reference source before synchronizing a broadcast client. Otherwise, the broadcast client will not be synchronized to the broadcast server.

Configure NTP in broadcast mode on both broadcast server and client.

Configuring a broadcast client

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

Enter the interface for receiving NTP broadcast messages.

3.       Configure the device to operate in broadcast client mode.

ntp-service broadcast-client

By default, the device does not operate in broadcast client mode.

After you execute the command, the device receives NTP broadcast messages from the specified interface.

 

Configuring the broadcast server

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

Enter the interface for sending NTP broadcast messages.

3.       Configure the device to operate in NTP broadcast server mode.

ntp-service broadcast-server [ authentication-keyid keyid | version number ] *

By default, the device does not operate in broadcast server mode.

After you execute the command, the device sends NTP broadcast messages from the specified interface.

 

Configuring NTP in multicast mode

A multicast server must be synchronized by other devices or use its local clock as a reference source before synchronizing a multicast client. Otherwise, the multicast client will not be synchronized to the multicast server.

Configure NTP in multicast mode on both a multicast server and client.

Configuring a multicast client

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

Enter the interface for receiving NTP multicast messages.

3.       Configure the device to operate in multicast client mode.

ntp-service multicast-client [ ip-address ]

By default, the device does not operate in multicast client mode.

After you execute the command, the device receives NTP multicast messages from the specified interface.

 

Configuring the multicast server

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

Enter the interface for sending NTP multicast messages.

3.       Configure the device to operate in multicast server mode.

ntp-service multicast-server [ ip-address ] [ authentication-keyid keyid | ttl ttl-number | version number ] *

By default, the device does not operate in multicast server mode.

After you execute the command, the device sends NTP multicast messages from the specified interface.
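The two procedures above can be sketched as follows. The device names, VLAN-interface 2, the stratum level, and the TTL value are assumptions for illustration, not values from this guide:

```
# On the multicast server, use the local clock as a reference source and
# send multicast messages out of VLAN-interface 2 with a TTL of 16.
<DeviceA> system-view
[DeviceA] ntp-service enable
[DeviceA] ntp-service refclock-master 2
[DeviceA] interface vlan-interface 2
[DeviceA-Vlan-interface2] ntp-service multicast-server ttl 16

# On each multicast client, listen for multicast messages on VLAN-interface 2.
<DeviceB> system-view
[DeviceB] ntp-service enable
[DeviceB] interface vlan-interface 2
[DeviceB-Vlan-interface2] ntp-service multicast-client
```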

 

Configuring access control rights

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the NTP service access control right for a peer device to access the local device.

ntp-service access { peer | query | server | synchronization } acl-number

By default, the NTP service access control right for a peer device to access the local device is peer.

 

Before you configure the NTP service access control right to the local device, create and configure an ACL associated with the access control right. For more information about ACL, see ACL and QoS Configuration Guide.
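For example, the following commands grant the server access right only to devices on one subnet. The ACL number 2001 and the subnet 10.1.1.0/24 are assumptions for illustration, not values from this guide:

```
<Device> system-view
# Create a basic ACL that permits the trusted subnet.
[Device] acl number 2001
[Device-acl-basic-2001] rule permit source 10.1.1.0 0.0.0.255
[Device-acl-basic-2001] quit
# Allow time requests and control queries only from addresses permitted by ACL 2001.
[Device] ntp-service access server 2001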

Configuring NTP authentication

This section provides instructions for configuring NTP authentication.

Configuring NTP authentication in client/server mode

When you configure NTP authentication in client/server mode, enable NTP authentication, configure an authentication key, set the key as a trusted key on both client and server, and associate the key with the NTP server on the client. The key IDs and key values configured on the server and client must be the same. Otherwise, NTP authentication fails.

To configure NTP authentication for a client:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

5.       Associate the specified key with an NTP server.

ntp-service unicast-server { server-name | ip-address } [ vpn-instance vpn-instance-name ] authentication-keyid keyid

N/A

 

To configure NTP authentication for a server:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.
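The two procedures above can be sketched as follows. The device names, the server address 1.0.1.11, the key ID 42, and the key value are assumptions for illustration; the key ID and key value must match on both sides:

```
# On the client: enable authentication, configure key 42 as a trusted key,
# and associate it with the NTP server.
<DeviceB> system-view
[DeviceB] ntp-service enable
[DeviceB] ntp-service authentication enable
[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey
[DeviceB] ntp-service reliable authentication-keyid 42
[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42

# On the server: configure the same key ID and key value as a trusted key.
<DeviceA> system-view
[DeviceA] ntp-service enable
[DeviceA] ntp-service authentication enable
[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey
[DeviceA] ntp-service reliable authentication-keyid 42
```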

 

NTP authentication results differ when different configurations are performed on client and server. For more information, see Table 2. (N/A in the table means that whether the configuration is performed does not make any difference.)

Table 2 NTP authentication results

Client: enable NTP authentication | Client: configure a key and set it as trusted | Client: associate the key with an NTP server | Server: enable NTP authentication | Server: configure a key and set it as trusted | Authentication result

Yes | Yes | Yes | Yes | Yes | Succeeded. NTP messages can be sent and received correctly.
Yes | Yes | Yes | Yes | No | Failed. NTP messages cannot be sent and received correctly.
Yes | Yes | Yes | No | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | No | Yes | N/A | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | N/A | No | N/A | N/A | No authentication. NTP messages can be sent and received correctly.
No | N/A | N/A | N/A | N/A | No authentication. NTP messages can be sent and received correctly.

 

Configuring NTP authentication in symmetric active/passive mode

When you configure NTP authentication in symmetric peers mode, enable NTP authentication, configure an authentication key, set the key as a trusted key on both active peer and passive peer, and associate the key with the passive peer on the active peer. The key IDs and key values configured on the active peer and passive peer must be the same. Otherwise, NTP authentication fails.

To configure NTP authentication for an active peer:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

5.       Associate the specified key with a passive peer.

ntp-service unicast-peer { ip-address | peer-name } [ vpn-instance vpn-instance-name ] authentication-keyid keyid

N/A

 

To configure NTP authentication for a passive peer:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

 

NTP authentication results differ when different configurations are performed on active peer and passive peer. For more information, see Table 3. (N/A in the table means that whether the configuration is performed does not make any difference.)

Table 3 NTP authentication results

Active peer: enable NTP authentication | Active peer: configure a key and set it as trusted | Active peer: associate the key with a passive peer | Passive peer: enable NTP authentication | Passive peer: configure a key and set it as trusted | Authentication result

Stratum level of the active and passive peers is not considered:

Yes | Yes | Yes | Yes | Yes | Succeeded. NTP messages can be sent and received correctly.
Yes | Yes | Yes | Yes | No | Failed. NTP messages cannot be sent and received correctly.
Yes | Yes | Yes | No | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | N/A | No | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | N/A | No | No | N/A | No authentication. NTP messages can be sent and received correctly.
No | N/A | N/A | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
No | N/A | N/A | No | N/A | No authentication. NTP messages can be sent and received correctly.

The active peer has a higher stratum than the passive peer:

Yes | No | Yes | N/A | N/A | Failed. NTP messages cannot be sent and received correctly.

The passive peer has a higher stratum than the active peer:

Yes | No | Yes | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | No | Yes | No | N/A | No authentication. NTP messages can be sent and received correctly.

 

Configuring NTP authentication in broadcast mode

When you configure NTP authentication in broadcast mode, enable NTP authentication, configure an authentication key, set the key as a trusted key on both the broadcast client and server, and configure an NTP authentication key on the broadcast server. The key IDs and key values configured on the broadcast server and client must be the same. Otherwise, NTP authentication fails.

To configure NTP authentication for a broadcast client:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

 

To configure NTP authentication for a broadcast server:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

5.       Enter interface view.

interface interface-type interface-number

N/A

6.       Associate the specified key with the broadcast server.

ntp-service broadcast-server authentication-keyid keyid

By default, the broadcast server is not associated with any key.
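On the broadcast server, the procedure above can be sketched as follows. The device name, VLAN-interface 2, the key ID 88, and the key value are assumptions for illustration; the clients must be configured with the same key ID and key value:

```
<DeviceC> system-view
[DeviceC] ntp-service authentication enable
[DeviceC] ntp-service authentication-keyid 88 authentication-mode md5 simple BcastKey
[DeviceC] ntp-service reliable authentication-keyid 88
# Associate key 88 with the broadcast server on the sending interface.
[DeviceC] interface vlan-interface 2
[DeviceC-Vlan-interface2] ntp-service broadcast-server authentication-keyid 88
```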

 

NTP authentication results differ when different configurations are performed on broadcast client and server. For more information, see Table 4. (N/A in the table means that whether the configuration is performed does not make any difference.)

Table 4 NTP authentication results

Broadcast server: enable NTP authentication | Broadcast server: configure a key and set it as trusted | Broadcast server: associate the key with the broadcast server | Broadcast client: enable NTP authentication | Broadcast client: configure a key and set it as trusted | Authentication result

Yes | Yes | Yes | Yes | Yes | Succeeded. NTP messages can be sent and received correctly.
Yes | Yes | Yes | Yes | No | Failed. NTP messages cannot be sent and received correctly.
Yes | Yes | Yes | No | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | No | Yes | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | No | Yes | No | N/A | No authentication. NTP messages can be sent and received correctly.
Yes | N/A | No | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | N/A | No | No | N/A | No authentication. NTP messages can be sent and received correctly.
No | N/A | N/A | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
No | N/A | N/A | No | N/A | No authentication. NTP messages can be sent and received correctly.

 

Configuring NTP authentication in multicast mode

When you configure NTP authentication in multicast mode, enable NTP authentication, configure an authentication key, set the key as a trusted key on both the multicast client and server, and configure an NTP authentication key on the multicast server. The key IDs and key values configured on the multicast server and client must be the same. Otherwise, NTP authentication fails.

To configure NTP authentication for a multicast client:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

 

To configure NTP authentication for a multicast server:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NTP authentication.

ntp-service authentication enable

By default, NTP authentication is disabled.

3.       Configure an NTP authentication key.

ntp-service authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no NTP authentication key is configured.

4.       Configure the key as a trusted key.

ntp-service reliable authentication-keyid keyid

By default, no authentication key is configured as a trusted key.

5.       Enter interface view.

interface interface-type interface-number

N/A

6.       Associate the specified key with the multicast server.

ntp-service multicast-server [ ip-address ] authentication-keyid keyid

By default, the multicast server is not associated with any key.

 

NTP authentication results differ when different configurations are performed on multicast client and server. For more information, see Table 5. (N/A in the table means that whether the configuration is performed does not make any difference.)

Table 5 NTP authentication results

Multicast server: enable NTP authentication | Multicast server: configure a key and set it as trusted | Multicast server: associate the key with the multicast server | Multicast client: enable NTP authentication | Multicast client: configure a key and set it as trusted | Authentication result

Yes | Yes | Yes | Yes | Yes | Succeeded. NTP messages can be sent and received correctly.
Yes | Yes | Yes | Yes | No | Failed. NTP messages cannot be sent and received correctly.
Yes | Yes | Yes | No | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | No | Yes | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | No | Yes | No | N/A | No authentication. NTP messages can be sent and received correctly.
Yes | N/A | No | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
Yes | N/A | No | No | N/A | No authentication. NTP messages can be sent and received correctly.
No | N/A | N/A | Yes | N/A | Failed. NTP messages cannot be sent and received correctly.
No | N/A | N/A | No | N/A | No authentication. NTP messages can be sent and received correctly.

 

Configuring NTP optional parameters

The configuration tasks in this section are optional tasks. Configure them to improve NTP security, performance, or reliability.

Specifying the source interface for NTP messages

To prevent interface status changes from causing NTP communication failures, configure the device to use the IP address of an interface that is always up, such as a loopback interface, as the source IP address for the NTP messages it sends.

When the device responds to an NTP request, the source IP address of the NTP response is always the IP address of the interface that has received the NTP request.

Follow these guidelines when you specify the source interface for NTP messages:

·          If you specify the source interface for NTP messages in the ntp-service unicast-server or ntp-service unicast-peer command, the interface specified in that command serves as the source interface for NTP messages.

·          If you have configured the ntp-service broadcast-server or ntp-service multicast-server command, the source interface for the broadcast or multicast NTP messages is the interface configured with the respective command.

To specify the source interface for NTP messages:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Specify the source interface for NTP messages.

ntp-service source interface-type interface-number

By default, no source interface is specified for NTP messages.
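For example, the following commands use a loopback interface as the source for NTP messages. The interface number and the IP address 4.4.4.4/32 are assumptions for illustration:

```
<Device> system-view
# Create a loopback interface, which stays up regardless of physical link state.
[Device] interface loopback 0
[Device-LoopBack0] ip address 4.4.4.4 32
[Device-LoopBack0] quit
# Use LoopBack 0 as the source interface for NTP messages.
[Device] ntp-service source loopback 0
```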

 

Disabling an interface from processing NTP messages

When NTP is enabled, all interfaces by default can process NTP messages. For security purposes, you can disable some of the interfaces from processing NTP messages.

To disable an interface from processing NTP messages:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Disable the interface from processing NTP messages.

undo ntp-service inbound enable

By default, an interface processes NTP messages.
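For example, the following commands disable NTP message processing on one interface. VLAN-interface 3 is an assumption for illustration:

```
<Device> system-view
[Device] interface vlan-interface 3
# Stop this interface from processing incoming NTP messages.
[Device-Vlan-interface3] undo ntp-service inbound enable
```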

 

Configuring the maximum number of dynamic associations

NTP has the following types of associations:

·          Static association—A manually created association.

·          Dynamic association—Temporary association created by the system during NTP operation. A dynamic association is removed if no messages are exchanged over a specific period of time.

The following describes how an association is established in different association modes:

·          Client/server mode—After you specify an NTP server, the system creates a static association on the client. The server simply responds passively upon the receipt of a message, rather than creating an association (static or dynamic).

·          Symmetric active/passive mode—After you specify a symmetric-passive peer on a symmetric active peer, static associations are created on the symmetric-active peer, and dynamic associations are created on the symmetric-passive peer.

·          Broadcast or multicast mode—Static associations are created on the server, and dynamic associations are created on the client.

A single device can have a maximum of 128 concurrent associations, including static associations and dynamic associations.

Perform this task to restrict the number of dynamic associations to prevent dynamic associations from occupying too many system resources.

To configure the maximum number of dynamic associations:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the maximum number of dynamic sessions allowed to be established.

ntp-service max-dynamic-sessions number

By default, the device can establish a maximum of 100 dynamic sessions.
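For example, the following commands lower the limit on dynamic associations. The value 50 is an assumption for illustration:

```
<Device> system-view
# Allow at most 50 dynamic NTP associations.
[Device] ntp-service max-dynamic-sessions 50
```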

 

Setting a DSCP value for NTP packets

The DSCP value determines the sending precedence of a packet.

To configure a DSCP value for NTP packets:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Set a DSCP value for NTP packets.

ntp-service dscp dscp-value

The default DSCP value is 48 for NTP packets.
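For example, the following commands change the DSCP value carried in NTP packets. The value 46 (a common expedited-forwarding value) is an assumption for illustration:

```
<Device> system-view
# Mark outgoing NTP packets with DSCP 46.
[Device] ntp-service dscp 46
```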

 

Configuring the local clock as a reference source

Follow these guidelines when you configure the local clock as a reference source:

·          Make sure the local clock can provide the time accuracy required for the network. After you configure the local clock as a reference source, the local clock is synchronized, and can operate as a time server to synchronize other devices in the network. If the local clock is incorrect, timing errors occur.

·          Before you configure this feature, adjust the local system time to make sure it is accurate.

To configure the local clock as a reference source:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the local clock as a reference source.

ntp-service refclock-master [ ip-address ] [ stratum ]

By default, the device does not use the local clock as a reference source.

 

Displaying and maintaining NTP

Execute display commands in any view.

 

Task

Command

Display information about NTP service status.

display ntp-service status

Display information about IPv4 NTP associations.

display ntp-service sessions [ verbose ]

Display brief information about the NTP servers from the local device back to the primary reference source.

display ntp-service trace

 

NTP configuration examples

NTP client/server mode configuration example

Network requirements

As shown in Figure 9, perform the following tasks:

·          Configure the local clock of Device A as a reference source, with the stratum level 2.

·          Configure Device B to operate in client mode and Device A to be used as the NTP server for Device B.

Figure 9 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 9. (Details not shown.)

2.        Configure Device A:

# Enable the NTP service.

<DeviceA> system-view

[DeviceA] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[DeviceA] ntp-service refclock-master 2

3.        Configure Device B:

# Enable the NTP service.

<DeviceB> system-view

[DeviceB] ntp-service enable

# Specify Device A as the NTP server of Device B so that Device B is synchronized to Device A.

[DeviceB] ntp-service unicast-server 1.0.1.11

4.        Verify the configuration:

# Verify that Device B has synchronized to Device A, and the clock stratum level is 3 on Device B and 2 on Device A.

[DeviceB] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 1.0.1.11

 Local mode: client

 Reference clock ID: 1.0.1.11

 Leap indicator: 00

 Clock jitter: 0.000977 s

 Stability: 0.000 pps

 Clock precision: 2^-18

 Root delay: 0.00383 ms

 Root dispersion: 16.26572 ms

 Reference time: d0c6033f.b9923965  Wed, Dec 29 2010 18:58:07.724

# Verify that an IPv4 NTP association has been established between Device B and Device A.

[DeviceB] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

[12345]1.0.1.11        127.127.1.0        2     1   64   15   -4.0 0.0038 16.262

Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.

 Total sessions : 1

NTP symmetric active/passive mode configuration example

Network requirements

As shown in Figure 10, Device C has a clock more accurate than Device A.

·          Configure the local clock of Device A as a reference source, with the stratum level 3.

·          Configure the local clock of Device C as a reference source, with the stratum level 2.

·          Configure Device B to operate in client mode and specify Device A as the NTP server of Device B.

·          Configure Device C to operate in symmetric-active mode and specify Device B as the passive peer of Device C.

Figure 10 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 10. (Details not shown.)

2.        Configure Device A:

# Enable the NTP service.

<DeviceA> system-view

[DeviceA] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 3.

[DeviceA] ntp-service refclock-master 3

3.        Configure Device B:

# Enable the NTP service.

<DeviceB> system-view

[DeviceB] ntp-service enable

# Specify Device A as the NTP server of Device B.

[DeviceB] ntp-service unicast-server 3.0.1.31

4.        Configure Device C:

# Enable the NTP service.

<DeviceC> system-view

[DeviceC] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[DeviceC] ntp-service refclock-master 2

# Configure Device B as a symmetric passive peer.

[DeviceC] ntp-service unicast-peer 3.0.1.32

5.        Verify the configuration:

After the configuration, Device B has two time servers: Device A and Device C. Because Device C has a lower stratum level than Device A, Device B selects Device C as the reference clock and synchronizes to it.

# Verify that Device B has synchronized to Device C.

[DeviceB] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 3.0.1.33

 Local mode: sym_passive

 Reference clock ID: 3.0.1.33

 Leap indicator: 00

 Clock jitter: 0.000916 s

 Stability: 0.000 pps

 Clock precision: 2^-17

 Root delay: 0.00609 ms

 Root dispersion: 1.95859 ms

 Reference time: 83aec681.deb6d3e5  Sun, Jan  4 1970  5:56:17.869

# Verify that an IPv4 NTP association has been established between Device B and Device A, and Device B and Device C.

[DeviceB] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

   [25]3.0.1.31        127.127.1.0        3    28   64    - 0.0000 0.0000 4000.0

 [1234]3.0.1.33        127.127.1.0        2    62   64   34 0.4251 6.0882 1392.1

Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.

 Total sessions: 2

NTP broadcast mode configuration example

Network requirements

As shown in Figure 11, Switch C functions as the NTP server for multiple devices on a network segment and synchronizes the time among multiple devices.

·          Configure Switch C's local clock as a reference source, with the stratum level 2.

·          Configure Switch C to operate in broadcast server mode and send out broadcast messages from VLAN-interface 2.

·          Configure Switch A and Switch B to operate in broadcast client mode, and listen to broadcast messages through VLAN-interface 2.

Figure 11 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 11. (Details not shown.)

2.        Configure Switch C:

# Enable the NTP service.

<SwitchC> system-view

[SwitchC] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[SwitchC] ntp-service refclock-master 2

# Configure Switch C to operate in broadcast server mode and send broadcast messages through VLAN-interface 2.

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] ntp-service broadcast-server

3.        Configure Switch A:

# Enable the NTP service.

<SwitchA> system-view

[SwitchA] ntp-service enable

# Configure Switch A to operate in broadcast client mode and receive broadcast messages on VLAN-interface 2.

[SwitchA] interface vlan-interface 2

[SwitchA-Vlan-interface2] ntp-service broadcast-client

4.        Configure Switch B:

# Enable the NTP service.

<SwitchB> system-view

[SwitchB] ntp-service enable

# Configure Switch B to operate in broadcast client mode and receive broadcast messages on VLAN-interface 2.

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] ntp-service broadcast-client

5.        Verify the configuration:

# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and 2 on Switch C.

[SwitchA-Vlan-interface2] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 3.0.1.31

 Local mode: bclient

 Reference clock ID: 3.0.1.31

 Leap indicator: 00

 Clock jitter: 0.044281 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00229 ms

 Root dispersion: 4.12572 ms

 Reference time: d0d289fe.ec43c720  Sat, Jan  8 2011  7:00:14.922

# Verify that an IPv4 NTP association has been established between Switch A and Switch C.

[SwitchA-Vlan-interface2] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1245]3.0.1.31        127.127.1.0        2     1   64  519   -0.0 0.0022 4.1257

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1

NTP multicast mode configuration example

Network requirements

As shown in Figure 12, Switch C functions as the NTP server for multiple devices on different network segments and synchronizes the time among multiple devices.

·          Configure Switch C's local clock as a reference source, with the stratum level 2.

·          Configure Switch C to operate in multicast server mode and send out multicast messages from VLAN-interface 2.

·          Configure Switch A and Switch D to operate in multicast client mode and receive multicast messages through VLAN-interface 3 and VLAN-interface 2, respectively.

Figure 12 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface as shown in Figure 12. (Details not shown.)

2.        Configure Switch C:

# Enable the NTP service.

<SwitchC> system-view

[SwitchC] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[SwitchC] ntp-service refclock-master 2

# Configure Switch C to operate in multicast server mode and send multicast messages through VLAN-interface 2.

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] ntp-service multicast-server

3.        Configure Switch D:

# Enable the NTP service.

<SwitchD> system-view

[SwitchD] ntp-service enable

# Configure Switch D to operate in multicast client mode and receive multicast messages on VLAN-interface 2.

[SwitchD] interface vlan-interface 2

[SwitchD-Vlan-interface2] ntp-service multicast-client

4.        Verify the configuration:

Switch D and Switch C are on the same subnet, so Switch D can do the following:

·          Receive the multicast messages from Switch C without having the multicast functions enabled.

·          Synchronize to Switch C.

# Verify that Switch D has synchronized to Switch C, and the clock stratum level is 3 on Switch D and 2 on Switch C.

[SwitchD-Vlan-interface2] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 3.0.1.31

 Local mode: bclient

 Reference clock ID: 3.0.1.31

 Leap indicator: 00

 Clock jitter: 0.044281 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00229 ms

 Root dispersion: 4.12572 ms

 Reference time: d0d289fe.ec43c720  Sat, Jan  8 2011  7:00:14.922

# Verify that an IPv4 NTP association has been established between Switch D and Switch C.

[SwitchD-Vlan-interface2] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1245]3.0.1.31        127.127.1.0        2     1   64  519   -0.0 0.0022 4.1257

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1

5.        Configure Switch B:

Because Switch A and Switch C are on different subnets, you must enable the multicast functions on Switch B before Switch A can receive multicast messages from Switch C.

# Enable IP multicast routing and IGMP.

<SwitchB> system-view

[SwitchB] multicast routing

[SwitchB-mrib] quit

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] pim dm

[SwitchB-Vlan-interface2] quit

[SwitchB] vlan 3

[SwitchB-vlan3] port ten-gigabitethernet 1/0/1

[SwitchB-vlan3] quit

[SwitchB] interface vlan-interface 3

[SwitchB-Vlan-interface3] igmp enable

[SwitchB-Vlan-interface3] igmp static-group 224.0.1.1

[SwitchB-Vlan-interface3] quit

[SwitchB] igmp-snooping

[SwitchB-igmp-snooping] quit

[SwitchB] interface ten-gigabitethernet 1/0/1

[SwitchB-Ten-GigabitEthernet1/0/1] igmp-snooping static-group 224.0.1.1 vlan 3

6.        Configure Switch A:

# Enable the NTP service.

<SwitchA> system-view

[SwitchA] ntp-service enable

# Configure Switch A to operate in multicast client mode and receive multicast messages on VLAN-interface 3.

[SwitchA] interface vlan-interface 3

[SwitchA-Vlan-interface3] ntp-service multicast-client

7.        Verify the configuration:

# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and 2 on Switch C.

[SwitchA-Vlan-interface3] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 3.0.1.31

 Local mode: bclient

 Reference clock ID: 3.0.1.31

 Leap indicator: 00

 Clock jitter: 0.165741 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00534 ms

 Root dispersion: 4.51282 ms

 Reference time: d0c61289.10b1193f  Wed, Dec 29 2010 20:03:21.065

# Verify that an IPv4 NTP association has been established between Switch A and Switch C.

[SwitchA-Vlan-interface3] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1234]3.0.1.31        127.127.1.0        2   247   64  381   -0.0 0.0053 4.5128

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1

Configuration example for NTP client/server mode with authentication

Network requirements

As shown in Figure 13, perform the following tasks:

·          Configure the local clock of Device A as a reference source, with the stratum level 2.

·          Configure Device B to operate in client mode and specify Device A as its NTP server.

·          Configure NTP authentication on both Device A and Device B.

Figure 13 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 13. (Details not shown.)

2.        Configure Device A:

# Enable the NTP service.

<DeviceA> system-view

[DeviceA] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[DeviceA] ntp-service refclock-master 2

3.        Configure Device B:

# Enable the NTP service.

<DeviceB> system-view

[DeviceB] ntp-service enable

# Enable NTP authentication on Device B.

[DeviceB] ntp-service authentication enable

# Set an authentication key, and input the key in plain text.

[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey

# Specify the key as a trusted key.

[DeviceB] ntp-service reliable authentication-keyid 42

# Specify Device A as the NTP server of Device B, and associate the server with key 42.

[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42

Before Device B can synchronize its clock to that of Device A, enable NTP authentication for Device A.

4.        Configure NTP authentication on Device A:

# Enable NTP authentication.

[DeviceA] ntp-service authentication enable

# Set an authentication key, and input the key in plain text.

[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey

# Specify the key as a trusted key.

[DeviceA] ntp-service reliable authentication-keyid 42

5.        Verify the configuration:

# Verify that Device B has synchronized to Device A, and the clock stratum level is 3 on Device B and 2 on Device A.

[DeviceB] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 1.0.1.11

 Local mode: client

 Reference clock ID: 1.0.1.11

 Leap indicator: 00

 Clock jitter: 0.005096 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00655 ms

 Root dispersion: 1.15869 ms

 Reference time: d0c62687.ab1bba7d  Wed, Dec 29 2010 21:28:39.668

# Verify that an IPv4 NTP association has been established between Device B and Device A.

[DeviceB] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1245]1.0.1.11        127.127.1.0        2     1   64  519   -0.0 0.0065    0.0

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1
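
The shared-key check performed in this example can be sketched as follows. Per RFC 5905, the NTP MD5 digest is computed over the shared key followed by the packet. This is an illustrative Python model, not device code; the packet bytes are placeholders.

```python
import hashlib

# Model of NTP symmetric-key authentication: sender and receiver both hold
# trusted key 42 ("aNiceKey") and compute MD5 over key + packet. The packet
# contents here are dummy bytes for illustration.

def ntp_md5_digest(key: bytes, packet: bytes) -> bytes:
    return hashlib.md5(key + packet).digest()

shared_key = b"aNiceKey"      # key 42, configured on both Device A and Device B
packet = b"\x00" * 48         # placeholder 48-byte NTP header

sent = ntp_md5_digest(shared_key, packet)
# The receiver recomputes the digest with its local copy of the trusted key
# and accepts the packet only if the digests match.
assert sent == ntp_md5_digest(shared_key, packet)
print(len(sent))  # -> 16 (MD5 digest length in bytes)
```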

Configuration example for NTP broadcast mode with authentication

Network requirements

As shown in Figure 14, Switch C functions as the NTP server for multiple devices on different network segments and synchronizes the time among multiple devices. Switch A and Switch B authenticate the reference source.

·          Configure Switch C's local clock as a reference source, with the stratum level 3.

·          Configure Switch C to operate in broadcast server mode and send out broadcast messages from VLAN-interface 2.

·          Configure Switch A and Switch B to operate in broadcast client mode and receive broadcast messages through VLAN-interface 2.

·          Enable NTP authentication on Switch A, Switch B, and Switch C.

Figure 14 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 14. (Details not shown.)

2.        Configure Switch A:

# Enable the NTP service.

<SwitchA> system-view

[SwitchA] ntp-service enable

# Enable NTP authentication on Switch A. Configure an NTP authentication key, with the key ID of 88 and key value of 123456. Input the key in plain text, and specify it as a trusted key.

[SwitchA] ntp-service authentication enable

[SwitchA] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456

[SwitchA] ntp-service reliable authentication-keyid 88

# Configure Switch A to operate in NTP broadcast client mode and receive NTP broadcast messages on VLAN-interface 2.

[SwitchA] interface vlan-interface 2

[SwitchA-Vlan-interface2] ntp-service broadcast-client

3.        Configure Switch B:

# Enable the NTP service.

<SwitchB> system-view

[SwitchB] ntp-service enable

# Enable NTP authentication on Switch B. Configure an NTP authentication key, with the key ID of 88 and key value of 123456. Input the key in plain text and specify it as a trusted key.

[SwitchB] ntp-service authentication enable

[SwitchB] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456

[SwitchB] ntp-service reliable authentication-keyid 88

# Configure Switch B to operate in broadcast client mode and receive NTP broadcast messages on VLAN-interface 2.

[SwitchB] interface vlan-interface 2

[SwitchB-Vlan-interface2] ntp-service broadcast-client

4.        Configure Switch C:

# Enable the NTP service.

<SwitchC> system-view

[SwitchC] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 3.

[SwitchC] ntp-service refclock-master 3

# Configure Switch C to operate in NTP broadcast server mode and use VLAN-interface 2 to send NTP broadcast packets.

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] ntp-service broadcast-server

[SwitchC-Vlan-interface2] quit

5.        Verify the configuration:

NTP authentication is enabled on Switch A and Switch B, but not on Switch C, so Switch A and Switch B cannot synchronize their local clocks to Switch C.

# Verify that Switch B has not synchronized to Switch C.

[SwitchB-Vlan-interface2] display ntp-service status

 Clock status: unsynchronized

 Clock stratum: 16

 Reference clock ID: none

6.        Enable NTP authentication on Switch C:

# Enable NTP authentication on Switch C. Configure an NTP authentication key, with the key ID of 88 and key value of 123456. Input the key in plain text, and specify it as a trusted key.

[SwitchC] ntp-service authentication enable

[SwitchC] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456

[SwitchC] ntp-service reliable authentication-keyid 88

# Specify Switch C as an NTP broadcast server, and associate the key 88 with Switch C.

[SwitchC] interface vlan-interface 2

[SwitchC-Vlan-interface2] ntp-service broadcast-server authentication-keyid 88

7.        Verify the configuration:

# Verify that Switch B has synchronized to Switch C, and the clock stratum level is 4 on Switch B and 3 on Switch C.

[SwitchB-Vlan-interface2] display ntp-service status

 Clock status: synchronized

 Clock stratum: 4

 System peer: 3.0.1.31

 Local mode: bclient

 Reference clock ID: 3.0.1.31

 Leap indicator: 00

 Clock jitter: 0.006683 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00127 ms

 Root dispersion: 2.89877 ms

 Reference time: d0d287a7.3119666f  Sat, Jan  8 2011  6:50:15.191

# Verify that an IPv4 NTP association has been established between Switch B and Switch C.

[SwitchB-Vlan-interface2] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1245]3.0.1.31        127.127.1.0        3     3   64   68   -0.0 0.0000    0.0

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1

Configuration example for MPLS VPN time synchronization in client/server mode

Network requirements

As shown in Figure 15, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3 are devices in VPN 1.

To synchronize time between PE 2 and CE 1 in VPN 1, perform the following tasks:

·          Configure CE 1's local clock as a reference source, with the stratum level 2.

·          Configure PE 1 to operate in client/server mode.

·          Specify VPN 1 as the target VPN.

Figure 15 Network diagram

 

Configuration procedure

Before you perform the following configuration, make sure you have completed the MPLS VPN-related configurations, and that CE 1 and PE 1, PE 1 and PE 2, and PE 2 and CE 3 can reach each other.

1.        Set the IP address for each interface, as shown in Figure 15. (Details not shown.)

2.        Configure CE 1:

# Enable the NTP service.

<CE1> system-view

[CE1] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[CE1] ntp-service refclock-master 2

3.        Configure PE 2:

# Enable the NTP service.

<PE2> system-view

[PE2] ntp-service enable

# Specify CE 1 in VPN 1 as the NTP server of PE 2.

[PE2] ntp-service unicast-server 10.1.1.1 vpn-instance vpn1

4.        Verify the configuration:

# Verify that PE 2 has synchronized to CE 1, with the stratum level 3.

[PE2] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 10.1.1.1

 Local mode: client

 Reference clock ID: 10.1.1.1

 Leap indicator: 00

 Clock jitter: 0.005096 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00655 ms

 Root dispersion: 1.15869 ms

 Reference time: d0c62687.ab1bba7d  Wed, Dec 29 2010 21:28:39.668

# Verify that an IPv4 NTP association has been established between PE 2 and CE 1.

[PE2] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1245]10.1.1.1        127.127.1.0        2     1   64  519   -0.0 0.0065    0.0

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1   

# Verify that server 127.0.0.1 has synchronized to server 10.1.1.1, and server 10.1.1.1 has synchronized to the local clock.

[PE2] display ntp-service trace

Server     127.0.0.1

Stratum    3 , jitter  0.000, synch distance 796.50.

Server     10.1.1.1

Stratum    2 , jitter 939.00, synch distance 0.0000.

RefID      127.127.1.0

Configuration example for MPLS VPN time synchronization in symmetric active/passive mode

Network requirements

As shown in Figure 16, two VPNs are present on PE 1 and PE 2: VPN 1 and VPN 2. CE 1 and CE 3 belong to VPN 1.

To synchronize the time between PE 1 and CE 1 in VPN 1, perform the following tasks:

·          Configure CE 1's local clock as a reference source, with the stratum level 2.

·          Configure CE 1 to operate in symmetric active mode.

·          Specify VPN 1 as the target VPN.

Figure 16 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 16. (Details not shown.)

2.        Configure CE 1:

# Enable the NTP service.

<CE1> system-view

[CE1] ntp-service enable

# Specify the local clock as the reference source, with the stratum level 2.

[CE1] ntp-service refclock-master 2

3.        Configure PE 1:

# Enable the NTP service.

<PE1> system-view

[PE1] ntp-service enable

# Specify CE 1 in VPN 1 as the symmetric-passive peer of PE 1.

[PE1] ntp-service unicast-peer 10.1.1.1 vpn-instance vpn1

4.        Verify the configuration:

# Verify that PE 1 has synchronized to CE 1, with the stratum level 3.

[PE1] display ntp-service status

 Clock status: synchronized

 Clock stratum: 3

 System peer: 10.1.1.1

 Local mode: sym_active

 Reference clock ID: 10.1.1.1

 Leap indicator: 00

 Clock jitter: 0.005096 s

 Stability: 0.000 pps

 Clock precision: 2^-10

 Root delay: 0.00655 ms

 Root dispersion: 1.15869 ms

 Reference time: d0c62687.ab1bba7d  Wed, Dec 29 2010 21:28:39.668

# Verify that an IPv4 NTP association has been established between PE 1 and CE 1.

[PE1] display ntp-service sessions

       source          reference       stra reach poll  now offset  delay disper

********************************************************************************

 [1245]10.1.1.1        127.127.1.0        2     1   64  519   -0.0 0.0000    0.0

Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.

 Total sessions : 1  

# Verify that server 127.0.0.1 has synchronized to server 10.1.1.1, and server 10.1.1.1 has synchronized to the local clock.

[PE1] display ntp-service trace

Server     127.0.0.1

Stratum    3 , jitter  0.000, synch distance 796.50.

Server     10.1.1.1

Stratum    2 , jitter 939.00, synch distance 0.0000.

RefID      127.127.1.0

 


Configuring SNTP

SNTP is a simplified, client-only version of NTP specified in RFC 4330. SNTP supports only the client/server mode. An SNTP-enabled device can receive time from NTP servers, but cannot provide time services to other devices.

SNTP uses the same packet format and packet exchange procedure as NTP, but provides faster synchronization at the price of time accuracy.

If you specify multiple NTP servers for an SNTP client, the client selects the server at the lowest (best) stratum. If multiple servers are at the same stratum, the client selects the server whose time packet is received first.
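
The selection rule above, together with the requirement that a usable server be at a lower stratum than the client, can be modeled as follows. This is a hypothetical Python sketch; the server names and arrival order are illustrative.

```python
# Hypothetical model of SNTP server selection: lowest stratum wins; among
# servers at the same stratum, the one whose time packet arrived first wins.

def pick_server(candidates):
    """candidates: list of (name, stratum, arrival_order) tuples."""
    return min(candidates, key=lambda c: (c[1], c[2]))

def will_sync(server_stratum, client_stratum):
    """The client synchronizes only to a server at a lower stratum."""
    return server_stratum < client_stratum

servers = [("ntp1", 2, 1), ("ntp2", 1, 2), ("ntp3", 1, 0)]
print(pick_server(servers)[0])  # -> ntp3 (stratum 1, heard from first)
print(will_sync(2, 3))          # -> True
print(will_sync(3, 3))          # -> False
```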

Configuration restrictions and guidelines

You cannot configure both NTP and SNTP on the same device.

Make sure you use the clock protocol command to specify the NTP time source.

Configuration task list

Tasks at a glance

(Required.) Enabling the SNTP service

(Required.) Specifying an NTP server for the device

(Optional.) Configuring SNTP authentication

 

Enabling the SNTP service

The NTP service and SNTP service are mutually exclusive. You can enable either the NTP service or the SNTP service, but not both at the same time.

To enable the SNTP service:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the SNTP service.

sntp enable

By default, the SNTP service is not enabled.

 

Specifying an NTP server for the device

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Specify an NTP server for the device.

sntp unicast-server { server-name | ip-address } [ vpn-instance vpn-instance-name ] [ authentication-keyid keyid | source interface-type interface-number | version number ] *

By default, no NTP server is specified for the device.

Repeat this step to specify multiple NTP servers.

To use authentication, you must specify the authentication-keyid keyid option.

 

To use an NTP server as the time source, make sure its clock has been synchronized. If the stratum level of the NTP server is greater than or equal to that of the client, the client does not synchronize with the NTP server.

Configuring SNTP authentication

SNTP authentication makes sure an SNTP client is synchronized only to an authenticated trustworthy NTP server.

Follow these guidelines when you configure SNTP authentication:

·          Enable authentication on both the NTP server and the SNTP client.

·          Configure the SNTP client with the same authentication key ID and key value as the NTP server, and specify the key as a trusted key on both the NTP server and the SNTP client. For information about configuring NTP authentication on an NTP server, see "Configuring NTP."

·          Associate the specified key with the specific NTP server on the SNTP client.

With authentication disabled, the SNTP client can synchronize with the NTP server regardless of whether the NTP server is enabled with authentication.

To configure SNTP authentication on the SNTP client:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable SNTP authentication.

sntp authentication enable

By default, SNTP authentication is disabled.

3.       Configure an SNTP authentication key.

sntp authentication-keyid keyid authentication-mode md5 { cipher | simple } value

By default, no SNTP authentication key is configured.

4.       Specify the key as a trusted key.

sntp reliable authentication-keyid keyid

By default, no trusted key is specified.

5.       Associate the SNTP authentication key with the specific NTP server.

sntp unicast-server { ip-address | server-name } [ vpn-instance vpn-instance-name ] authentication-keyid keyid

By default, no NTP server is specified.

 

Displaying and maintaining SNTP

Execute display commands in any view.

 

Task

Command

Display information about all SNTP associations.

display sntp sessions

 

SNTP configuration example

Network requirements

As shown in Figure 17, perform the following tasks:

·          Configure the local clock of Device A as a reference source, with the stratum level 2.

·          Configure Device B to operate in SNTP client mode, and specify Device A as the NTP server.

·          Configure NTP authentication on Device A and SNTP authentication on Device B.

Figure 17 Network diagram

 

Configuration procedure

1.        Set the IP address for each interface, as shown in Figure 17. (Details not shown.)

2.        Configure Device A:

# Enable the NTP service.

<DeviceA> system-view

[DeviceA] ntp-service enable

# Configure the local clock of Device A as a reference source, with the stratum level 2.

[DeviceA] ntp-service refclock-master 2

# Enable NTP authentication on Device A.

[DeviceA] ntp-service authentication enable

# Configure an NTP authentication key, with the key ID of 10 and key value of aNiceKey. Input the key in plain text.

[DeviceA] ntp-service authentication-keyid 10 authentication-mode md5 simple aNiceKey

# Specify the key as a trusted key.

[DeviceA] ntp-service reliable authentication-keyid 10

3.        Configure Device B:

# Enable the SNTP service.

<DeviceB> system-view

[DeviceB] sntp enable

# Enable SNTP authentication on Device B.

[DeviceB] sntp authentication enable

# Configure an SNTP authentication key, with the key ID of 10 and key value of aNiceKey. Input the key in plain text.

[DeviceB] sntp authentication-keyid 10 authentication-mode md5 simple aNiceKey

# Specify the key as a trusted key.

[DeviceB] sntp reliable authentication-keyid 10

# Specify Device A as the NTP server of Device B, and associate the server with key 10.

[DeviceB] sntp unicast-server 1.0.1.11 authentication-keyid 10

4.        Verify the configuration:

# Verify that an SNTP association has been established between Device B and Device A, and Device B has synchronized to Device A.

[DeviceB] display sntp sessions

NTP server     Stratum   Version    Last receive time

1.0.1.11        2         4          Tue, May 17 2011  9:11:20.833 (Synced)

 


Configuring the information center

The information center on a device classifies and manages logs for all modules so that network administrators can monitor network performance and troubleshoot network problems.

Overview

The information center receives logs generated by source modules and outputs logs to different destinations according to user-defined output rules. You can classify, filter, and output logs based on source modules. To view the supported source modules, use info-center source ?.

Figure 18 Information center diagram

 

By default, the information center is enabled.

Log types

Logs are classified into the following types:

·          Common logs—Record common system information. Unless otherwise specified, the term "logs" in this document refers to common logs.

·          Diagnostic logs—Record debug messages.

·          Hidden logs—Record log information not displayed on the terminal, such as input commands.

·          Trace logs—Record system tracing and debug messages.

Log levels

Logs are classified into eight severity levels from 0 through 7. The smaller the value, the higher the severity. The device outputs logs with a severity level that is higher than or equal to the specified level. For example, if you configure an output rule with a severity level of 6 (informational), logs that have a severity level from 0 through 6 are output.
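
The filtering rule just described reduces to a simple comparison, sketched here in Python for illustration (not device code): a log is output when its severity value is numerically less than or equal to the configured level.

```python
# Severity values run 0 (emergency, most severe) through 7 (debug).
# A log passes an output rule if its value does not exceed the configured
# level, i.e. its severity is equal to or higher than the threshold.

def should_output(log_severity: int, configured_level: int) -> bool:
    return log_severity <= configured_level

# With the rule set to 6 (informational), levels 0-6 pass and debug (7) is dropped.
print([lvl for lvl in range(8) if should_output(lvl, 6)])  # -> [0, 1, 2, 3, 4, 5, 6]
```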

Table 6 Log levels

Severity value

Level

Description

0

Emergency

The system is unusable. For example, the system authorization has expired.

1

Alert

Action must be taken immediately. For example, traffic on an interface exceeds the upper limit.

2

Critical

Critical condition. For example, the device temperature exceeds the upper limit, the power module fails, or the fan tray fails.

3

Error

Error condition. For example, the link state changes.

4

Warning

Warning condition. For example, an interface is disconnected, or the memory resources are used up.

5

Notification

Normal but significant condition. For example, a terminal logs in to the device, or the device reboots.

6

Informational

Informational message. For example, a command or a ping operation is executed.

7

Debug

Debug message.

 

Log destinations

The system outputs logs to the following destinations: console, monitor terminal, log buffer, log host, and log file. Log output destinations are independent and you can configure them after enabling the information center.

Default output rules for logs

A log output rule specifies the source modules and severity level of logs that can be output to a destination. Logs matching the output rule are output to the destination. Table 7 shows the default log output rules.

Table 7 Default output rules

Destination

Log source modules

Output switch

Severity

Console

All supported modules

Enabled

Debug

Monitor terminal

All supported modules

Disabled

Debug

Log host

All supported modules

Enabled

Informational

Log buffer

All supported modules

Enabled

Informational

Log file

All supported modules

Enabled

Informational

 

Default output rules for diagnostic logs

Diagnostic logs can only be output to the diagnostic log file, and cannot be filtered by source modules and severity levels. Table 8 shows the default output rule for diagnostic logs.

Table 8 Default output rule for diagnostic logs

Destination

Log source modules

Output switch

Severity

Diagnostic log file

All supported modules

Enabled

Debug

 

Default output rules for hidden logs

Hidden logs can be output to the log host, log buffer, and the log file, but they cannot be filtered by source modules and severity levels. Table 9 shows the default output rules for hidden logs.

Table 9 Default output rules for hidden logs

Destination

Log source modules

Output switch

Severity

Log host

All supported modules

Enabled

Informational

Log buffer

All supported modules

Enabled

Informational

Log file

All supported modules

Enabled

Informational

 

Default output rules for trace logs

Trace logs can only be output to the trace log file, and cannot be filtered by source modules and severity levels. Table 10 shows the default output rules for trace logs.

Table 10 Default output rules for trace logs

Destination

Log source modules

Output switch

Severity

Trace log file

All supported modules

Enabled

Debug

 

Log formats

The format of logs varies with output destinations. Table 11 shows the original format of log information, which might be different from what you see. The actual format depends on the log resolution tool used.

Table 11 Log formats

Output destination

Format

Example

Console, monitor terminal, log buffer, or log file

Prefix Timestamp Sysname Module/Level/Mnemonic: Content

%Nov 24 14:21:43:502 2010 H3C SYSLOG/6/SYSLOG_RESTART: System restarted -- H3C Comware Software.

Log host

·         H3C format:
<PRI>Timestamp Sysname %%vvModule/Level/Mnemonic: Source; Content

·         unicom format:
<PRI>Timestamp Hostip vvModule/Level/Serial_number: Content

·         cmcc format:
<PRI>Timestamp Sysname %vvModule/Level/Mnemonic: Source Content

·         H3C format:
<190>Nov 24 16:22:21 2010 H3C %%10SYSLOG/6/SYSLOG_RESTART: -DevIP=1.1.1.1; System restarted -- H3C Comware Software.

·         unicom format:
<189>Oct 13 16:48:08 2000 10.1.1.1 10IFNET/2/210231a64jx073000020: VTY logged in from 192.168.1.21

·         cmcc format:
<189>Oct 9 14:59:04 2009 Sysname %10SHELL/5/SHELL_LOGIN: VTY logged in from 192.168.1.21

 

Table 12 describes the fields in a log message.

Table 12 Log field description

Field

Description

Prefix (information type)

A log to a destination other than the log host has an identifier in front of the timestamp:

·         An identifier of percent sign (%) indicates a log with a level equal to or higher than informational.

·         An identifier of asterisk (*) indicates a debug log or a trace log.

·         An identifier of caret (^) indicates a diagnostic log.

PRI (priority)

A log destined to the log host has a priority identifier in front of the timestamp. The priority is calculated by using this formula: facility*8+level, where:

·         facility is the facility name. It can be configured with the info-center loghost command. It is used to identify log sources on the log host, and to query and filter the logs from specific log sources.

·         level ranges from 0 to 7. See Table 6 for more information about severity levels.

Timestamp

Records the time when the log was generated.

Logs sent to the log host and those sent to the other destinations have different timestamp precisions, and their timestamp formats are configured with different commands. For more information, see Table 13 and Table 14.

Hostip

Source IP address of the log. If info-center loghost source is configured, this field displays the IP address of the specified source interface. Otherwise, this field displays the sysname.

This field exists only in logs in unicom format that are sent to the log host.

Serial number

Serial number of the device that generated the log.

This field exists only in logs in unicom format that are sent to the log host.

Sysname (host name or host IP address)

The sysname is the host name or IP address of the device that generated the log. You can use the sysname command to modify the name of the device.

%% (vendor ID)

Indicates that the information was generated by an H3C device.

This field exists only in logs sent to the log host.

vv (version information)

Identifies the version of the log, and has a value of 10.

This field exists only in logs that are sent to the log host.

Module

Specifies the name of the module that generated the log. You can enter the info-center source ? command in system view to view the module list.

Level

Identifies the level of the log. See Table 6 for more information about severity levels.

Mnemonic

Describes the content of the log. It contains a string of up to 32 characters.

Source

Identifies the source of the log. It can take one of the following values:

·         Slot number of a card. (In standalone mode.)

·         IRF member ID and card slot number. (In IRF mode.)

·         IP address of the log sender.

Content

Provides the content of the log.
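To make the field layout concrete, the H3C log-host format can be parsed with a short Python sketch. The regular expression below is an assumption derived from the format string and examples in Table 11, not an official parser; it also shows how the facility and level are recovered from the PRI value using the facility*8+level formula:

```python
import re

# <PRI>Timestamp Sysname %%vvModule/Level/Mnemonic: Source; Content
H3C_LOG = re.compile(
    r"<(?P<pri>\d+)>"
    r"(?P<timestamp>\w{3} +\d+ \d{2}:\d{2}:\d{2} \d{4}) "
    r"(?P<sysname>\S+) "
    r"%%(?P<vv>\d{2})"
    r"(?P<module>\w+)/(?P<level>\d)/(?P<mnemonic>\w+): "
    r"(?:(?P<source>-[^;]+); )?"   # the Source field is optional
    r"(?P<content>.*)"
)

line = ("<190>Nov 24 16:22:21 2010 H3C %%10SYSLOG/6/SYSLOG_RESTART: "
        "-DevIP=1.1.1.1; System restarted -- H3C Comware Software.")
m = H3C_LOG.match(line)
assert m is not None
assert m.group("module") == "SYSLOG" and m.group("mnemonic") == "SYSLOG_RESTART"
assert m.group("source") == "-DevIP=1.1.1.1"

# PRI = facility * 8 + level, so both parts can be recovered:
pri = int(m.group("pri"))
facility, level = pri // 8, pri % 8
assert (facility, level) == (23, 6)   # facility 23 is local7, level 6 is informational
```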

 

Table 13 Timestamp precisions and configuration commands

Item

Destined to the log host

Destined to the console, monitor terminal, log buffer, and log file

Precision

Seconds

Milliseconds

Command used to set the timestamp format

info-center timestamp loghost

info-center timestamp

 

Table 14 Description of the timestamp parameters

Timestamp parameters

Description

Example

boot

Time that has elapsed since system startup, in the format of xxx.yyy. xxx represents the higher 32 bits, and yyy represents the lower 32 bits, of milliseconds elapsed.

Logs that are sent to all destinations other than a log host support this parameter.

%0.109391473 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.

0.109391473 is a timestamp in the boot format.

date

Current date and time, in the format of mmm dd hh:mm:ss yyyy for logs that are output to a log host, or MMM DD hh:mm:ss:xxx YYYY for logs that are output to other destinations.

All logs support this parameter.

%May 30 05:36:29:579 2003 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.

May 30 05:36:29:579 2003 is a timestamp in the date format.

iso

Timestamp format stipulated in ISO 8601.

Only logs that are sent to a log host support this parameter.

<189>2003-05-30T06:42:44 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp (192.168.1.23) has logged in successfully.

2003-05-30T06:42:44 is a timestamp in the iso format.

none

No timestamp is included.

All logs support this parameter.

% Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully.

No timestamp is included.

no-year-date

Current date and time without year information, in the format of MMM DD hh:mm:ss:xxx.

Only logs that are sent to a log host support this parameter.

<189>May 30 06:44:22 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp (192.168.1.23) has logged in successfully.

May 30 06:44:22 is a timestamp in the no-year-date format.
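The date and iso timestamp forms above can be reproduced with Python's strftime. This is an illustrative sketch only; %b assumes an English locale for month abbreviations:

```python
from datetime import datetime

def ts_date(dt: datetime) -> str:
    # "MMM DD hh:mm:ss:xxx YYYY" style used for non-log-host destinations
    ms = dt.microsecond // 1000
    return "%s:%03d %s" % (dt.strftime("%b %d %H:%M:%S"), ms, dt.strftime("%Y"))

def ts_iso(dt: datetime) -> str:
    # ISO 8601 style used with the iso parameter for log hosts
    return dt.strftime("%Y-%m-%dT%H:%M:%S")

dt = datetime(2003, 5, 30, 5, 36, 29, 579000)
assert ts_date(dt) == "May 30 05:36:29:579 2003"
assert ts_iso(dt) == "2003-05-30T05:36:29"
```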

 

FIPS compliance

The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more information about FIPS mode, see Security Configuration Guide.

Information center configuration task list

 

Tasks at a glance

Perform at least one of the following tasks:

·         Outputting logs to the console

·         Outputting logs to the monitor terminal

·         Outputting logs to a log host

·         Outputting logs to the log buffer

·         Saving logs to the log file

(Optional.) Saving diagnostic logs to the diagnostic log file

 

(Optional.) Configuring the maximum size of the trace log file

 

(Optional.) Enabling synchronous information output

 

(Optional.) Enabling duplicate log suppression

 

(Optional.) Disabling an interface from generating link up or link down logs

 

(Optional.) Setting the minimum storage period for logs

 

 

Outputting logs to the console

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the information center.

info-center enable

By default, the information center is enabled.

3.       Configure an output rule for the console.

info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity }

For information about default output rules, see "Default output rules for logs."

4.       (Optional.) Configure the timestamp format.

info-center timestamp { boot | date | none }

By default, the timestamp format is date.

5.       Return to user view.

quit

N/A

6.       Enable log output to the console.

terminal monitor

The default setting is enabled.

7.       Enable the display of debug information on the current terminal.

terminal debugging

By default, the display of debug information is disabled on the monitor terminal.

8.       (Optional.) Set the lowest severity level of logs that can be output to the console.

terminal logging level severity

The default setting is 6 (informational).

 

Outputting logs to the monitor terminal

Monitor terminals refer to terminals that log in to the device through the VTY user interface.

To output logs to the monitor terminal:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the information center.

info-center enable

By default, the information center is enabled.

3.       Configure an output rule for the monitor terminal.

info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity }

For information about default output rules, see "Default output rules for logs."

4.       (Optional.) Configure the timestamp format.

info-center timestamp { boot | date | none }

By default, the timestamp format is date.

5.       Return to user view.

quit

N/A

6.       Enable log output to the monitor terminal.

terminal monitor

The default setting is enabled.

7.       Enable the display of debug information on the current terminal.

terminal debugging

By default, the display of debug information is disabled on the monitor terminal.

8.       (Optional.) Set the lowest level of logs that can be output to the monitor terminal.

terminal logging level severity

The default setting is 6 (informational).

 

Outputting logs to a log host

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the information center.

info-center enable

By default, the information center is enabled.

3.       Configure an output rule for outputting logs to a log host.

info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity }

For information about default output rules, see "Default output rules for logs."

4.       (Optional.) Specify the source IP address for output logs.

info-center loghost source interface-type interface-number

By default, the source IP address of output log information is the primary IP address of the matching route's egress interface.

5.       (Optional.) Configure the timestamp format.

info-center timestamp loghost { date | iso | no-year-date | none }

By default, the timestamp format is date.

6.       Specify a log host and configure related parameters.

info-center loghost [ vpn-instance vpn-instance-name ] loghost [ port port-number ] [ facility local-number ]

By default, no log host or related parameters are specified.

The value of the port-number argument must be the same as the value configured on the log host. Otherwise, the log host cannot receive logs.
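The port-matching requirement in step 6 can be verified from the log host side with a small UDP listener. This is a hypothetical troubleshooting tool, not part of the device; 514 is the conventional syslog port, and the port number 15514 in the usage note is arbitrary:

```python
import socket

def receive_one_log(port: int = 514, timeout: float = 60.0) -> str:
    """Wait for one UDP syslog datagram on the given port.

    The port must match the port-number argument configured with the
    info-center loghost command. Otherwise the datagrams arrive on a
    different port and this listener times out.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("", port))
        data, addr = sock.recvfrom(4096)
        return "%s %s" % (addr[0], data.decode(errors="replace"))
```

For example, running `receive_one_log(port=15514)` while the device sends a log should print the sender address followed by the raw `<PRI>`-prefixed message.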

 

Outputting logs to the log buffer

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the information center.

info-center enable

By default, the information center is enabled.

3.       (Optional.) Set the maximum number of logs that can be stored in the log buffer.

info-center logbuffer size buffersize

By default, the log buffer can store 512 logs.

4.       Configure an output rule for the log buffer.

info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity }

For information about default output rules, see "Default output rules for logs."

5.       (Optional.) Configure the timestamp format.

info-center timestamp { boot | date | none }

By default, the timestamp format is date.

 

Saving logs to the log file

By default, the log file feature saves logs from the log file buffer to the log file every 24 hours. You can adjust the saving interval or manually save logs to the log file. After saving logs into the log file, the system clears the log file buffer.

The log file has a maximum capacity. When the capacity is reached, the system replaces the oldest logs with new logs.
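This circular overwrite behavior (the log buffer behaves the same way, with a default size of 512 logs) can be modeled with a fixed-length deque. This is a behavioral sketch, not the device's implementation:

```python
from collections import deque

class LogBuffer:
    """Fixed-capacity log storage that overwrites the oldest entries."""

    def __init__(self, capacity: int = 512):  # 512 matches the log buffer default
        self.entries = deque(maxlen=capacity)

    def add(self, log: str) -> None:
        self.entries.append(log)  # deque silently drops the oldest item when full

buf = LogBuffer(capacity=3)
for i in range(5):
    buf.add("log-%d" % i)
assert list(buf.entries) == ["log-2", "log-3", "log-4"]  # log-0 and log-1 were overwritten
```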

To save logs to the log file:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the information center.

info-center enable

By default, the information center is enabled.

3.       Enable the log file feature.

info-center logfile enable

By default, the log file feature is enabled.

4.       (Optional.) Enable log file overwrite-protection.

info-center logfile overwrite-protection [ all-port-powerdown ]

By default, log file overwrite-protection is disabled.

When it is enabled, the device stops saving new logs when the log file is full or the storage device runs out of space.

This command is supported only in FIPS mode.

5.       (Optional.) Configure the maximum size for the log file.

info-center logfile size-quota size

The default setting is 10 MB.

To ensure normal operation, set the size argument to a value between 1 MB and 10 MB.

6.       (Optional.) Specify the directory to save the log file.

info-center logfile directory dir-name

The default setting is flash:/logfile.

The configuration made by this command cannot survive a reboot or an active/standby switchover.

7.       Save the logs in the log file buffer to the log file.

·         Method 1: Configure the interval to perform the save operation:
info-center logfile frequency freq-sec

·         Method 2: Manually save the logs in the log file buffer to the log file:
logfile save

Use either method.

The default saving interval is 86400 seconds.

The logfile save command is available in any view.

 

Saving diagnostic logs to the diagnostic log file

By default, the system saves diagnostic logs from the diagnostic log buffer to the diagnostic log file every 24 hours. You can adjust the saving interval or manually save diagnostic logs to the diagnostic log file. After saving diagnostic logs into the diagnostic log file, the system clears the diagnostic log buffer.

The diagnostic log file has a maximum capacity. When the capacity is reached, the system replaces the oldest diagnostic logs with new logs.

To enable saving diagnostic logs to the diagnostic log file:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the information center.

info-center enable

By default, the information center is enabled.

3.       Enable saving diagnostic logs to the diagnostic log file.

info-center diagnostic-logfile enable

By default, saving diagnostic logs to the diagnostic log file is enabled.

4.       (Optional.) Configure the maximum size of the diagnostic log file.

info-center diagnostic-logfile quota size

The default setting is 10 MB.

To ensure normal operation, set the size argument to a value between 1 MB and 10 MB.

5.       (Optional.) Specify the directory to save the diagnostic log file.

info-center diagnostic-logfile directory dir-name

The default setting is flash:/diagfile.

The configuration made by this command cannot survive a reboot or an active/standby switchover.

6.       Save the diagnostic logs in the diagnostic log buffer to the diagnostic log file.

·         Method 1: Configure the interval to perform the saving operation:
info-center diagnostic-logfile frequency freq-sec

·         Method 2: Manually save the diagnostic logs in the buffer to a diagnostic log file:
diagnostic-logfile save

The default saving interval is 86400 seconds.

The diagnostic-logfile save command is available in any view.

 

Configuring the maximum size of the trace log file

The device has only one trace log file. When the trace log file is full, the device overwrites the oldest trace logs with new ones.

To set the maximum size of the trace log file:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Set the maximum size of the trace log file.

info-center trace-logfile quota size

By default, the maximum size of the trace log file is 1 MB.

 

Enabling synchronous information output

System log output interrupts ongoing configuration operations, obscuring previously entered commands. Synchronous information output shows the obscured commands. It also provides a command prompt in command editing mode, or a [Y/N] string in interaction mode so you can continue your operation from where you were stopped.

To enable synchronous information output:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable synchronous information output.

info-center synchronous

By default, synchronous information output is disabled.

 

Enabling duplicate log suppression

The output of consecutive duplicate logs at an interval of less than 30 seconds wastes system and network resources.

With this feature enabled, the system starts a suppression period upon outputting a log:

·          During the suppression period, the system does not output logs that have the same module name, level, mnemonic, location, and text as the previous log.

·          After the suppression period expires, if the same log continues to appear, the system outputs the suppressed logs and the log number and starts another suppression period. The suppression period is 30 seconds for the first time, 2 minutes for the second time, and 10 minutes for subsequent times.

·          If a different log is generated during the suppression period, the system aborts the current suppression period, outputs suppressed logs and the log number and then the different log, and starts another suppression period.
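The escalating suppression periods can be sketched as follows. This is a simplified model under stated assumptions: it tracks only the most recently output log and omits the step that reports the suppressed-log count when a period ends:

```python
import time

PERIODS = [30, 120, 600]  # seconds: first, second, and all later suppression periods

class DuplicateSuppressor:
    """Simplified model of the duplicate log suppression described above.

    A log's identity is its (module, level, mnemonic, location, text) tuple.
    """

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.last_key = None
        self.window_end = 0.0
        self.round = 0

    def submit(self, key):
        """Return True if the log should be output now, False if suppressed."""
        now = self.clock()
        if key == self.last_key and now < self.window_end:
            return False  # duplicate inside the suppression period: hold it
        if key == self.last_key:
            # Same log after the period expired: escalate 30 s -> 2 min -> 10 min.
            self.round = min(self.round + 1, len(PERIODS) - 1)
        else:
            # A different log aborts the current period and starts over.
            self.round = 0
        self.last_key = key
        self.window_end = now + PERIODS[self.round]
        return True
```

Injecting a fake clock makes the window arithmetic easy to test without waiting out the real periods.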

To enable duplicate log suppression:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable duplicate log suppression.

info-center logging suppress duplicates

By default, duplicate log suppression is disabled.

 

Disabling an interface from generating link up or link down logs

By default, all interfaces generate link up or link down log information when the interface state changes. In some cases, you might want to disable specific interfaces from generating this information. For example:

·          You are concerned only about the states of some interfaces. In this case, you can use this function to disable other interfaces from generating link up and link down log information.

·          An interface is unstable and continuously outputs log information. In this case, you can disable the interface from generating link up and link down log information.

To disable an interface from generating link up or link down logs:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Disable the interface from generating link up or link down logs.

undo enable log updown

By default, all interfaces generate link up and link down logs when the interface state changes.

 

Setting the minimum storage period for logs

Use this feature to set the minimum storage period for logs in the log buffer and log file. This feature ensures that logs will not be overwritten by new logs during a set period of time.

In the log buffer or log file, new logs will automatically overwrite the oldest logs when the following conditions are met:

·          The log buffer is full.

·          The log file is full and the log file overwrite-protection feature is disabled.

After the minimum storage period is set, the system uses a log's storage period to determine whether the log can be overwritten. A log's storage period is the current system time minus the log's generation time.

·          If the storage period of a log is shorter than or equal to the minimum storage period, the system does not delete the log. The new log will not be saved.

·          If the storage period of a log is longer than the minimum storage period, the system deletes the log to save the new log.
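The overwrite decision above reduces to a single comparison, sketched here in generic time units (the actual min-age granularity is whatever the info-center syslog min-age command accepts):

```python
def can_overwrite(log_age: float, min_age: float) -> bool:
    """Decide whether the oldest log may be overwritten by a new log.

    Overwrite is allowed only when the oldest log has been stored
    longer than the configured minimum storage period; otherwise the
    old log is kept and the new log is dropped.
    """
    return log_age > min_age

assert can_overwrite(7200, 3600) is True    # older than the minimum: reclaim it
assert can_overwrite(1800, 3600) is False   # still protected: the new log is not saved
assert can_overwrite(3600, 3600) is False   # equal to the minimum counts as protected
```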

To set the log minimum storage period:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Set the log minimum storage period.

info-center syslog min-age min-age

By default, the log minimum storage period is not set.

 

Displaying and maintaining information center

Execute display commands in any view and reset commands in user view.

 

Task

Command

Display the information of each output destination.

display info-center

Display the state and the log information of the log buffer (in standalone mode).

display logbuffer [ reverse ] [ level severity | size buffersize | slot slot-number ] *

Display the state and the log information of the log buffer (in IRF mode).

display logbuffer [ reverse ] [ level severity | size buffersize | chassis chassis-number slot slot-number ] *

Display a summary of the log buffer (in standalone mode).

display logbuffer summary [ level severity | slot slot-number ] *

Display a summary of the log buffer (in IRF mode).

display logbuffer summary [ level severity | chassis chassis-number slot slot-number ] *

Display the configuration of the log file.

display logfile summary

Clear the log buffer.

reset logbuffer

 

Information center configuration examples

Configuration example for outputting logs to the console

Network requirements

Configure the device to output to the console FTP logs that have a severity level of at least warning.

Figure 19 Network diagram

 

Configuration procedure

# Enable the information center.

<Sysname> system-view

[Sysname] info-center enable

# Disable log output to the console.

[Sysname] info-center source default console deny

To avoid output of unnecessary information, disable all modules from outputting log information to the specified destination (console in this example) before you configure the output rule.

# Configure an output rule to output to the console FTP logs that have a severity level of at least warning.

[Sysname] info-center source ftp console level warning

[Sysname] quit

# Enable the display of logs on the console. (This function is enabled by default.)

<Sysname> terminal logging level 6

<Sysname> terminal monitor

 Current terminal monitor is on.

Now, if the FTP module generates logs, the information center automatically sends the logs to the console, and the console displays the logs.

Configuration example for outputting logs to a UNIX log host

Network requirements

Configure the device to output to the UNIX log host FTP logs that have a severity level of at least informational.

Figure 20 Network diagram

 

Configuration procedure

Before the configuration, make sure the device and the log host can reach each other. (Details not shown.)

1.        Configure the device:

# Enable the information center.

<Device> system-view

[Device] info-center enable

# Specify the log host 1.2.0.1/16 and specify local4 as the logging facility.

[Device] info-center loghost 1.2.0.1 facility local4

# Disable log output to the log host.

[Device] info-center source default loghost deny

To avoid output of unnecessary information, disable all modules from outputting logs to the specified destination (loghost in this example) before you configure an output rule.

# Configure an output rule to output to the log host FTP logs that have a severity level of at least informational.

[Device] info-center source ftp loghost level informational

2.        Configure the log host:

The following configurations were performed on Solaris. Other UNIX operating systems have similar configurations.

a.    Log in to the log host as a root user.

b.    Create a subdirectory named Device in directory /var/log/, and then create file info.log in the Device directory to save logs from Device.

# mkdir /var/log/Device

# touch /var/log/Device/info.log

c.    Edit the file syslog.conf in directory /etc/ and add the following contents.

# Device configuration messages

local4.info /var/log/Device/info.log

In this configuration, local4 is the name of the logging facility that the log host uses to receive logs. info is the informational level. The UNIX system records the log information that has a severity level of at least informational to the file /var/log/Device/info.log.

 

 

NOTE:

Follow these guidelines while editing the file /etc/syslog.conf:

·      Comments must be on a separate line and must begin with a pound sign (#).

·      No redundant spaces are allowed after the file name.

·      The logging facility name and the severity level specified in the /etc/syslog.conf file must be identical to those configured on the device by using the info-center loghost and info-center source commands. Otherwise, the log information might not be output properly to the log host.

 

d.    Display the process ID of syslogd, kill the syslogd process, and then restart syslogd using the -r option to make the new configuration take effect.

# ps -ae | grep syslogd

147

# kill -HUP 147

# syslogd -r &

Now, the device can output FTP logs to the log host, which stores the logs to the specified file.

Configuration example for outputting logs to a Linux log host

Network requirements

Configure the device to output to the Linux log host 1.2.0.1/16 FTP logs that have a severity level of at least informational.

Figure 21 Network diagram

 

Configuration procedure

Before the configuration, make sure the device and the log host can reach each other. (Details not shown.)

1.        Configure the device:

# Enable the information center.

<Sysname> system-view

[Sysname] info-center enable

# Specify the log host 1.2.0.1/16, and specify local5 as the logging facility.

[Sysname] info-center loghost 1.2.0.1 facility local5

# Disable log output to the log host.

[Sysname] info-center source default loghost deny

To avoid outputting unnecessary information, disable all modules from outputting log information to the specified destination (loghost in this example) before you configure an output rule.

# Configure an output rule to enable output to the log host FTP logs that have a severity level of at least informational.

[Sysname] info-center source ftp loghost level informational

2.        Configure the log host:

The following configurations were performed on Linux. Other Linux distributions have similar configurations.

a.    Log in to the log host as a root user.

b.    Create a subdirectory named Device in the directory /var/log/, and create file info.log in the Device directory to save logs of Device.

# mkdir /var/log/Device

# touch /var/log/Device/info.log

c.    Edit the file syslog.conf in directory /etc/ and add the following contents.

# Device configuration messages

local5.info /var/log/Device/info.log

In the above configuration, local5 is the name of the logging facility used by the log host to receive logs. info is the informational level. The Linux system will store the log information with a severity level equal to or higher than informational to the file /var/log/Device/info.log.

 

 

NOTE:

Follow these guidelines while editing the file /etc/syslog.conf:

·      Comments must be on a separate line and must begin with a pound sign (#).

·      No redundant spaces are allowed after the file name.

·      The logging facility name and the severity level specified in the /etc/syslog.conf file must be identical to those configured on the device by using the info-center loghost and info-center source commands. Otherwise, the log information might not be output to the log host.

 

d.    Display the process ID of syslogd, kill the syslogd process, and then restart syslogd by using the -r option to apply the new configuration.

Make sure the syslogd process is started with the -r option on a Linux log host.

# ps -ae | grep syslogd

147

# kill -9 147

# syslogd -r &

Now, the system can record log information into the specified file.

 


Configuring SNMP

This chapter provides an overview of the Simple Network Management Protocol (SNMP) and guides you through the configuration procedure.

Overview

SNMP is an Internet standard protocol widely used for a management station to access and operate the devices on a network, regardless of their vendors, physical characteristics, and interconnect technologies.

SNMP enables network administrators to read and set the variables on managed devices for state monitoring, troubleshooting, statistics collection, and other management purposes.

FIPS compliance

The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more information about FIPS mode, see Security Configuration Guide.

SNMP framework

The SNMP framework comprises the following elements:

·          SNMP manager—Works on an NMS to monitor and manage the SNMP-capable devices in the network.

·          SNMP agent—Works on a managed device to receive and handle requests from the NMS, and sends notifications to the NMS when events, such as an interface state change, occur.

·          Management Information Base (MIB)—Specifies the variables (for example, interface status and CPU usage) maintained by the SNMP agent for the SNMP manager to read and set.

Figure 22 Relationship between NMS, agent, and MIB

 

MIB and view-based MIB access control

A MIB stores variables called "nodes" or "objects" in a tree hierarchy and identifies each node with a unique OID. An OID is a dotted numeric string that uniquely identifies the path from the root node to a leaf node. For example, object B in Figure 23 is uniquely identified by the OID {1.2.1.1}.

Figure 23 MIB tree

 

A MIB view represents a set of MIB objects (or MIB object hierarchies) with certain access privileges and is identified by a view name. The MIB objects included in the MIB view are accessible while those excluded from the MIB view are inaccessible.

A MIB view can have multiple view records each identified by a view-name oid-tree pair.

You control access to the MIB by assigning MIB views to SNMP groups or communities.
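MIB-view matching is essentially an OID prefix test over the view records. The following is a minimal sketch that ignores sub-tree masks and assumes the deepest matching record wins; it is an illustration of the concept, not the device's algorithm:

```python
def oid_in_subtree(oid: str, subtree: str) -> bool:
    """True if oid equals subtree or lies beneath it in the MIB tree."""
    o, s = oid.split("."), subtree.split(".")
    return o[:len(s)] == s

def view_permits(oid, records):
    """Evaluate (kind, subtree) view records; the deepest match decides."""
    best_len, permit = -1, False
    for kind, subtree in records:
        depth = len(subtree.split("."))
        if oid_in_subtree(oid, subtree) and depth > best_len:
            best_len, permit = depth, (kind == "included")
    return permit

# Hypothetical view: include everything under 1, but exclude the 1.3 subtree.
view = [("included", "1"), ("excluded", "1.3")]
assert view_permits("1.2.1.1", view) is True   # object B from Figure 23 is accessible
assert view_permits("1.3.6.1", view) is False  # excluded by the deeper record
```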

SNMP operations

SNMP provides the following basic operations:

·          Get—NMS retrieves the SNMP object nodes in an agent MIB.

·          Set—NMS modifies the value of an object node in an agent MIB.

·          Notification—SNMP agent sends traps or informs to report events to the NMS. The difference between these two types of notification is that informs require acknowledgement but traps do not. Traps are available in SNMPv1, SNMPv2c, and SNMPv3, but informs are available only in SNMPv2c and SNMPv3.

Protocol versions

SNMPv1, SNMPv2c, and SNMPv3 are supported in non-FIPS mode. Only SNMPv3 is supported in FIPS mode. An NMS and an SNMP agent must use the same SNMP version to communicate with each other.

·          SNMPv1—Uses community names for authentication. To access an SNMP agent, an NMS must use the same community name as set on the SNMP agent. If the community name used by the NMS differs from the community name set on the agent, the NMS cannot establish an SNMP session to access the agent or receive traps from the agent.

·          SNMPv2c—Uses community names for authentication. SNMPv2c is compatible with SNMPv1, but supports more operation types, data types, and error codes.

·          SNMPv3—Uses a user-based security model (USM) to secure SNMP communication. You can configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets for integrity, authenticity, and confidentiality.

Access control modes

SNMP uses the following modes to control access to MIB objects:

·          View-based Access Control Model—The VACM mode controls access to MIB objects by assigning MIB views to SNMP communities or users.

·          Role-based access control—The RBAC mode controls access to MIB objects by assigning user roles to SNMP communities or users:

-  An SNMP community or user with the predefined user role network-admin or level-15 has read and write access to all MIB objects.

-  An SNMP community or user with the predefined user role network-operator has read-only access to all MIB objects.

-  An SNMP community or user with a user role specified by the role command accesses MIB objects through the user role rules specified by the rule command.

If you create the same SNMP community or user with both modes multiple times, the most recent configuration takes effect. For more information about user roles and the rule command, see Fundamentals Command Reference.

For an NMS to access an agent:

·          The RBAC mode requires the user role bound to a community name or username to have the same access right to MIB objects as the NMS.

·          The VACM mode requires only the access right from the NMS to MIB objects.

The RBAC mode is more secure. As a best practice, use the RBAC mode to control access to MIB objects.

Configuring SNMP basic parameters

SNMPv3 differs from SNMPv1 and SNMPv2c in many ways. Their configuration procedures are described in separate sections.

Configuring SNMPv1 or SNMPv2c basic parameters

SNMPv1 and SNMPv2c settings are supported only in non-FIPS mode.

To configure SNMPv1 or SNMPv2c basic parameters:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       (Optional.) Enable the SNMP agent.

snmp-agent

By default, the SNMP agent is disabled.

The SNMP agent is enabled when you perform any command that begins with snmp-agent except for the snmp-agent calculate-password command.

3.       (Optional.) Configure the system contact.

snmp-agent sys-info contact sys-contact

By default, the system contact is Hangzhou H3C Tech. Co., Ltd.

4.       (Optional.) Configure the system location.

snmp-agent sys-info location sys-location

By default, the system location is Hangzhou, China.

5.       Enable SNMPv1 or SNMPv2c.

snmp-agent sys-info version { all | { v1 | v2c | v3 } * }

The default is SNMPv3.

6.       (Optional.) Change the local engine ID.

snmp-agent local-engineid engineid

By default, the local engine ID is the company ID plus the device ID.

7.       (Optional.) Create or update a MIB view.

snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]

By default, the MIB view ViewDefault is predefined. In this view, all MIB objects in the iso subtree except the snmpUsmMIB, snmpVacmMIB, and snmpModules.18 subtrees are accessible.

Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB sub-tree masks multiple times, the most recent configuration takes effect. Except for the four sub-trees in the default MIB view, you can create up to 16 unique MIB view records.

8.       Configure the SNMP access right.

·         (Method 1.) Create an SNMP community:
In VACM mode:
snmp-agent community { read | write } [ simple | cipher ] community-name [ mib-view view-name ] [ acl acl-number ]
In RBAC mode:
snmp-agent community [ simple | cipher ] community-name user-role role-name [ acl acl-number ]

·         (Method 2.) Create an SNMPv1/v2c group, and add users to the group:

a.    snmp-agent group { v1 | v2c } group-name [ read-view view-name ] [ write-view view-name ] [ notify-view view-name ] [ acl acl-number ]

b.    snmp-agent usm-user { v1 | v2c } user-name group-name [ acl acl-number ]

By default, no SNMP group or SNMP community exists.

The community name created by method 1 and the username created by method 2 are equivalent. Either must be the same as the community name configured on the NMS.

9.       (Optional.) Create an SNMP context.

snmp-agent context context-name

By default, no SNMP context is configured on the device.

10.     (Optional.) Map an SNMP community to an SNMP context.

snmp-agent community-map community-name context context-name

By default, no mapping between an SNMP community and an SNMP context exists on the device.

11.     (Optional.) Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.

snmp-agent packet max-size byte-count

By default, the maximum SNMP packet size that the SNMP agent can handle is 1500 bytes.

12.     (Optional.) Specify the UDP port for receiving SNMP packets.

snmp-agent port port-number

By default, the device uses UDP port 161 for receiving SNMP packets.
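The mib-view command in this table defines view records following the VACM model (RFC 3415): an OID is in a view when it falls under the record's subtree, with the optional mask marking which sub-identifiers must match. A minimal sketch of that matching, assuming the mask is written as a bit string (illustrative, not the device implementation):

```python
# Sketch of VACM-style MIB view matching: a 1 bit in the mask means the
# corresponding sub-identifier of the subtree must match; a 0 bit is a wildcard.

def in_view(oid, subtree, mask_bits=None):
    o = [int(x) for x in oid.split(".")]
    s = [int(x) for x in subtree.split(".")]
    if len(o) < len(s):
        return False
    if mask_bits is None:
        mask_bits = "1" * len(s)   # no mask: every sub-identifier must match
    for i, sub in enumerate(s):
        bit = mask_bits[i] if i < len(mask_bits) else "1"
        if bit == "1" and o[i] != sub:
            return False
    return True

# 1.3.6.1.2.1.11 is the snmp subtree; objects below it are in the view.
print(in_view("1.3.6.1.2.1.11.1.0", "1.3.6.1.2.1.11"))  # True
print(in_view("1.3.6.1.2.1.1.5.0", "1.3.6.1.2.1.11"))   # False
# A 0 bit makes the 7th sub-identifier a wildcard:
print(in_view("1.3.6.1.2.1.2.2", "1.3.6.1.2.1.11.2", "1111110"))  # True
```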

 

Configuring SNMPv3 basic parameters

SNMPv3 users are managed in groups. All SNMPv3 users in a group share the same security model, but can use different authentication and privacy key settings. To implement a security model for a user and avoid SNMP communication failures, make sure the security model configuration for the group and the security key settings for the user are compliant with Table 15 and match the settings on the NMS.

Table 15 Basic security setting requirements for different security models

Security model

Security model keyword for the group

Security key settings for the user

Remarks

Authentication with privacy

privacy

Authentication key, privacy key

If the authentication key or the privacy key is not configured, SNMP communication will fail.

Authentication without privacy

authentication

Authentication key

If no authentication key is configured, SNMP communication will fail.

The privacy key (if any) for the user does not take effect.

No authentication, no privacy

Neither authentication nor privacy

None

The authentication and privacy keys, if configured, do not take effect.
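The requirements in Table 15 can be restated as a small compatibility check. The function and parameter names below are assumptions for illustration only:

```python
# Compact restatement of Table 15: whether SNMPv3 communication succeeds given
# the group's security model keyword and the keys configured for the user.

def snmpv3_comm_ok(group_keyword, has_auth_key, has_priv_key):
    if group_keyword == "privacy":
        return has_auth_key and has_priv_key   # both keys required
    if group_keyword == "authentication":
        return has_auth_key                    # a privacy key, if any, is ignored
    return True                                # no authentication, no privacy

print(snmpv3_comm_ok("privacy", True, False))        # False
print(snmpv3_comm_ok("authentication", True, True))  # True
print(snmpv3_comm_ok(None, False, False))            # True
```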

 

To configure SNMPv3 basic parameters:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       (Optional.) Enable the SNMP agent.

snmp-agent

By default, the SNMP agent is disabled.

The SNMP agent is enabled when you perform any command that begins with snmp-agent except for the snmp-agent calculate-password command.

3.       (Optional.) Configure the system contact.

snmp-agent sys-info contact sys-contact

By default, the system contact is Hangzhou H3C Tech. Co., Ltd.

4.       (Optional.) Configure the system location.

snmp-agent sys-info location sys-location

By default, the system location is Hangzhou, China.

5.       Enable SNMPv3.

snmp-agent sys-info version { all | { v1 | v2c | v3 } * }

The default is SNMPv3.

6.       (Optional.) Change the local engine ID.

snmp-agent local-engineid engineid

By default, the local engine ID is the company ID plus the device ID.

* IMPORTANT:

After you change the local engine ID, the existing SNMPv3 users and encrypted keys become invalid, and you must reconfigure them.

7.       (Optional.) Configure a remote engine ID.

snmp-agent remote ip-address [ vpn-instance vpn-instance-name ] engineid engineid

By default, no remote engine ID is configured.

To send informs to an SNMPv3 NMS, you must configure the SNMP engine ID of the NMS.

8.       (Optional.) Create or update a MIB view.

snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ]

By default, the MIB view ViewDefault is predefined. In this view, all MIB objects in the iso subtree except the snmpUsmMIB, snmpVacmMIB, and snmpModules.18 subtrees are accessible.

Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB sub-tree masks multiple times, the most recent configuration takes effect. Except for the four sub-trees in the default MIB view, you can create up to 16 unique MIB view records.

9.       Create an SNMPv3 group.

·         In non-FIPS mode:
snmp-agent group v3 group-name [ authentication | privacy ] [ read-view view-name ] [ write-view view-name ] [ notify-view view-name ] [ acl acl-number ]

·         In FIPS mode:
snmp-agent group v3 group-name { authentication | privacy } [ read-view read-view ] [ write-view write-view ] [ notify-view notify-view ] [ acl acl-number ]

By default, no SNMP group exists.

10.     (Optional.) Create an SNMP context.

snmp-agent context context-name

By default, no SNMP context is configured on the device.

11.     (Optional.) Calculate a digest for the ciphertext key converted from a plaintext key.

·         In non-FIPS mode:
snmp-agent calculate-password plain-password mode { 3desmd5 | 3dessha | md5 | sha } { local-engineid | specified-engineid engineid }

·         In FIPS mode:
snmp-agent calculate-password plain-password mode sha { local-engineid | specified-engineid engineid }

N/A

12.     Create an SNMPv3 user.

·         In non-FIPS mode (in VACM mode):
snmp-agent usm-user v3 user-name group-name [ remote ip-address [ vpn-instance vpn-instance-name ] ] [ { cipher | simple } authentication-mode { md5 | sha } auth-password [ privacy-mode { aes128 | 3des | des56 } priv-password ] ] [ acl acl-number ]

·         In non-FIPS mode (in RBAC mode):
snmp-agent usm-user v3 user-name user-role role-name [ remote ip-address [ vpn-instance vpn-instance-name ] ] [ { cipher | simple } authentication-mode { md5 | sha } auth-password [ privacy-mode { aes128 | 3des | des56 } priv-password ] ] [ acl acl-number ]

·         In FIPS mode (in VACM mode):
snmp-agent usm-user v3 user-name group-name [ remote ip-address [ vpn-instance vpn-instance-name ] ] { cipher | simple } authentication-mode sha auth-password [ privacy-mode aes128 priv-password ] [ acl acl-number ]

·         In FIPS mode (in RBAC mode):
snmp-agent usm-user v3 user-name user-role role-name [ remote ip-address [ vpn-instance vpn-instance-name ] ] { cipher | simple } authentication-mode sha auth-password [ privacy-mode aes128 priv-password ] [ acl acl-number ]

If the cipher keyword is specified, the arguments auth-password and priv-password are used as encrypted keys.

To send informs to an SNMPv3 NMS, you must configure the remote ip-address option to specify the IP address of the NMS.

13.     (Optional.) Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle.

snmp-agent packet max-size byte-count

By default, the maximum SNMP packet size that the SNMP agent can handle is 1500 bytes.

14.     (Optional.) Specify the UDP port for receiving SNMP packets.

snmp-agent port port-number

By default, the device uses UDP port 161 for receiving SNMP packets.
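SNMPv3 authentication keys are derived from passwords using the RFC 3414 password-to-key and key-localization scheme, which is the kind of computation behind the snmp-agent calculate-password command for a given engine ID. A sketch using the RFC 3414 Appendix A test vector (illustrative, not the device implementation):

```python
import hashlib

def localize_key(password, engine_id, algo="md5"):
    """RFC 3414 password-to-key plus key localization (illustrative sketch)."""
    # Step 1: hash the password repeated out to exactly 1 MiB (1048576 bytes).
    stretched = (password * (1048576 // len(password) + 1))[:1048576]
    ku = hashlib.new(algo, stretched).digest()
    # Step 2: localize the intermediate key with the authoritative engine ID.
    return hashlib.new(algo, ku + engine_id + ku).hexdigest()

# RFC 3414 Appendix A test vector: password "maplesyrup" with engine ID
# 0x000000000000000000000002 yields this MD5 localized key.
engine = bytes.fromhex("000000000000000000000002")
print(localize_key(b"maplesyrup", engine, "md5"))
# 526f5eed9fcce26f8964c2930787d82b
```

Because the engine ID is mixed into the key, changing the local engine ID invalidates existing keys, which is why the IMPORTANT note above requires reconfiguring SNMPv3 users after such a change.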

 

Configuring SNMP logging

Enable SNMP logging only if necessary. SNMP logging is memory-intensive and might impact device performance.

The SNMP agent logs Get requests, Set requests, Set responses, and SNMP notifications, but does not log Get responses.

·          Get operation—The agent logs the IP address of the NMS, name of the accessed node, and node OID.

·          Set operation—The agent logs the NMS' IP address, name of accessed node, node OID, variable value, and error code and index for the Set operation.

·          Notification tracking—The agent logs the SNMP notifications after sending them to the NMS.

To configure SNMP logging:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       (Optional.) Enable SNMP logging.

snmp-agent log { all | get-operation | set-operation }

By default, SNMP logging is disabled.

3.       (Optional.) Enable SNMP notification logging.

snmp-agent trap log

By default, SNMP notification logging is disabled.

 

Configuring SNMP notifications

The SNMP agent sends notifications (traps and informs) to inform the NMS of significant events, such as link state changes and user logins or logouts. Unless otherwise stated, the trap keyword in the command line includes both traps and informs.

Enabling SNMP notifications

Enable an SNMP notification only if necessary. SNMP notifications are memory-intensive and might affect device performance.

To generate linkUp or linkDown notifications when the link state of an interface changes, you must enable linkUp or linkDown notification globally by using the snmp-agent trap enable standard [ linkdown | linkup ] * command and on the interface by using the enable snmp trap updown command.

After you enable notifications for a module, whether the module generates notifications also depends on the configuration of the module. For more information, see the configuration guide for each module.

To enable SNMP notifications:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable notifications globally.

snmp-agent trap enable [ configuration | protocol | standard [ authentication | coldstart | linkdown | linkup | warmstart ] * | system ]

By default, SNMP configuration notifications, standard notifications, and system notifications are enabled. Whether other SNMP notifications are enabled varies with modules.

3.       Enter interface view.

interface interface-type interface-number

N/A

4.       Enable link state notifications.

enable snmp trap updown

By default, link state notifications are enabled.

 

Configuring the SNMP agent to send notifications to a host

You can configure the SNMP agent to send notifications as traps or informs to a host, typically an NMS, for analysis and management. Traps are less reliable and use fewer resources than informs, because an NMS does not send an acknowledgement when it receives a trap.

Configuration guidelines

When network congestion occurs or the destination is not reachable, the SNMP agent buffers notifications in a queue. You can configure the queue size and the notification lifetime (the maximum time that a notification can stay in the queue). A notification is deleted when its lifetime expires. When the notification queue is full, the oldest notifications are automatically deleted.
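The buffering behavior described above can be modeled as a bounded queue with per-entry lifetimes. This is a toy sketch, not device code; the class name and defaults are illustrative:

```python
from collections import deque

class NotificationQueue:
    """Toy model of the described buffering: a fixed-size queue that drops the
    oldest entry when full and discards entries whose lifetime has expired."""
    def __init__(self, size=100, lifetime=120):
        self.size, self.lifetime = size, lifetime
        self.q = deque()   # entries are (enqueue_time, notification)
    def enqueue(self, notification, now):
        self._expire(now)
        if len(self.q) == self.size:
            self.q.popleft()          # queue full: drop the oldest
        self.q.append((now, notification))
    def _expire(self, now):
        while self.q and now - self.q[0][0] > self.lifetime:
            self.q.popleft()          # lifetime expired: delete

q = NotificationQueue(size=2, lifetime=120)
q.enqueue("linkDown", now=0)
q.enqueue("linkUp", now=1)
q.enqueue("coldStart", now=2)        # full: "linkDown" is dropped
print([n for _, n in q.q])           # ['linkUp', 'coldStart']
q.enqueue("warmStart", now=200)      # both earlier entries have expired
print([n for _, n in q.q])           # ['warmStart']
```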

You can extend standard linkUp/linkDown notifications to include interface description and interface type, but must make sure that the NMS supports the extended SNMP messages.

To send informs, make sure:

·          The SNMP agent and the NMS use SNMPv2c or SNMPv3.

·          If SNMPv3 is used, you must configure the SNMP engine ID of the NMS when you configure SNMPv3 basic settings. Also, specify the IP address of the SNMP engine when you create the SNMPv3 user.

Configuration prerequisites

·          Configure the SNMP agent with the same basic SNMP settings as the NMS. If SNMPv1 or SNMPv2c is used, you must configure a community name. If SNMPv3 is used, you must configure an SNMPv3 user, a MIB view, and a remote SNMP engine ID associated with the SNMPv3 user for notifications.

·          The SNMP agent and the NMS can reach each other.

Configuration procedure

To configure the SNMP agent to send notifications to a host:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure a target host.

·         (Method 1.) Send traps to the target host:
In non-FIPS mode:
snmp-agent target-host trap address udp-domain ip-address [ udp-port port-number ] [ vpn-instance vpn-instance-name ]
params securityname security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
In FIPS mode:
snmp-agent target-host trap address udp-domain ip-address [ udp-port port-number ] [ vpn-instance vpn-instance-name ]
params securityname security-string v3 { authentication | privacy }

·         (Method 2.) Send informs to the target host:
In non-FIPS mode:
snmp-agent target-host inform address udp-domain ip-address [ udp-port port-number ] [ vpn-instance vpn-instance-name ]
params securityname security-string { v2c | v3 [ authentication | privacy ] }
In FIPS mode:
snmp-agent target-host inform address udp-domain ip-address [ udp-port port-number ] [ vpn-instance vpn-instance-name ]
params securityname security-string v3 { authentication | privacy }

By default, no target host is configured.

3.       (Optional.) Configure a source address for notifications.

snmp-agent { inform | trap } source interface-type { interface-number | interface-number.subnumber }

By default, SNMP uses the IP address of the outgoing routed interface as the source IP address.

4.       (Optional.) Enable extended linkUp/linkDown notifications.

snmp-agent trap if-mib link extended

By default, the SNMP agent sends standard linkUp/linkDown notifications.

5.       (Optional.) Configure the notification queue size.

snmp-agent trap queue-size size

By default, the notification queue can hold 100 notification messages.

6.       (Optional.) Configure the notification lifetime.

snmp-agent trap life seconds

The default notification lifetime is 120 seconds.

 

Displaying the SNMP settings

Execute display commands in any view. The display snmp-agent community command is supported only in non-FIPS mode.

 

Task

Command

Display SNMP agent system information, including the contact, physical location, and SNMP version.

display snmp-agent sys-info [ contact | location | version ] *

Display SNMP agent statistics.

display snmp-agent statistics

Display the local engine ID.

display snmp-agent local-engineid

Display SNMP group information.

display snmp-agent group [ group-name ]

Display remote engine IDs.

display snmp-agent remote [ ip-address [ vpn-instance vpn-instance-name ] ]

Display basic information about the notification queue.

display snmp-agent trap queue

Display the modules that can generate notifications and their notification status (enable or disable).

display snmp-agent trap-list

Display SNMPv3 user information.

display snmp-agent usm-user [ engineid engineid | username user-name | group group-name ] *

Display SNMPv1 or SNMPv2c community information.

display snmp-agent community [ read | write ]

Display MIB view information.

display snmp-agent mib-view [ exclude | include | viewname view-name ]

Display SNMP MIB node information.

display snmp-agent mib-node [ details | index-node | trap-node | verbose ]

Display an SNMP context.

display snmp-agent context [ context-name ]

 

SNMPv1/SNMPv2c configuration example

The SNMPv1 configuration procedure is the same as the SNMPv2c configuration procedure. This example uses SNMPv1 and applies only to non-FIPS mode.

Network requirements

As shown in Figure 24, the NMS (1.1.1.2/24) uses SNMPv1 to manage the SNMP agent (1.1.1.1/24), and the agent automatically sends notifications to report events to the NMS.

Figure 24 Network diagram

 

Configuration procedure

1.        Configure the SNMP agent:

# Configure the IP address of the agent and make sure the agent and the NMS can reach each other. (Details not shown.)

# Specify SNMPv1, and create the read-only community public and the read and write community private.

<Agent> system-view

[Agent] snmp-agent sys-info version v1

[Agent] snmp-agent community read public

[Agent] snmp-agent community write private

# Configure contact and physical location information for the agent.

[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306

[Agent] snmp-agent sys-info location telephone-closet,3rd-floor

# Enable SNMP notifications, set the NMS at 1.1.1.2 as an SNMP trap destination, and use public as the community name. (To make sure the NMS can receive traps, specify the same SNMP version in the snmp-agent target-host command as is configured on the NMS.)

[Agent] snmp-agent trap enable

[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname public v1

2.        Configure the SNMP NMS:

?  Specify SNMPv1.

?  Create the read-only community public, and create the read and write community private.

?  Set the timeout timer and maximum number of retries as needed.

For information about configuring the NMS, see the NMS manual.

 

 

NOTE:

The SNMP settings on the agent and the NMS must match.

 

3.        Verify the configuration:

# Try to get the MTU value of the NULL0 interface from the agent. The attempt succeeds.

Send request to 1.1.1.1/161 ...

Protocol version: SNMPv1

Operation: Get

Request binding:

1: 1.3.6.1.2.1.2.2.1.4.135471

Response binding:

1: Oid=ifMtu.135471 Syntax=INT Value=1500

Get finished

# Use a wrong community name to get the value of a MIB node on the agent. You can see an authentication failure trap on the NMS.

1.1.1.1/2934 V1 Trap = authenticationFailure

SNMP Version = V1

Community = public

Command = Trap

Enterprise = 1.3.6.1.4.1.43.1.16.4.3.50

GenericID = 4

SpecificID = 0

Time Stamp = 8:35:25.68

SNMPv3 in VACM mode configuration example

Network requirements

As shown in Figure 25, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the interface status of the agent (1.1.1.1/24). The agent automatically sends notifications to report events to the NMS. The default UDP port 162 is used for SNMP notifications.

The NMS and the agent perform authentication when they set up an SNMP session. The authentication algorithm is SHA-1 and the authentication key is 123456TESTauth&!. The NMS and the agent also encrypt the SNMP packets between them by using the AES algorithm and the privacy key 123456TESTencr&!.

Figure 25 Network diagram

 

Configuration procedure

1.        Configure the agent:

# Configure the IP address of the agent, and make sure the agent and the NMS can reach each other. (Details not shown.)

# Assign the NMS (SNMPv3 group managev3group) read and write access to the objects under the snmp node (OID 1.3.6.1.2.1.11), and deny its access to any other MIB object.

<Agent> system-view

[Agent] undo snmp-agent mib-view ViewDefault

[Agent] snmp-agent mib-view included test snmp

[Agent] snmp-agent group v3 managev3group privacy read-view test write-view test

# Add the user managev3user to the SNMPv3 group managev3group, and set the authentication algorithm to sha, authentication key to 123456TESTauth&!, encryption algorithm to aes128, and privacy key to 123456TESTencr&!.

[Agent] snmp-agent usm-user v3 managev3user managev3group simple authentication-mode sha 123456TESTauth&! privacy-mode aes128 123456TESTencr&!

# Configure contact and physical location information for the agent.

[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306

[Agent] snmp-agent sys-info location telephone-closet,3rd-floor

# Enable notifications, specify the NMS at 1.1.1.2 as a trap destination, and set the username to managev3user for the traps.

[Agent] snmp-agent trap enable

[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname managev3user v3 privacy

2.        Configure the SNMP NMS:

?  Specify SNMPv3.

?  Create the SNMPv3 user managev3user.

?  Enable both authentication and privacy functions.

?  Use SHA-1 for authentication and AES for encryption.

?  Set the authentication key to 123456TESTauth&! and the privacy key to 123456TESTencr&!.

?  Set the timeout timer and maximum number of retries.

For information about configuring the NMS, see the NMS manual.

 

 

NOTE:

The SNMP settings on the agent and the NMS must match.

 

3.        Verify the configuration:

# Try to get the MTU value of the NULL0 interface from the agent. The get attempt succeeds.

Send request to 1.1.1.1/161 ...

Protocol version: SNMPv3

Operation: Get

Request binding:

1: 1.3.6.1.2.1.2.2.1.4.135471

Response binding:

1: Oid=ifMtu.135471 Syntax=INT Value=1500

Get finished

# Try to get the device name from the agent. The get attempt fails because the NMS has no access right to the node.

Send request to 1.1.1.1/161 ...

Protocol version: SNMPv3

Operation: Get

Request binding:

1: 1.3.6.1.2.1.1.5.0

Response binding:

1: Oid=sysName.0 Syntax=noSuchObject Value=NULL

Get finished

# Execute the shutdown or undo shutdown command on an idle interface on the agent. You can see the link state change traps on the NMS:

1.1.1.1/3374 V3 Trap = linkdown

SNMP Version = V3

Community = managev3user

Command = Trap

1.1.1.1/3374 V3 Trap = linkup

SNMP Version = V3

Community = managev3user

Command = Trap

SNMPv3 in RBAC mode configuration example

Network requirements

As shown in Figure 26, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the interface status of the agent (1.1.1.1/24). The agent automatically sends notifications to report events to the NMS.

The NMS and the agent perform authentication when they establish an SNMP session. The authentication algorithm is SHA-1 and the authentication key is 123456TESTauth&!. The NMS and the agent also encrypt the SNMP packets between them by using the AES algorithm and the privacy key 123456TESTencr&!.

Figure 26 Network diagram

 

Configuration procedure

1.        Configure the agent:

# Configure the IP address of the agent, and make sure the agent and the NMS can reach each other. (Details not shown.)

# Create the user role test, and permit test to have read and write access to the snmp node (OID 1.3.6.1.2.1.11).

<Agent> system-view

[Agent] role name test

[Agent-role-test] rule 1 permit read write oid 1.3.6.1.2.1.11

# Permit the user role test to have read-only access to the system node (OID 1.3.6.1.2.1.1) and hh3cUIMgt node (OID 1.3.6.1.4.1.25506.2.2).

[Agent-role-test] rule 2 permit read oid 1.3.6.1.4.1.25506.2.2

[Agent-role-test] rule 3 permit read oid 1.3.6.1.2.1.1

[Agent-role-test] quit

# Create the SNMPv3 user managev3user with the user role test, and enable the authentication with privacy security model for the user. Set the authentication algorithm to sha, authentication key to 123456TESTauth&!, encryption algorithm to aes128, and privacy key to 123456TESTencr&!.

[Agent] snmp-agent usm-user v3 managev3user user-role test simple authentication-mode sha 123456TESTauth&! privacy-mode aes128 123456TESTencr&!

# Configure contact and physical location information for the agent.

[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306

[Agent] snmp-agent sys-info location telephone-closet,3rd-floor

# Enable notifications, specify the NMS at 1.1.1.2 as a notification destination, and set the username to managev3user for the notifications.

[Agent] snmp-agent trap enable

[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname managev3user v3 privacy

2.        Configure the SNMP NMS:

?  Specify SNMPv3.

?  Create the SNMPv3 user managev3user.

?  Enable both authentication and privacy functions.

?  Use SHA-1 for authentication and AES for encryption.

?  Set the authentication key to 123456TESTauth&! and the privacy key to 123456TESTencr&!.

?  Set the timeout timer and maximum number of retries.

For information about how to configure the NMS, see the NMS manual.

 

 

NOTE:

The SNMP settings on the agent and the NMS must match.

 

3.        Verify the configuration:

# Try to get the value of sysName from the agent. The get attempt succeeds.

Send request to 1.1.1.1/161 ...

Protocol version: SNMPv3

Operation: Get

Request binding:

1: 1.3.6.1.2.1.1.5.0

Response binding:

1: Oid=sysName.0 Syntax=OCTETS Value=Agent

Get finished

# Try to set the device name on the agent. The set attempt fails because the NMS does not have write access to the node.

Send request to 1.1.1.1/161 ...

Protocol version: SNMPv3

Operation: Set

Request binding:

1: 1.3.6.1.2.1.1.5.0

Response binding:

Session failed ! SNMP: Cannot access variable, No Access, error index=1

1: Oid=sysName.0 Syntax=OCTETS Value=h3c

Set finished

%Aug 14 16:13:21:475 2013 Agent SNMP/5/SNMP_SETDENY: -IPAddr=1.1.1.2-SecurityName=managev3user-SecurityModel=SNMPv3-OP=SET-Node=sysName(1.3.6.1.2.1.1.5.0)-Value=h3c; Permission denied.

# Log in to the agent. You can see a notification on the NMS.

hh3cLogIn inform received from: 192.168.41.41 at 2013/8/14 17:36:16

  Time stamp: 0 days 08h:03m:43s.37th

  Agent address: 1.1.1.1 Port: 62861 Transport: IP/UDP Protocol: SNMPv2c Inform

  Manager address: 1.1.1.2 Port: 10005 Transport: IP/UDP

  Community: public

  Bindings (4)

    Binding #1: sysUpTime.0 *** (timeticks) 0 days 08h:03m:43s.37th

    Binding #2: snmpTrapOID.0 *** (oid) hh3cLogIn

    Binding #3: hh3cTerminalUserName.0 *** (octets) testuser [74.65.73.74.75.73.65.72 (hex)]

    Binding #4: hh3cTerminalSource.0 *** (octets) VTY [56.54.59 (hex)]

 



Configuring samplers

A sampler selects a packet from sequential packets and sends the packet to other service modules for processing. It operates in fixed mode, which selects the first packet from sequential packets in each sampling.
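Fixed-mode sampling can be sketched as follows. This is illustrative only; `fixed_sample` is a hypothetical helper, not a device API:

```python
# Fixed-mode sampling: out of every `rate` sequential packets, the first
# packet of the group is selected.

def fixed_sample(packets, rate):
    """Yield the first packet of each group of `rate` sequential packets."""
    for i, pkt in enumerate(packets):
        if i % rate == 0:
            yield pkt

# With a sampling rate of 4, packets 1, 5, and 9 of a 10-packet stream
# are selected.
print(list(fixed_sample(range(1, 11), 4)))  # [1, 5, 9]
```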

Port mirroring can use a sampler to sample packets to reduce the number of mirrored packets. For more information about port mirroring, see "Configuring port mirroring."

Creating a sampler

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a sampler.

sampler sampler-name mode fixed packet-interval rate

By default, no sampler exists.

 

Displaying and maintaining a sampler

Execute display commands in any view.

 

Task

Command

Display configuration information for a sampler (in standalone mode).

display sampler [ sampler-name ] [ slot slot-number ]

Display configuration information for a sampler (in IRF mode).

display sampler [ sampler-name ] [ chassis chassis-number slot slot-number ]

 

 

 


Configuring port mirroring

The port mirroring feature is available on both Layer 2 and Layer 3 Ethernet interfaces. The term "interface" in this chapter collectively refers to these two types of interfaces. You can use the port link-mode command to configure an Ethernet port as a Layer 2 or Layer 3 interface (see Layer 2—LAN Switching Configuration Guide).

Overview

Port mirroring copies the packets passing through a port or CPU to the monitor port that connects to a data monitoring device for packet analysis.

Terminology

The following terms are used in port mirroring configuration.

Mirroring source

The mirroring sources can be one or more monitored ports or CPUs. The monitored ports and CPUs are called source ports and source CPUs, respectively.

Packets passing through mirroring sources are copied to a port connecting to a data monitoring device for packet analysis. The copies are called mirrored packets.

Source device

The device where the mirroring sources reside is called a source device.

Mirroring destination

The mirroring destination connects to a data monitoring device and is the destination port (also known as the monitor port) of mirrored packets. Mirrored packets are sent out of the monitor port to the data monitoring device.

A monitor port might receive multiple copies of a packet when it monitors multiple mirroring sources. For example, two copies of a packet are received on Port 1 when the following conditions exist:

·          Port 1 is monitoring bidirectional traffic of Port 2 and Port 3 on the same device.

·          The packet travels from Port 2 to Port 3.
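The two-copy scenario above can be sketched as a simple count. The port names and the helper function are hypothetical, for illustration only:

```python
# Why a monitor port can receive two copies of one packet: Port 1 mirrors
# bidirectional traffic of both Port 2 and Port 3, and the packet enters on
# Port 2 and leaves on Port 3.

def mirrored_copies(ingress_port, egress_port, sources):
    """sources maps each source port to the directions mirrored on it."""
    copies = 0
    if "inbound" in sources.get(ingress_port, ()):
        copies += 1   # copied when received on the ingress port
    if "outbound" in sources.get(egress_port, ()):
        copies += 1   # copied again when sent out of the egress port
    return copies

sources = {"Port2": ("inbound", "outbound"), "Port3": ("inbound", "outbound")}
print(mirrored_copies("Port2", "Port3", sources))  # 2
```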

Destination device

The device where the monitor port resides is called the destination device.

Mirroring direction

The mirroring direction specifies the direction of the traffic that is copied on a mirroring source.

·          Inbound—Copies packets received.

·          Outbound—Copies packets sent.

·          Bidirectional—Copies packets received and sent.

Mirroring group

Port mirroring is implemented through mirroring groups, which include local, remote source, and remote destination groups. For more information about the mirroring groups, see "Port mirroring classification and implementation."

Reflector port, egress port, and remote probe VLAN

Reflector ports, remote probe VLANs, and egress ports are used for Layer 2 remote port mirroring. The remote probe VLAN is a dedicated VLAN for transmitting mirrored packets to the destination device. Both the reflector port and egress port reside on a source device and send mirrored packets to the remote probe VLAN.

For more information about the reflector port, egress port, remote probe VLAN, and Layer 2 remote port mirroring, see "Port mirroring classification and implementation."

 

 

NOTE:

On port mirroring devices, all ports except source, destination, reflector, and egress ports are called common ports.

 

Port mirroring classification and implementation

Port mirroring includes local port mirroring and remote port mirroring.

·          Local port mirroring—The mirroring sources and the mirroring destination are on the same device.

·          Remote port mirroring—The mirroring sources and the mirroring destination are on different devices.

Local port mirroring

In local port mirroring, the following conditions exist:

·          The source device is directly connected to a data monitoring device.

·          The source device acts as the destination device to forward mirrored packets to the data monitoring device.

A local mirroring group is a mirroring group that contains the mirroring sources and the mirroring destination on the same device.

In a local mirroring group, the source ports or source CPUs, and the monitor port can be located on different cards of the same device.

Figure 27 Local port mirroring implementation

 

As shown in Figure 27, the source port FortyGigE 1/0/1 and the monitor port FortyGigE 1/0/2 reside on the same device. Packets received on FortyGigE 1/0/1 are copied to FortyGigE 1/0/2. FortyGigE 1/0/2 then forwards the packets to the data monitoring device for analysis.

Remote port mirroring

In remote port mirroring, the following conditions exist:

·          The source device is not directly connected to a data monitoring device.

·          The source device copies mirrored packets to the destination device, which forwards them to the data monitoring device.

·          The mirroring sources and the mirroring destination reside on different devices and are in different mirroring groups.

A remote source group is a mirroring group that contains the mirroring sources. A remote destination group is a mirroring group that contains the mirroring destination. Intermediate devices are the devices between the source device and the destination device.

In Layer 2 remote port mirroring, the mirroring source and the mirroring destination are located on different devices on the same Layer 2 network.

In Layer 2 remote port mirroring, packets are mirrored as follows:

1.        The source device copies packets received on the mirroring sources to the egress port.

2.        The egress port forwards the mirrored packets to the intermediate devices.

3.        The intermediate devices then flood the mirrored packets in the remote probe VLAN and transmit the packets to the destination device.

4.        Upon receiving the mirrored packets, the destination device checks whether the VLAN ID of the mirrored packets is the same as the remote probe VLAN ID. If the two VLAN IDs match, the destination device forwards the mirrored packets to the data monitoring device through the monitor port.

Figure 28 Layer 2 remote port mirroring implementation

 

 

To ensure Layer 2 forwarding of the mirrored packets, assign the intermediate devices' ports facing the source and destination devices to the remote probe VLAN.

In Layer 2 remote port mirroring, the switch does not support bidirectional mirroring on the same port in a mirroring group.

Configuring local port mirroring

A local mirroring group takes effect only when you configure the source ports or source CPUs, and the monitor port for the local mirroring group.

On an IRF fabric, mirroring traffic between IRF member devices is not supported.

Local port mirroring configuration task list

Tasks at a glance

1.       (Required.) Creating a local mirroring group

2.       (Required.) Perform at least one of the following tasks:

○  Configuring source ports for the local mirroring group

○  Configuring source CPUs for the local mirroring group

3.       (Required.) Configuring the monitor port for the local mirroring group

 

Creating a local mirroring group

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a local mirroring group.

mirroring-group group-id local [ sampler sampler-name ]

By default, no local mirroring group exists.

 

Configuring source ports for the local mirroring group

To configure source ports for a local mirroring group, use one of the following methods:

·          Assign a list of source ports to a mirroring group in system view.

·          Assign a port to it as a source port in interface view.

To assign multiple ports to the mirroring group as source ports in interface view, repeat the operation.

Configuration restrictions and guidelines

When you configure source ports for a local mirroring group, follow these restrictions and guidelines:

·          A mirroring group can contain multiple source ports.

·          A source port can belong to only one mirroring group.

·          A source port cannot be configured as a reflector port, egress port, or monitor port.

Configuring source ports in system view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure source ports for the specified local mirroring group.

mirroring-group group-id mirroring-port interface-list { both | inbound | outbound }

By default, no source port is configured for a local mirroring group.

 

Configuring source ports in interface view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the port as a source port for the specified local mirroring group.

mirroring-group group-id mirroring-port { both | inbound | outbound }

By default, a port does not act as a source port for any local mirroring group.
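
The two methods above can be sketched as follows. This is a minimal example; local mirroring group 1 and the FortyGigE interface numbers are assumptions for illustration:

<Device> system-view

[Device] mirroring-group 1 local

# Configure a source port in system view.

[Device] mirroring-group 1 mirroring-port fortygige 1/0/1 both

# Configure a source port in interface view.

[Device] interface fortygige 1/0/2

[Device-FortyGigE1/0/2] mirroring-group 1 mirroring-port inbound

[Device-FortyGigE1/0/2] quit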

 

Configuring source CPUs for the local mirroring group

A mirroring group can contain multiple source CPUs.

To configure source CPUs for a local mirroring group:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure source CPUs for the specified local mirroring group.

mirroring-group group-id mirroring-cpu slot slot-number-list { both | inbound | outbound }

By default, no source CPU is configured for a local mirroring group.
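
For example, to mirror both inbound and outbound packets processed by the CPU of the card in slot 2 (a sketch; the group number and slot number are assumptions):

<Device> system-view

[Device] mirroring-group 1 local

[Device] mirroring-group 1 mirroring-cpu slot 2 both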

 

Configuring the monitor port for the local mirroring group

To configure the monitor port for a local mirroring group, use one of the following methods:

·          Configure the monitor port for the local mirroring group in system view.

·          Assign a port to the mirroring group as the monitor port in interface view.

Configuration restrictions and guidelines

When you configure the monitor port for a mirroring group, follow these restrictions and guidelines:

·          For the mirroring function to operate correctly, disable the spanning tree feature on the monitor port.

·          For a Layer 2 aggregate interface configured as the monitor port of a mirroring group, do not configure its member interfaces as source ports of the mirroring group.

·          A mirroring group contains only one monitor port.

·          Use a monitor port for port mirroring only, so the data monitoring device receives only the mirrored traffic.

·          In source CPU mode, directly connect the monitor port to the data monitoring device. Disable the following features on the monitor port:

○  IGMP snooping.

○  MAC address learning.

○  Spanning tree.

○  Static ARP.

Configuring the monitor port in system view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the monitor port for the specified local mirroring group.

mirroring-group group-id monitor-port interface-type interface-number

By default, no monitor port is configured for a local mirroring group.

 

Configuring the monitor port in interface view

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the port as the monitor port for the specified mirroring group.

mirroring-group group-id monitor-port

By default, a port does not act as the monitor port for any local mirroring group.
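
A minimal sketch of the interface-view method, with the spanning tree feature disabled on the monitor port as recommended (the group and interface numbers are assumptions):

<Device> system-view

[Device] mirroring-group 1 local

[Device] interface fortygige 1/0/3

[Device-FortyGigE1/0/3] mirroring-group 1 monitor-port

[Device-FortyGigE1/0/3] undo stp enable

[Device-FortyGigE1/0/3] quit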

 

Configuring local port mirroring with multiple monitor ports

Typically, you can configure only one monitor port in a local mirroring group. To configure local port mirroring to support multiple monitor ports, use the remote probe VLAN.

In Layer 2 remote port mirroring, mirrored packets are broadcast within the remote probe VLAN.

To broadcast mirrored packets to multiple monitor ports through the remote probe VLAN, perform the following tasks:

1.        Configure a remote source group on the local device.

2.        Specify the reflector port for this mirroring group.

3.        Configure a remote probe VLAN for this mirroring group.

4.        Assign the ports connecting the data monitoring devices to the remote probe VLAN.

Configuration restrictions and guidelines

When you configure local port mirroring to support multiple monitor ports, follow these restrictions and guidelines:

·          Do not configure a Layer 2 aggregate interface as the reflector port.

·          As a best practice, configure an unused port as the reflector port of a remote source group, and do not connect a cable to the reflector port.

·          A mirroring group can contain multiple source ports.

·          For the port mirroring function to operate correctly, do not assign a source port to the remote probe VLAN.

·          If you have already configured a reflector port for a remote source group, do not configure an egress port for it.

·          A VLAN can act as the remote probe VLAN for only one remote source group. As a best practice, use the remote probe VLAN for port mirroring exclusively. Do not create a VLAN interface for the VLAN or configure any other features for the VLAN.

·          A remote probe VLAN must be a static VLAN. To delete this static VLAN, you must first remove the remote probe VLAN configuration by using the undo mirroring-group remote-probe vlan command.

·          If the remote probe VLAN of a remote mirroring group is removed, the remote mirroring group will become invalid.

Configuration procedure

To configure local port mirroring with multiple monitor ports:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.     Create a remote source group.

mirroring-group group-id remote-source [ sampler sampler-name ]

By default, no mirroring group exists on a device.

3.     Configure source ports for the remote source group.

·      In system view:
mirroring-group group-id mirroring-port mirroring-port-list { both | inbound | outbound }

·      In interface view:

a.     interface interface-type interface-number

b.     mirroring-group group-id mirroring-port { both | inbound | outbound }

c.     quit

By default, no source port is configured for a mirroring group.

4.     Configure the reflector port for the remote source group.

mirroring-group group-id reflector-port reflector-port

By default, no reflector port is configured for a mirroring group.

5.     Create the remote probe VLAN and enter VLAN view.

vlan vlan-id

By default, no remote probe VLAN is configured for a mirroring group.

6.     Assign monitor ports to the remote probe VLAN.

port interface-list

By default, a newly created VLAN does not have any member ports.

7.     Return to system view.

quit

N/A

8.     Configure the remote probe VLAN for the remote source group.

mirroring-group group-id remote-probe vlan rprobe-vlan-id

By default, no remote probe VLAN is configured for a mirroring group.

 

Configuring Layer 2 remote port mirroring

To configure Layer 2 remote port mirroring, perform the following tasks:

·          Configure a remote source group on the source device.

·          Configure a cooperating remote destination group on the destination device.

·          If intermediate devices exist, configure the following devices and ports to allow the remote probe VLAN to pass through:

○  Intermediate devices.

○  Ports connected to the intermediate devices on the source and destination devices.

When you configure Layer 2 remote port mirroring, follow these restrictions and guidelines:

·          For a mirrored packet to successfully arrive at the remote destination device, make sure the VLAN ID of the mirrored packet is not removed or changed.

·          Layer 2 remote port mirroring does not support using Layer 2 aggregate interfaces as source ports or monitor ports.

·          As a best practice, configure devices in the order of the destination device, the intermediate devices, and the source device.

·          On an IRF fabric, mirroring traffic between IRF member devices is not supported.

Layer 2 remote port mirroring configuration task list

Tasks at a glance

(Required.) Configuring a remote destination group on the destination device:

1.       Creating a remote destination group

2.       Configuring the monitor port for a remote destination group

3.       Configuring the remote probe VLAN for a remote destination group

4.       Assigning the monitor port to the remote probe VLAN

(Required.) Configuring a remote source group on the source device:

5.       Creating a remote source group

6.       Perform at least one of the following tasks:

○  Configuring source ports for a remote source group

○  Configuring source CPUs for a remote source group

7.       Configuring the egress port for a remote source group

8.       Configuring the remote probe VLAN for a remote source group

 

Configuring a remote destination group on the destination device

Creating a remote destination group

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a remote destination group.

mirroring-group group-id remote-destination [ sampler sampler-name ]

By default, no remote destination group exists on a device.

 

Configuring the monitor port for a remote destination group

To configure the monitor port for a mirroring group, use one of the following methods:

·          Configure the monitor port for the mirroring group in system view.

·          Assign a port to the mirroring group as the monitor port in interface view.

When you configure the monitor port for a remote destination group, follow these restrictions and guidelines:

·          Do not enable the spanning tree feature on the monitor port.

·          Use a monitor port only for port mirroring, so the data monitoring device receives only the mirrored traffic.

·          A mirroring group must contain only one monitor port.

·          In source CPU mode, directly connect the monitor port to the data monitoring device. Disable the following features on the monitor port:

○  IGMP snooping.

○  MAC address learning.

○  Spanning tree.

○  Static ARP.

Configuring the monitor port for a remote destination group in system view

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the monitor port for the specified remote destination group.

mirroring-group group-id monitor-port interface-type interface-number

By default, no monitor port is configured for a remote destination group.

 

Configuring the monitor port for a remote destination group in interface view

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the port as the monitor port for the specified remote destination group.

mirroring-group group-id monitor-port

By default, a port does not act as the monitor port for any remote destination group.

 

Configuring the remote probe VLAN for a remote destination group

When you configure the remote probe VLAN for a remote destination group, follow these restrictions and guidelines:

·          Only an existing static VLAN can be configured as a remote probe VLAN.

·          When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring exclusively.

·          Configure the same remote probe VLAN for the remote mirroring groups on the source and destination devices.

To configure the remote probe VLAN for a remote destination group:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the remote probe VLAN for the specified remote destination group.

mirroring-group group-id remote-probe vlan vlan-id

By default, no remote probe VLAN is configured for a remote destination group.

 

Assigning the monitor port to the remote probe VLAN

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter the interface view of the monitor port.

interface interface-type interface-number

N/A

3.       Assign the port to the remote probe VLAN.

·         For an access port:
port access vlan vlan-id

·         For a trunk port:
port trunk permit vlan vlan-id

·         For a hybrid port:
port hybrid vlan vlan-id { tagged | untagged }

For more information about the port access vlan, port trunk permit vlan, and port hybrid vlan commands, see Layer 2—LAN Switching Command Reference.
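
For example, if the monitor port FortyGigE 1/0/2 is a trunk port and the remote probe VLAN is VLAN 2 (both values are assumptions for illustration):

<DeviceC> system-view

[DeviceC] interface fortygige 1/0/2

[DeviceC-FortyGigE1/0/2] port link-type trunk

[DeviceC-FortyGigE1/0/2] port trunk permit vlan 2

[DeviceC-FortyGigE1/0/2] quit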

 

Configuring a remote source group on the source device

Creating a remote source group

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a remote source group.

mirroring-group group-id remote-source [ sampler sampler-name ]

By default, no remote source group exists on a device.

 

Configuring source ports for a remote source group

To configure source ports for a mirroring group, use one of the following methods:

·          Assign a list of source ports to the mirroring group in system view.

·          Assign a port to the mirroring group as a source port in interface view.

To assign multiple ports to the mirroring group as source ports in interface view, repeat the operation.

When you configure source ports for a remote source group, follow these restrictions and guidelines:

·          Do not assign a source port of a mirroring group to the remote probe VLAN of the mirroring group.

·          A mirroring group can contain multiple source ports.

·          A source port can belong to only one mirroring group.

·          A source port cannot be configured as a reflector port, monitor port, or egress port.

Configuring source ports for a remote source group in system view

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure source ports for the specified remote source group.

mirroring-group group-id mirroring-port interface-list { both | inbound | outbound }

By default, no source port is configured for a remote source group.

 

Configuring a source port for a remote source group in interface view

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the port as a source port for the specified remote source group.

mirroring-group group-id mirroring-port { both | inbound | outbound }

By default, a port does not act as a source port for any remote source group.

 

Configuring source CPUs for a remote source group

A mirroring group can contain multiple source CPUs.

To configure source CPUs for a remote source group:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure source CPUs for the specified remote source group.

·         In standalone mode:
mirroring-group group-id mirroring-cpu slot slot-number-list { both | inbound | outbound }

·         In IRF mode:
mirroring-group group-id mirroring-cpu chassis chassis-number slot slot-number-list { both | inbound | outbound }

By default, no source CPU is configured for a remote source group.
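
For example, in IRF mode, to mirror packets processed by the CPU of the card in slot 2 of member chassis 1 (a sketch; the group, chassis, and slot numbers are assumptions):

<Device> system-view

[Device] mirroring-group 1 remote-source

[Device] mirroring-group 1 mirroring-cpu chassis 1 slot 2 both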

 

Configuring the egress port for a remote source group

To configure the egress port for a remote source group, use one of the following tasks:

·          Configure the egress port for the remote source group in system view.

·          Assign a port to the remote source group as the egress port in interface view.

When you configure the egress port for a remote source group, follow these restrictions and guidelines:

·          Disable the following features on the egress port:

○  Spanning tree.

○  IGMP snooping.

○  Static ARP.

○  MAC address learning.

·          A mirroring group contains only one egress port.

·          A port of an existing mirroring group cannot be configured as an egress port.

Configuring the egress port for a remote source group in system view

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the egress port for the specified remote source group.

mirroring-group group-id monitor-egress interface-type interface-number

By default, no egress port is configured for a remote source group.

 

Configuring the egress port for a remote source group in interface view

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter interface view.

interface interface-type interface-number

N/A

3.       Configure the port as the egress port for the specified remote source group.

mirroring-group group-id monitor-egress

By default, a port does not act as the egress port for any remote source group.
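
A minimal sketch of the interface-view method, with the spanning tree feature disabled on the egress port as recommended (the group and interface numbers are assumptions):

<DeviceA> system-view

[DeviceA] mirroring-group 1 remote-source

[DeviceA] interface fortygige 1/0/2

[DeviceA-FortyGigE1/0/2] mirroring-group 1 monitor-egress

[DeviceA-FortyGigE1/0/2] undo stp enable

[DeviceA-FortyGigE1/0/2] quit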

 

Configuring the remote probe VLAN for a remote source group

When you configure the remote probe VLAN for a remote source group, follow these restrictions and guidelines:

·          Only an existing static VLAN can be configured as a remote probe VLAN.

·          When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring exclusively.

·          The remote mirroring groups on the source device and destination device must use the same remote probe VLAN.

To configure the remote probe VLAN for a remote source group:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure the remote probe VLAN for the specified remote source group.

mirroring-group group-id remote-probe vlan vlan-id

By default, no remote probe VLAN is configured for a remote source group.

 

Displaying and maintaining port mirroring

Execute display commands in any view.

 

Task

Command

Display mirroring group information.

display mirroring-group { group-id | all | local | remote-destination | remote-source }

 

Port mirroring configuration examples

Local port mirroring configuration example (in source port mode)

Network requirements

As shown in Figure 29, configure local port mirroring in source port mode to enable the server to monitor the bidirectional traffic of the Marketing department and the Technical department.

Figure 29 Network diagram

 

Configuration procedure

# Create local mirroring group 1.

<Device> system-view

[Device] mirroring-group 1 local

# Configure FortyGigE 1/0/1 and FortyGigE 1/0/2 as source ports for local mirroring group 1.

[Device] mirroring-group 1 mirroring-port fortygige 1/0/1 fortygige 1/0/2 both

# Configure FortyGigE 1/0/3 as the monitor port for local mirroring group 1.

[Device] mirroring-group 1 monitor-port fortygige 1/0/3

# Disable the spanning tree feature on the monitor port FortyGigE 1/0/3.

[Device] interface fortygige 1/0/3

[Device-FortyGigE1/0/3] undo stp enable

[Device-FortyGigE1/0/3] quit

Verifying the configuration

# Display information about all mirroring groups.

[Device] display mirroring-group all

Mirroring group 1:

    Type: Local

    Status: Active

    Mirroring port:

        FortyGigE1/0/1  Both

        FortyGigE1/0/2  Both

    Monitor port: FortyGigE1/0/3

Local port mirroring configuration example (in source CPU mode)

Network requirements

As shown in Figure 30, FortyGigE 1/0/1 and FortyGigE 1/0/2 are located on the card in slot 1.

Configure local port mirroring in source CPU mode to enable the server to monitor all packets matching the following criteria:

·          Received and sent by the Marketing department and the Technical department.

·          Processed by the CPU of the card in slot 1 of the device.

Figure 30 Network diagram

 

Configuration procedure

# Create local mirroring group 1.

<Device> system-view

[Device] mirroring-group 1 local

# Configure the CPU of the card in slot 1 of the device as a source CPU for local mirroring group 1.

[Device] mirroring-group 1 mirroring-cpu slot 1 both

# Configure FortyGigE 1/0/3 as the monitor port for local mirroring group 1.

[Device] mirroring-group 1 monitor-port fortygige 1/0/3

# Disable the spanning tree feature on the monitor port FortyGigE 1/0/3.

[Device] interface fortygige 1/0/3

[Device-FortyGigE1/0/3] undo stp enable

[Device-FortyGigE1/0/3] quit

Verifying the configuration

# Display information about all mirroring groups.

[Device] display mirroring-group all

Mirroring group 1:

    Type: Local

    Status: Active

    Mirroring CPU:

        Slot 1  Both

    Monitor port: FortyGigE1/0/3

Local port mirroring with multiple monitor ports configuration example

Network requirements

As shown in Figure 31, configure port mirroring to enable all data monitoring devices (Server A, Server B, and Server C) to monitor the bidirectional traffic of the three departments.

Figure 31 Network diagram

 

Configuration procedure

# Create remote source group 1.

<DeviceA> system-view

[DeviceA] mirroring-group 1 remote-source

# Configure FortyGigE 1/0/1 through FortyGigE 1/0/3 as source ports of remote source group 1.

[DeviceA] mirroring-group 1 mirroring-port fortygige 1/0/1 to fortygige 1/0/3 both

# Configure an unused port (FortyGigE 1/0/5, for example) of Device A as the reflector port of remote source group 1.

[DeviceA] mirroring-group 1 reflector-port fortygige 1/0/5

# Create VLAN 10 and assign FortyGigE 1/0/11 through FortyGigE 1/0/13 to VLAN 10.

[DeviceA] vlan 10

[DeviceA-vlan10] port fortygige 1/0/11 to fortygige 1/0/13

[DeviceA-vlan10] quit

# Configure VLAN 10 as the remote probe VLAN of remote source group 1.

[DeviceA] mirroring-group 1 remote-probe vlan 10

Layer 2 remote port mirroring configuration example

Network requirements

As shown in Figure 32, configure Layer 2 remote port mirroring to enable the server to monitor the outbound traffic from the Marketing department.

Figure 32 Network diagram

 

Configuration procedure

1.        Configure Device C (the destination device):

# Configure FortyGigE 1/0/1 as a trunk port, and assign the port to VLAN 2.

<DeviceC> system-view

[DeviceC] interface fortygige 1/0/1

[DeviceC-FortyGigE1/0/1] port link-type trunk

[DeviceC-FortyGigE1/0/1] port trunk permit vlan 2

[DeviceC-FortyGigE1/0/1] quit

# Create a remote destination group.

[DeviceC] mirroring-group 2 remote-destination

# Create VLAN 2.

[DeviceC] vlan 2

# Disable MAC address learning for VLAN 2.

[DeviceC-vlan2] undo mac-address mac-learning enable

[DeviceC-vlan2] quit

# Configure VLAN 2 as the remote probe VLAN for the mirroring group.

[DeviceC] mirroring-group 2 remote-probe vlan 2

# Configure FortyGigE 1/0/2 as the monitor port for the mirroring group.

[DeviceC] interface fortygige 1/0/2

[DeviceC-FortyGigE1/0/2] mirroring-group 2 monitor-port

# Disable the spanning tree feature on FortyGigE 1/0/2.

[DeviceC-FortyGigE1/0/2] undo stp enable

# Assign FortyGigE 1/0/2 to VLAN 2.

[DeviceC-FortyGigE1/0/2] port access vlan 2

[DeviceC-FortyGigE1/0/2] quit

2.        Configure Device B (the intermediate device):

# Create VLAN 2.

<DeviceB> system-view

[DeviceB] vlan 2

# Disable MAC address learning for VLAN 2.

[DeviceB-vlan2] undo mac-address mac-learning enable

[DeviceB-vlan2] quit

# Configure FortyGigE 1/0/1 as a trunk port, and assign the port to VLAN 2.

[DeviceB] interface fortygige 1/0/1

[DeviceB-FortyGigE1/0/1] port link-type trunk

[DeviceB-FortyGigE1/0/1] port trunk permit vlan 2

[DeviceB-FortyGigE1/0/1] quit

# Configure FortyGigE 1/0/2 as a trunk port, and assign the port to VLAN 2.

[DeviceB] interface fortygige 1/0/2

[DeviceB-FortyGigE1/0/2] port link-type trunk

[DeviceB-FortyGigE1/0/2] port trunk permit vlan 2

[DeviceB-FortyGigE1/0/2] quit

3.        Configure Device A (the source device):

# Create a remote source group.

<DeviceA> system-view

[DeviceA] mirroring-group 1 remote-source

# Create VLAN 2.

[DeviceA] vlan 2

# Disable MAC address learning for VLAN 2.

[DeviceA-vlan2] undo mac-address mac-learning enable

[DeviceA-vlan2] quit

# Configure VLAN 2 as the remote probe VLAN for the mirroring group.

[DeviceA] mirroring-group 1 remote-probe vlan 2

# Configure FortyGigE 1/0/1 as a source port for the mirroring group.

[DeviceA] mirroring-group 1 mirroring-port fortygige 1/0/1 outbound

# Configure FortyGigE 1/0/2 as the egress port for the mirroring group.

[DeviceA] mirroring-group 1 monitor-egress fortygige 1/0/2

# Configure FortyGigE 1/0/2 as a trunk port, and assign the port to VLAN 2.

[DeviceA] interface fortygige 1/0/2

[DeviceA-FortyGigE1/0/2] port link-type trunk

[DeviceA-FortyGigE1/0/2] port trunk permit vlan 2

# Disable the spanning tree feature on FortyGigE 1/0/2.

[DeviceA-FortyGigE1/0/2] undo stp enable

[DeviceA-FortyGigE1/0/2] quit

Verifying the configuration

# Display information about all mirroring groups on Device C.

[DeviceC] display mirroring-group all

Mirroring group 2:

    Type: Remote destination

    Status: Active

    Monitor port: FortyGigE1/0/2

    Remote probe VLAN: 2

# Display information about all mirroring groups on Device A.

[DeviceA] display mirroring-group all

Mirroring group 1:

    Type: Remote source

    Status: Active

    Mirroring port:

        FortyGigE1/0/1  Outbound

    Remote probe VLAN: 2

 


Configuring flow mirroring

The flow mirroring feature is available on both Layer 2 and Layer 3 Ethernet interfaces. The term "interface" in this chapter collectively refers to these two types of interfaces. You can use the port link-mode command to configure an Ethernet port as a Layer 2 or Layer 3 interface (see Layer 2—LAN Switching Configuration Guide).

Overview

Flow mirroring copies packets matching a class to a destination for packet analysis and monitoring. It is implemented through QoS policies.

To configure flow mirroring, perform the following tasks:

·          Define traffic classes and configure match criteria to classify packets to be mirrored. Flow mirroring allows you to flexibly classify packets to be analyzed by defining match criteria.

·          Configure traffic behaviors to mirror the matching packets to the specified destination.

You can configure an action to mirror the matching packets to one of the following destinations:

·          InterfaceThe matching packets are copied to an interface connecting to a data monitoring device. The data monitoring device analyzes the packets received on the interface.

·          CPUThe matching packets are copied to the CPU of the card where they are received, The CPU analyzes the packets or deliver the packets to upper layers.

For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS Configuration Guide.

Flow mirroring configuration task list

Tasks at a glance

(Required.) Configuring match criteria

(Required.) Configuring a traffic behavior

(Required.) Configuring a QoS policy

(Required.) Applying a QoS policy:

·         Applying a QoS policy to an interface

·         Applying a QoS policy to a VLAN

·         Applying a QoS policy globally

 

For more information about the following commands except the mirror-to command, see ACL and QoS Command Reference.

Configuring match criteria

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a class and enter class view.

traffic classifier tcl-name [ operator { and | or } ]

By default, no traffic class exists.

3.       Configure match criteria.

if-match match-criteria

By default, no match criterion is configured in a traffic class.

 

Configuring a traffic behavior

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a traffic behavior and enter traffic behavior view.

traffic behavior behavior-name

By default, no traffic behavior exists.

3.       Configure a mirroring action for the traffic behavior.

·         Mirror traffic to an interface:
mirror-to interface interface-type interface-number

·         Mirror traffic to a CPU:
mirror-to cpu

By default, no mirroring action is configured for a traffic behavior.

If you execute the mirror-to interface command for a traffic behavior multiple times, the most recent configuration takes effect.

When you configure flow mirroring to CPUs, the switch does not support applying QoS policies globally or to an interface or a VLAN for the outbound traffic.

4.       (Optional.) Display traffic behavior configuration.

display traffic behavior

Available in any view.

 

Configuring a QoS policy

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a QoS policy and enter QoS policy view.

qos policy policy-name

By default, no QoS policy exists.

3.       Associate a class with a traffic behavior in the QoS policy.

classifier tcl-name behavior behavior-name

By default, no traffic behavior is associated with a class.

4.       (Optional.) Display QoS policy configuration.

display qos policy

Available in any view.

 

Applying a QoS policy

Applying a QoS policy to an interface

By applying a QoS policy to an interface, you can mirror the traffic in the specified direction of the interface. A policy can be applied to multiple interfaces. In one direction (inbound or outbound) of an interface, only one policy can be applied.

To apply a QoS policy to an interface:

 

Step

Command

1.       Enter system view.

system-view

2.       Enter interface view.

interface interface-type interface-number

3.       Apply a policy to the interface.

qos apply policy policy-name { inbound | outbound }

 

Applying a QoS policy to a VLAN

You can apply a QoS policy to a VLAN to mirror the traffic in the specified direction on all ports in the VLAN.

To apply the QoS policy to a VLAN:

 

Step

Command

1.       Enter system view.

system-view

2.       Apply a QoS policy to a VLAN.

qos vlan-policy policy-name vlan vlan-id-list { inbound | outbound }

 

Applying a QoS policy globally

You can apply a QoS policy globally to mirror the traffic in the specified direction on all ports.

To apply a QoS policy globally:

 

Step

Command

1.       Enter system view.

system-view

2.       Apply a QoS policy globally.

qos apply policy policy-name global { inbound | outbound }

 

Flow mirroring configuration example

Network requirements

As shown in Figure 33, different departments use IP addresses on different subnets.

Configure flow mirroring so that the server can monitor the following traffic:

·          Traffic that the Technical department sends to access the Internet.

·          IP traffic that the Technical department sends to the Marketing department during working hours (8:00 to 18:00) on weekdays.

Figure 33 Network diagram

 

Configuration procedure

# Create a working hour range work, in which the working hour is from 8:00 to 18:00 on weekdays.

<DeviceA> system-view

[DeviceA] time-range work 8:00 to 18:00 working-day

# Create ACL 3000 to match packets that the Technical department sends to access the Internet, and IP packets that it sends to the Marketing department during working hours.

[DeviceA] acl number 3000

[DeviceA-acl-adv-3000] rule permit tcp source 192.168.2.0 0.0.0.255 destination-port eq www

[DeviceA-acl-adv-3000] rule permit ip source 192.168.2.0 0.0.0.255 destination 192.168.1.0 0.0.0.255 time-range work

[DeviceA-acl-adv-3000] quit

# Create traffic class tech_c, and configure the match criterion as ACL 3000.

[DeviceA] traffic classifier tech_c

[DeviceA-classifier-tech_c] if-match acl 3000

[DeviceA-classifier-tech_c] quit

# Create traffic behavior tech_b, and configure the action of mirroring traffic to port FortyGigE 1/0/3.

[DeviceA] traffic behavior tech_b

[DeviceA-behavior-tech_b] mirror-to interface fortygige 1/0/3

[DeviceA-behavior-tech_b] quit

# Create QoS policy tech_p, and associate traffic class tech_c with traffic behavior tech_b in the QoS policy.

[DeviceA] qos policy tech_p

[DeviceA-qospolicy-tech_p] classifier tech_c behavior tech_b

[DeviceA-qospolicy-tech_p] quit

# Apply QoS policy tech_p to the incoming packets of FortyGigE 1/0/4.

[DeviceA] interface fortygige 1/0/4

[DeviceA-FortyGigE1/0/4] qos apply policy tech_p inbound

[DeviceA-FortyGigE1/0/4] quit

Verifying the configuration

# Verify that you can monitor the following traffic through the server:

·          All traffic sent by the Technical department to access the Internet.

·          IP traffic that the Technical department sends to the Marketing department during working hours on weekdays.

(Details not shown.)


Configuring sFlow

Sampled Flow (sFlow) is a traffic monitoring technology.

As shown in Figure 34, the sFlow system involves an sFlow agent embedded in a device and a remote sFlow collector. The sFlow agent collects interface counter information and packet information and encapsulates the sampled information in sFlow packets. When the sFlow packet buffer is full, or the aging timer (fixed to 1 second) of sFlow packets expires, the sFlow agent sends the sFlow packets in UDP datagrams to the specified sFlow collector. The sFlow collector analyzes the information and displays the results.
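The buffer-or-timer send behavior described above can be sketched as follows. This is a minimal illustration in Python, not the agent's actual implementation; the class and method names (SflowExporter, add_sample) are hypothetical, and the send callback stands in for the UDP transmission to the collector.

```python
import time

class SflowExporter:
    """Illustrative sketch: buffer sampled records, and flush them as one
    sFlow datagram when the buffer fills or the 1-second aging timer of
    the oldest buffered record expires."""

    def __init__(self, max_buffer_bytes=1400, age_seconds=1.0, send=print):
        self.max_buffer_bytes = max_buffer_bytes  # stands in for datagram-size
        self.age_seconds = age_seconds            # fixed 1-second aging timer
        self.send = send                          # stand-in for the UDP send
        self.buffer = []
        self.buffered_bytes = 0
        self.oldest = None                        # arrival time of oldest record

    def add_sample(self, record: bytes, now=None):
        now = time.monotonic() if now is None else now
        if self.oldest is None:
            self.oldest = now
        self.buffer.append(record)
        self.buffered_bytes += len(record)
        # Flush when the buffer is full or the oldest record has aged out.
        if (self.buffered_bytes >= self.max_buffer_bytes
                or now - self.oldest >= self.age_seconds):
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(b"".join(self.buffer))  # one sFlow UDP datagram
        self.buffer, self.buffered_bytes, self.oldest = [], 0, None
```

Either trigger alone is enough to cause a flush, which is why a lightly loaded interface still exports its samples promptly.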

sFlow provides the following sampling mechanisms:

·          Flow sampling—Obtains packet information.

·          Counter sampling—Obtains interface counter information.

Figure 34 sFlow system

 

As a traffic monitoring technology, sFlow has the following advantages:

·          Supports traffic monitoring on Gigabit and higher-speed networks.

·          Provides good scalability to allow one sFlow collector to monitor multiple sFlow agents.

·          Saves money by embedding the sFlow agent in a device, instead of using a dedicated sFlow agent device.

Protocols and standards

·          RFC 3176, InMon Corporation's sFlow: A Method for Monitoring Traffic in Switched and Routed Networks

·          sFlow.org, sFlow Version 5

sFlow configuration task list

Tasks at a glance

(Required.) Configuring the sFlow agent and sFlow collector information

Perform at least one of the following tasks:

·         Configuring flow sampling

·         Configuring counter sampling

 

Configuring the sFlow agent and sFlow collector information

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       (Optional.) Configure an IP address for the sFlow agent.

sflow agent ip ip-address

By default, no IP address is configured for the sFlow agent. The device periodically checks whether the sFlow agent has an IP address. If not, the device automatically selects an IPv4 address for the sFlow agent but does not save the IPv4 address in the configuration file.

NOTE:

·         As a best practice, manually configure an IP address for the sFlow agent.

·         Only one IP address can be configured for the sFlow agent on the device, and a newly configured IP address overwrites the existing one.

3.       Configure the sFlow collector information.

sflow collector collector-id [ vpn-instance vpn-instance-name ] ip ip-address [ port port-number | datagram-size size | time-out seconds | description text ] *

By default, no sFlow collector information is configured.

4.       (Optional.) Specify the source IP address of sFlow packets.

sflow source ip ip-address

By default, the source IP address is determined by routing.

 

Configuring flow sampling

Perform this task to configure flow sampling on an Ethernet interface. The sFlow agent samples packets on that interface according to the configured parameters, encapsulates them into sFlow packets, and sends them in UDP packets to the specified sFlow collector.

To configure flow sampling:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Ethernet interface view.

interface interface-type interface-number

N/A

3.       (Optional.) Set the flow sampling mode.

sflow sampling-mode { determine | random }

The default setting is random.

The switch does not support the determine mode.

4.       Enable flow sampling and set the sampling rate (the number of packets out of which one packet is sampled) on the interface.

sflow sampling-rate rate

By default, flow sampling is disabled and no packets are sampled.

5.       (Optional.) Set the maximum number of bytes of a packet (starting from the packet header) that flow sampling can copy.

sflow flow max-header length

The default setting is 128 bytes.

6.       Specify the sFlow collector for flow sampling.

sflow flow collector collector-id

By default, no sFlow collector is specified for flow sampling.

 
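The relationship between the sampling parameters above and the load they generate can be estimated with simple arithmetic. The following Python sketch is illustrative only; the function names are hypothetical:

```python
def expected_samples(total_packets: int, sampling_rate: int) -> float:
    """With `sflow sampling-rate N`, on average one packet is sampled
    out of every N packets seen on the interface."""
    return total_packets / sampling_rate

def copied_bytes(packet_len: int, max_header: int = 128) -> int:
    """`sflow flow max-header` caps how many bytes (starting from the
    packet header) are copied from each sampled packet."""
    return min(packet_len, max_header)
```

For example, at the sampling rate of 4000 used later in this chapter, one million packets yield about 250 samples, and a 1500-byte packet contributes at most the default 128 copied bytes.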

Configuring counter sampling

Perform this task to configure counter sampling on an Ethernet interface. The sFlow agent periodically collects the counter information on that interface, encapsulates the information into sFlow packets, and sends them in UDP packets to the specified sFlow collector.

To configure counter sampling:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter Ethernet interface view.

interface interface-type interface-number

N/A

3.       Enable counter sampling and set the counter sampling interval.

sflow counter interval interval-time

By default, counter sampling is disabled.

4.       Specify the sFlow collector for counter sampling.

sflow counter collector collector-id

By default, no sFlow collector is specified for counter sampling.

 

Displaying and maintaining sFlow

Execute display commands in any view.

 

Task

Command

Display sFlow configuration.

display sflow

 

sFlow configuration example

Network requirements

As shown in Figure 35, perform the following tasks:

·          Configure flow sampling in random mode and counter sampling on FortyGigE 1/0/1 of the device to monitor traffic on the port.

·          Configure the device to send sampled information in sFlow packets through FortyGigE 1/0/3 to the sFlow collector.

Figure 35 Network diagram

 

Configuration procedure

1.        Configure the IP addresses and subnet masks for interfaces, as shown in Figure 35. (Details not shown.)

2.        Configure the sFlow agent and configure information about the sFlow collector:

# Configure the IP address for the sFlow agent.

<Sysname> system-view

[Sysname] sflow agent ip 3.3.3.1

# Configure information about the sFlow collector: specify the sFlow collector ID as 1, IP address as 3.3.3.2, port number as 6343 (default), and description as netserver.

[Sysname] sflow collector 1 ip 3.3.3.2 description netserver

3.        Configure counter sampling:

# Enable counter sampling and set the counter sampling interval to 120 seconds on FortyGigE 1/0/1.

[Sysname] interface fortygige 1/0/1

[Sysname-FortyGigE1/0/1] sflow counter interval 120

# Specify sFlow collector 1 for counter sampling.

[Sysname-FortyGigE1/0/1] sflow counter collector 1

4.        Configure flow sampling:

# Enable flow sampling and set the flow sampling mode to random and sampling interval to 4000.

[Sysname-FortyGigE1/0/1] sflow sampling-mode random

[Sysname-FortyGigE1/0/1] sflow sampling-rate 4000

# Specify sFlow collector 1 for flow sampling.

[Sysname-FortyGigE1/0/1] sflow flow collector 1

Verifying the configuration

# Verify that sFlow is active on FortyGigE 1/0/1 and is operating correctly.

[Sysname-FortyGigE1/0/1] display sflow

sFlow datagram version: 5

Global information:

Agent IP: 3.3.3.1(CLI)

Source address:

Collector information:

ID    IP              Port  Aging      Size VPN-instance Description

1     3.3.3.2         6343  N/A        1400              netserver

Port information:                                                              

Interface      CID   Interval(s) FID   MaxHLen Rate     Mode      Status

FGE1/1         1     120         1     128     4000     Random    Active

Troubleshooting sFlow configuration

The remote sFlow collector cannot receive sFlow packets

Symptom

The remote sFlow collector cannot receive sFlow packets.

Analysis

The possible reasons include:

·          The sFlow collector is not specified.

·          sFlow is not configured on the interface.

·          The IP address of the sFlow collector specified on the sFlow agent is different from that of the remote sFlow collector.

·          No IP address is configured for the Layer 3 interface that sends sFlow packets.

·          An IP address is configured for the Layer 3 interface that sends sFlow packets. However, the UDP datagrams with this source IP address cannot reach the sFlow collector.

·          The physical link between the device and the sFlow collector fails.

·          The length of an sFlow packet is less than the sum of the following two values:

-  The length of the sFlow packet header.

-  The number of bytes that flow sampling can copy per packet.

·          The sFlow collector is bound to a non-existent VPN.

Solution

To resolve the problem:

1.        Verify that sFlow is correctly configured by using the display sflow command.

2.        Verify that a correct IP address is configured for the device to communicate with the sFlow collector.

3.        Verify that the physical link between the device and the sFlow collector is up.

4.        Verify that the length of an sFlow packet is greater than the sum of the following two values:

-  The length of the sFlow packet header.

-  The number of bytes (as a best practice, use the default) that flow sampling can copy per packet.

5.        Verify that the VPN bound to the sFlow collector already exists.
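The length condition in step 4 can be expressed as a simple check. This Python sketch is illustrative; header_len stands for the sFlow packet header length, which depends on the datagram format, and the example values are hypothetical:

```python
def datagram_can_carry_sample(datagram_size: int,
                              header_len: int,
                              copied_bytes: int) -> bool:
    """Return True when an sFlow datagram of the given size can hold
    the sFlow packet header plus the bytes copied from one sampled
    packet (the condition checked in step 4 of the solution)."""
    return datagram_size > header_len + copied_bytes
```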


Configuring EAA

Overview

Embedded Automation Architecture (EAA) is a monitoring framework that enables you to self-define monitored events and actions to take in response to an event. It allows you to create monitor policies by using the CLI or Tcl scripts.

EAA framework

EAA framework includes a set of event sources, a set of event monitors, a real-time event manager (RTM), and a set of user-defined monitor policies, as shown in Figure 36.

Figure 36 EAA framework

 

Event sources

Event sources are software or hardware modules that trigger events (see Figure 36).

For example, the CLI module triggers an event when you enter a command. The Syslog module (the information center) triggers an event when it receives a log message.

Event monitors

EAA creates one event monitor to monitor the system for the event specified in each monitor policy. An event monitor notifies the RTM to run the monitor policy when the monitored event occurs.

RTM

RTM manages the creation, state machine, and execution of monitor policies.

EAA monitor policies

A monitor policy specifies the event to monitor and actions to take when the event occurs.

You can configure EAA monitor policies by using the CLI or Tcl.

A monitor policy contains the following elements:

·          One event.

·          A minimum of one action.

·          A minimum of one user role.

·          One running time setting.

For more information about these elements, see "Elements in a monitor policy."

Elements in a monitor policy

Event

Table 16 shows types of events that EAA can monitor.

Table 16 Monitored events

Event type

Description

CLI

CLI event occurs in response to monitored operations performed at the CLI. For example, a command is entered, a question mark (?) is entered, or the Tab key is pressed to complete a command.

Syslog

Syslog event occurs when the information center receives the monitored log within a specific period.

NOTE:

The log that is generated by the EAA RTM does not trigger the monitor policy to run.

Process

Process event occurs in response to a state change of the monitored process (such as an exception, shutdown, start, or restart). Both manual and automatic state changes can cause the event to occur.

Hotplug

Hotplug event occurs when the monitored card is inserted or removed while the device is operating.

Interface

Each interface event is associated with two user-defined thresholds: start and restart.

An interface event occurs when the monitored interface traffic statistic crosses the start threshold in the following situations:

·         The statistic crosses the start threshold for the first time.

·         The statistic crosses the start threshold each time after it crosses the restart threshold.

SNMP

Each SNMP event is associated with two user-defined thresholds: start and restart.

SNMP event occurs when the monitored MIB variable's value crosses the start threshold in the following situations:

·         The monitored variable's value crosses the start threshold for the first time.

·         The monitored variable's value crosses the start threshold each time after it crosses the restart threshold.

SNMP-Notification

SNMP-Notification event occurs when the monitored MIB variable's value in an SNMP notification matches the specified condition. For example, the broadcast traffic rate on an Ethernet interface reaches or exceeds 30%.

Track

Track event occurs when the state of the track entry changes from positive to negative. If you specify multiple track entries for a policy, EAA triggers the policy only when the state of all the track entries changes from positive to negative.

If you set a suppress time for a policy, the timer starts when the policy is triggered. The system does not process the messages that report the track entry positive-to-negative state change until the timer times out.

 
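The start/restart threshold behavior shared by interface and SNMP events amounts to a hysteresis check. The following Python sketch is illustrative only; it assumes a greater-than-or-equal start comparison and a less-than-or-equal restart comparison, whereas on the device the comparison operators are configurable (the start-op and restart-op parameters):

```python
class ThresholdTrigger:
    """Sketch of the two-threshold behavior: the event fires the first
    time the monitored value crosses the start threshold, then re-arms
    only after the value crosses the restart threshold."""

    def __init__(self, start: float, restart: float):
        self.start = start
        self.restart = restart
        self.armed = True  # ready to fire on the next start-crossing

    def update(self, value: float) -> bool:
        """Feed a new sample of the statistic; return True when the
        event fires."""
        if self.armed and value >= self.start:
            self.armed = False   # fired; wait for a restart-crossing
            return True
        if not self.armed and value <= self.restart:
            self.armed = True    # restart threshold crossed; re-arm
        return False
```

This is why a statistic hovering just above the start threshold triggers the event only once, rather than on every polling interval.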

Action

You can create a series of order-dependent actions to take in response to the event specified in the monitor policy.

The following are available actions:

·          Executing a command.

·          Sending a log.

·          Enabling an active/standby switchover.

·          Executing a reboot without saving the running configuration.

User role

For EAA to execute an action in a monitor policy, you must assign the policy the user role that has access to the action-specific commands and resources. If EAA lacks access to an action-specific command or resource, EAA does not perform the action and all the subsequent actions.

For example, a monitor policy has four actions numbered from 1 to 4. The policy has user roles that are required for performing actions 1, 3, and 4. However, it does not have the user role required for performing action 2. When the policy is triggered, EAA executes only action 1.
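The ordering and permission rules above can be modeled as follows. This Python sketch is a simplified illustration, not the EAA implementation; the action table and role names are hypothetical:

```python
def run_policy(actions: dict, authorized: set) -> list:
    """Sketch: actions run in ascending action-ID order; execution
    stops at the first action whose required role is not among the
    policy's user roles (that action and all later ones are skipped)."""
    executed = []
    for action_id in sorted(actions):
        name, required_role = actions[action_id]
        if required_role not in authorized:
            break  # EAA skips this action and all subsequent actions
        executed.append(name)
    return executed
```

Reproducing the four-action example: if action 2 requires a role the policy lacks, only action 1 runs.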

For more information about user roles, see RBAC in Fundamentals Configuration Guide.

Runtime

Policy runtime limits the amount of time that the monitor policy can run from the time it is triggered. This setting prevents system resources from being occupied by incorrectly defined policies.

EAA environment variables

EAA environment variables decouple the configuration of action arguments from the monitor policy so you can modify a policy easily.

An EAA environment variable is defined as a <variable_name variable_value> pair and can be used in different policies. When you define an action, you can enter a variable name with a leading dollar sign ($variable_name). EAA will replace the variable name with the variable value when it performs the action.

To change the value for an action argument, modify the value specified in the variable pair instead of editing each affected monitor policy.

EAA environment variables include system-defined variables and user-defined variables.
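The $variable_name substitution can be pictured as a simple text expansion over an action argument. The following Python sketch is illustrative only and is not how EAA is implemented:

```python
import re

def expand(action_arg: str, env: dict) -> str:
    """Replace each $variable_name in an action argument with its
    value from the environment-variable table; names that are not
    defined are left untouched."""
    def repl(match):
        name = match.group(1)
        return str(env.get(name, match.group(0)))
    # Variable names consist of letters, digits, and underscores.
    return re.sub(r"\$([A-Za-z0-9_]+)", repl, action_arg)
```

For instance, with the variable pair used later in this chapter, "ip address $loopback0IP 24" expands to "ip address 1.1.1.1 24"; changing the variable value changes every policy that references it.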

System-defined variables

System-defined variables are provided by default, and they cannot be created, deleted, or modified by users. System-defined variable names start with an underscore (_) sign. The variable values are set automatically depending on the event setting in the policy that uses the variables.

System-defined variables include the following types:

·          Public variable—Available for any events.

·          Event-specific variable—Available only for a type of event.

Table 17 shows all system-defined variables.

Table 17 System-defined EAA environment variables by event type

Variable name

Description

Any event:

 

_event_id

Event ID.

_event_type

Event type.

_event_type_string

Event type description.

_event_time

Time when the event occurs.

_event_severity

Severity level of an event.

CLI:

 

_cmd

Commands that are matched.

Syslog:

 

_syslog_pattern

Log message content.

Hotplug:

 

_slot

ID of the slot where a hot swap event occurs.

Interface:

 

_ifname

Interface name.

SNMP:

 

_oid

OID of the MIB variable where an SNMP operation is performed.

_oid_value

Value of the MIB variable.

SNMP-Notification:

 

_oid

OID that is included in the SNMP notification.

Process:

 

_process_name

Process name.

 

User-defined variables

You can use user-defined variables for all types of events.

User-defined variable names can contain digits, letters, and underscores (_), except that a variable name cannot start with an underscore.
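The naming rule can be captured in a short validity check (an illustrative Python sketch; the function name is hypothetical):

```python
import re

# Letters, digits, and underscores, but the first character must not
# be an underscore (leading underscores mark system-defined variables).
_VALID_USER_VAR = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_]*$")

def is_valid_user_variable(name: str) -> bool:
    """Return True when the name is acceptable for a user-defined
    EAA environment variable."""
    return bool(_VALID_USER_VAR.match(name))
```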

Feature and software version compatibility

The EAA feature is available in Release 1138P01 and later versions.

Configuring a user-defined EAA environment variable

Configure a user-defined EAA environment variable before you use it in an action.

To configure a user-defined EAA environment variable:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Configure a user-defined EAA environment variable.

rtm environment var-name var-value

By default, no user-defined environment variables exist.

The system provides the system-defined variables in Table 17.

 

Configuring a monitor policy

You can configure a monitor policy by using the CLI or Tcl.

Configuration restrictions and guidelines

When you configure monitor policies, follow these restrictions and guidelines:

·          Make sure the actions in different policies do not conflict. The execution result is unpredictable if policies with conflicting actions run concurrently.

·          You can assign the same policy name to a CLI-defined policy and a Tcl-defined policy. However, you cannot assign the same name to policies that are the same type.

·          The system executes the actions in a policy in ascending order of action IDs. When you add actions to a policy, you must make sure the execution order is correct.

Configuring a monitor policy from the CLI

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enter CLI-defined policy view.

rtm cli-policy policy-name

If the policy does not exist, this command creates the policy first.

3.       Configure an event in the policy.

·         Configure a CLI event:
event cli { async [ skip ] | sync } mode { execute | help | tab } pattern regular-exp

·         Configure a hotplug event in standalone mode:
event hotplug [ insert | remove ]
slot slot-number

·         Configure a hotplug event in IRF mode:
event hotplug [ insert | remove ]
chassis chassis-number slot slot-number

·         Configure an interface event:
event interface interface-type interface-number monitor-obj monitor-obj start-op start-op start-val start-val restart-op restart-op restart-val restart-val [ interval interval ]

·         Configure a process event in standalone mode:
event process { exception | restart | shutdown | start } [ name process-name [ instance instance-id ] ] [ slot slot-number ]

·         Configure a process event in IRF mode:
event process { exception | restart | shutdown | start } [ name process-name [ instance instance-id ] ] [ chassis chassis-number [ slot slot-number ] ]

·         Configure an SNMP event:
event snmp oid oid monitor-obj { get | next } start-op start-op start-val start-val restart-op restart-op restart-val restart-val [ interval interval ]

·         Configure an SNMP-Notification event:
event snmp-notification oid oid oid-val oid-val op op [ drop ]

·         Configure a Syslog event:
event syslog priority priority msg msg occurs times period period

·         Configure a track event:
event track track-list state negative [ suppress-time suppress-time ]

By default, a monitor policy does not contain an event.

You can configure only one event in a monitor policy. If the monitor policy already contains an event, the new event overrides the old event.

4.       Configure the actions to take when the event occurs.

·         Configure the action to execute a command:
action number cli command-line

·         Configure a reboot action in standalone mode:
action number reboot
[ slot slot-number ]

·         Configure a reboot action in IRF mode:
action number reboot [ chassis chassis-number [ slot slot-number ] ]

·         Configure a logging action:
action number syslog priority priority facility local-number msg msg-body

·         Configure an active/standby switchover action:
action number switchover

By default, a monitor policy does not contain any actions.

Repeat this step to add a maximum of 232 actions to the policy.

When you define an action, you can specify a value or specify a variable name in $variable_name format for an argument.

5.       (Optional.) Assign a user role to the policy.

user-role role-name

By default, a monitor policy contains user roles that its creator had at the time of policy creation.

A monitor policy supports a maximum of 64 valid user roles. User roles added after this limit is reached do not take effect.

6.       (Optional.) Configure the policy runtime.

running-time time

The default runtime is 20 seconds.

7.       Enable the policy.

commit

By default, CLI-defined policies are not enabled.

A CLI-defined policy can take effect only after you perform this step.

 

Configuring a monitor policy by using Tcl

Step

Command

Remarks

1.       Edit a Tcl script file (see Table 18).

N/A

The supported Tcl version is 8.5.8.

2.       Download the file to the device by using FTP or TFTP.

N/A

For more information about using FTP and TFTP, see Fundamentals Configuration Guide.

3.       Enter system view.

system-view

N/A

4.       Create a Tcl-defined policy and bind it to the Tcl script file.

rtm tcl-policy policy-name tcl-filename

By default, no Tcl policies exist.

Make sure the script file is saved on all MPUs. This practice ensures that the policy can run correctly after an active/standby or master/standby switchover occurs or the MPU where the script file resides fails or is removed.

This step enables the Tcl-defined policy.

To revise the Tcl script of a policy, you must suspend all monitor policies first, and then resume the policies after you finish revising the script. The system cannot execute a Tcl-defined policy if you edit its Tcl script without suspending policies.

 

Write a Tcl script in two lines for a monitor policy, as shown in Table 18.

Table 18 Tcl script requirements

Line

Content

Requirements

Line 1

Event, user roles, and policy runtime

This line must use the following format:

::comware::rtm::event_register eventname arg1 arg2 arg3 … user-role role-name1 [ user-role role-name2 … ] [ running-time running-time ]

The arg1 arg2 arg3 … arguments represent event matching rules. If an argument value contains spaces, enclose the value in double quotation marks (""), for example, "a b c".

Line 2

Actions

When you define an action, you can specify a value or specify a variable name in $variable_name format for an argument.

The following actions are available:

·         Standard Tcl commands.

·         EAA-specific Tcl commands.

·         Commands supported by the device.

 

Suspending monitor policies

This task suspends all CLI-defined and Tcl-defined monitor policies except for the policies that are running.

To suspend monitor policies:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Suspend monitor policies.

rtm scheduler suspend

To resume monitor policies, use the undo rtm scheduler suspend command.

 

Displaying and maintaining EAA settings

Execute display commands except for the display this command in any view.

 

Task

Command

Display user-defined EAA environment variables.

display rtm environment [ var-name ]

Display EAA monitor policies.

display rtm policy { active | registered [ verbose ] } [ policy-name ]

Display the running configuration of all CLI-defined monitor policies.

display current-configuration

Display the running configuration of a CLI-defined monitor policy in CLI-defined monitor policy view.

display this

 

EAA configuration examples

CLI event monitor policy configuration example

Network requirements

Configure a policy from the CLI to monitor the event that occurs when a question mark (?) is entered at a command line that contains letters or digits.

When the event occurs, the system executes the command and sends the log message "hello world" to the information center.

Configuration procedure

# Create CLI-defined policy test and enter its view.

<Sysname> system-view

[Sysname] rtm cli-policy test

# Add a CLI event that occurs when a question mark (?) is entered at any command line that contains letters and digits.

[Sysname-rtm-test] event cli async mode help pattern [a-zA-Z0-9]

# Add an action that sends the message "hello world" with a priority of 4 from the logging facility local3 when the event occurs.

[Sysname-rtm-test] action 0 syslog priority 4 facility local3 msg "hello world"

# Add an action that enters system view when the event occurs.

[Sysname-rtm-test] action 2 cli system-view

# Add an action that creates VLAN 2 when the event occurs.

[Sysname-rtm-test] action 3 cli vlan 2

# Set the policy runtime to 2000 seconds. The system stops executing the policy and displays an execution failure message if it fails to complete policy execution within 2000 seconds.

[Sysname-rtm-test] running-time 2000

# Specify the network-admin user role for executing the policy.

[Sysname-rtm-test] user-role network-admin

# Enable the policy.

[Sysname-rtm-test] commit

Verifying the configuration

# Display information about the policy.

[Sysname-rtm-test] display rtm policy registered

Total number: 1

Type  Event      TimeRegistered       PolicyName

CLI   CLI        Aug 29 14:56:50 2013 test

# Enable the information center to output log messages to the current monitoring terminal.

[Sysname-rtm-test] return

<Sysname> terminal monitor

# Enter a question mark (?) at a command line that contains a letter d. Verify that the system displays the "hello world" message and a policy successfully executed message on the terminal screen.

<Sysname> d?

  debugging

  delete

  diagnostic-logfile

  dir

  display

 

<Sysname>d%May  7 02:10:03:218 2013 Sysname RTM/4/RTM_ACTION: "hello world"

%May  7 02:10:04:176 2013 Sysname RTM/6/RTM_POLICY: CLI policy test is running successfully.

Track event monitor policy configuration example

Network requirements

As shown in Figure 37, Device A has established BGP sessions with Device D and Device E. Traffic from Device D and Device E to the Internet is forwarded through Device A.

Configure a CLI-defined EAA monitor policy on Device A to disconnect the sessions with Device D and Device E when FortyGigE 1/0/1 connected to Device C is down. In this way, traffic from Device D and Device E to the Internet can be forwarded through Device B.

Figure 37 Network diagram

 

Configuration procedures

# Display BGP peer information for Device A.

<Sysname> display bgp peer ipv4

 

 BGP local router ID: 1.1.1.1

 Local AS number: 100

 Total number of peers: 3                  Peers in established state: 3

 

  * - Dynamically created peer

  Peer                    AS  MsgRcvd  MsgSent OutQ PrefRcv Up/Down  State

 

  10.2.1.2                200       13       16    0       0 00:16:12 Established

  10.3.1.2                300       13       16    0       0 00:10:34 Established

  10.3.2.2                300       13       16    0       0 00:10:38 Established

# Create track entry 1 and associate it with the link state of FortyGigE 1/0/1.

<Sysname> system-view

[Sysname] track 1 interface fortygige 1/0/1

# Configure a CLI-defined EAA monitor policy so that the system automatically disables session establishment with Device D and Device E when FortyGigE 1/0/1 is down.

[Sysname] rtm cli-policy test

[Sysname-rtm-test] event track 1 state negative

[Sysname-rtm-test] action 0 cli system-view

[Sysname-rtm-test] action 1 cli bgp 100

[Sysname-rtm-test] action 2 cli peer 10.3.1.2 ignore

[Sysname-rtm-test] action 3 cli peer 10.3.2.2 ignore

[Sysname-rtm-test] user-role network-admin

[Sysname-rtm-test] commit

[Sysname-rtm-test] quit

Verifying the configuration

# Shut down FortyGigE 1/0/1.

[Sysname] interface fortygige 1/0/1

[Sysname-FortyGigE1/0/1] shutdown

# Display BGP peer information.

<Sysname> display bgp peer ipv4

 

 BGP local router ID: 1.1.1.1

 Local AS number: 100

 Total number of peers: 0                  Peers in established state: 0

 

  * - Dynamically created peer

  Peer                    AS  MsgRcvd  MsgSent OutQ PrefRcv Up/Down  State

The command output shows that Device A does not have any BGP peers.

CLI-defined policy with EAA environment variables configuration example

Network requirements

Define an environment variable to match the IP address 1.1.1.1.

Configure a policy from the CLI to monitor the event that occurs when a command line that contains loopback0 is executed. In the policy, use the environment variable for IP address assignment.

When the event occurs, the system performs the following tasks:

·          Creates the Loopback 0 interface.

·          Assigns 1.1.1.1/24 to the interface.

·          Sends the matching command line to the information center.

Configuration procedure

# Configure an EAA environment variable for IP address assignment. The variable name is loopback0IP, and the variable value is 1.1.1.1.

<Sysname> system-view

[Sysname] rtm environment loopback0IP 1.1.1.1

# Create the CLI-defined policy test and enter its view.

[Sysname] rtm cli-policy test

# Add a CLI event that occurs when a command line that contains loopback0 is executed.

[Sysname-rtm-test] event cli async mode execute pattern loopback0

# Add an action that enters system view when the event occurs.

[Sysname-rtm-test] action 0 cli system-view

# Add an action that creates the interface Loopback 0 and enters loopback interface view.

[Sysname-rtm-test] action 1 cli interface loopback 0

# Add an action that assigns the IP address 1.1.1.1 to Loopback 0. The loopback0IP variable is used in the action for IP address assignment.

[Sysname-rtm-test] action 2 cli ip address $loopback0IP 24

# Add an action that sends the matching loopback0 command with a priority of 0 from the logging facility local7 when the event occurs.

[Sysname-rtm-test] action 3 syslog priority 0 facility local7 msg $_cmd

# Specify the network-admin user role for executing the policy.

[Sysname-rtm-test] user-role network-admin

# Enable the policy.

[Sysname-rtm-test] commit

[Sysname-rtm-test] return

<Sysname>

Verifying the configuration

# Enable the information center to output log messages to the current monitoring terminal.

<Sysname> terminal monitor

# Execute the loopback0 command. Verify that the system displays the loopback0 message and a policy successfully executed message on the terminal screen.

<Sysname> loopback0

<Sysname>

%Jan  3 09:46:10:592 2014 Sysname RTM/0/RTM_ACTION: -MDC=1; loopback0

%Jan  3 09:46:10:613 2014 Sysname RTM/6/RTM_POLICY: -MDC=1; CLI policy test is running successfully.

# Verify that Loopback 0 has been created and assigned the IP address 1.1.1.1.

<Sysname> display interface loopback brief

Brief information on interfaces in route mode:

Link: ADM - administratively down; Stby - standby

Protocol: (s) - spoofing

Interface            Link Protocol Primary IP         Description

Loop0                UP   UP(s)    1.1.1.1

 

<Sysname>

Tcl-defined policy configuration example

Network requirements

As shown in Figure 38, use Tcl to create a monitor policy on the device. This policy must meet the following requirements:

·          EAA sends the log message "rtm_tcl_test is running" when a command line that contains the string display this is entered.

·          The system executes the command only after it executes the policy successfully.

Figure 38 Network diagram

Configuration procedure

# Edit a Tcl script file (rtm_tcl_test.tcl, in this example) for EAA to send the message "rtm_tcl_test is running" when a command line that contains the string display this is executed.

::comware::rtm::event_register cli sync mode execute pattern display this user-role network-admin

::comware::rtm::action syslog priority 1 facility local4 msg rtm_tcl_test is running

# Download the Tcl script file from the TFTP server at 1.2.1.1.

<Sysname> tftp 1.2.1.1 get rtm_tcl_test.tcl

# Create Tcl-defined policy test and bind it to the Tcl script file.

<Sysname> system-view

[Sysname] rtm tcl-policy test rtm_tcl_test.tcl

[Sysname] quit

Verifying the configuration

# Display information about the policy.

<Sysname> display rtm policy registered

Total number: 1

Type  Event      TimeRegistered       PolicyName

TCL   CLI        Aug 29 14:54:50 2013 test

# Enable the information center to output log messages to the current monitoring terminal.

<Sysname> terminal monitor

# Execute the display this command. Verify that the system displays the rtm_tcl_test is running message and a message indicating that the policy is executed successfully.

<Sysname> display this

#

return

<Sysname>%Jun  4 15:02:30:354 2013 Sysname RTM/1/RTM_ACTION: -MDC=1; rtm_tcl_test is running

%Jun  4 15:02:30:382 2013 Sysname RTM/6/RTM_POLICY: -MDC=1; TCL policy test is running successfully.


Configuring NQA

Overview

Network quality analyzer (NQA) allows you to measure network performance, verify the service levels for IP services and applications, and troubleshoot network problems. It provides the following types of operations:

·          ICMP echo.

·          ICMP jitter.

·          DHCP.

·          DLSw.

·          DNS.

·          FTP.

·          HTTP.

·          Path jitter.

·          SNMP.

·          TCP.

·          UDP echo.

·          UDP jitter.

·          UDP tracert.

·          Voice.

As shown in Figure 39, the NQA source device (NQA client) sends data to the NQA destination device by simulating IP services and applications to measure network performance. The obtained performance metrics include the one-way latency, jitter, packet loss, voice quality, application performance, and server response time.

All types of NQA operations require the NQA client, but only the TCP, UDP echo, UDP jitter, and voice operations require the NQA server. NQA operations for services that the destination device already provides, such as FTP, do not require the NQA server.

You can configure the NQA server to listen and respond to specific IP addresses and ports to meet various test needs.

Figure 39 Network diagram

 

NQA operation

The following describes how NQA performs different types of operations:

·          A TCP or DLSw operation sets up a connection.

·          An ICMP jitter, UDP jitter, or voice operation sends a number of probe packets. The number of probe packets is set by using the probe packet-number command.

·          An FTP operation uploads or downloads a file.

·          An HTTP operation gets a Web page.

·          A DHCP operation gets an IP address through DHCP.

·          A DNS operation translates a domain name to an IP address.

·          An ICMP echo operation sends an ICMP echo request.

·          A UDP echo operation sends a UDP packet.

·          An SNMP operation sends one SNMPv1 packet, one SNMPv2c packet, and one SNMPv3 packet.

·          A path jitter operation is accomplished in the following steps:

a.    The operation uses tracert to obtain the path from the NQA client to the destination. A maximum of 64 hops can be detected.

b.    The NQA client sends ICMP echo requests to each hop along the path. The number of ICMP echo requests is set by using the probe packet-number command.

·          A UDP tracert operation determines the routing path from the source to the destination. The number of the probe packets sent to each hop is set by using the probe count command.

Collaboration

NQA can collaborate with the Track module to notify application modules of state or performance changes so that the application modules can take predefined actions.

Figure 40 Collaboration

 

The following describes how a static route destined for 192.168.0.88 is monitored through collaboration:

1.        NQA monitors the reachability to 192.168.0.88.

2.        When 192.168.0.88 becomes unreachable, NQA notifies the Track module of the change.

3.        The Track module notifies the static routing module of the state change.

4.        The static routing module sets the static route to invalid according to a predefined action.

For more information about collaboration, see High Availability Configuration Guide.
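The collaboration described above can be sketched from the CLI. In this hedged example, an ICMP echo operation monitors 192.168.0.88, a track entry is associated with the operation, and a static route is bound to the track entry. The reaction, track, and static route command syntax and all addresses are illustrative assumptions, not taken from this guide:

<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] destination ip 192.168.0.88
[Sysname-nqa-admin-test-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 3 action-type trigger-only
[Sysname-nqa-admin-test-icmp-echo] quit
[Sysname-nqa-admin-test] quit
[Sysname] track 1 nqa entry admin test reaction 1
[Sysname] ip route-static 192.168.0.88 32 10.1.1.2 track 1

When consecutive probes fail, the track entry changes state and the static routing module invalidates the route.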

Threshold monitoring

Threshold monitoring enables the NQA client to take a predefined action when the NQA operation performance metrics violate the specified thresholds.

Table 19 describes the relationships between performance metrics and NQA operation types.

Table 19 Performance metrics and NQA operation types

Performance metric

NQA operation types that can gather the metric

Probe duration

All NQA operation types except UDP jitter, UDP tracert, path jitter, and voice

Number of probe failures

All NQA operation types except UDP jitter, UDP tracert, path jitter, and voice

Round-trip time

ICMP jitter, UDP jitter, and voice

Number of discarded packets

ICMP jitter, UDP jitter, and voice

One-way jitter (source-to-destination or destination-to-source)

ICMP jitter, UDP jitter, and voice

One-way delay (source-to-destination or destination-to-source)

ICMP jitter, UDP jitter, and voice

Calculated Planning Impairment Factor (ICPIF) (see "Configuring the voice operation")

Voice

Mean Opinion Scores (MOS) (see "Configuring the voice operation")

Voice

 

NQA configuration task list

Tasks at a glance

Remarks

Configuring the NQA server

Required for TCP, UDP echo, UDP jitter, and voice operations.

(Required.) Enabling the NQA client

N/A

(Required.) Perform at least one of the following tasks:

·         Configuring NQA operations on the NQA client

·         Configuring NQA templates on the NQA client

When you configure an NQA template to analyze network performance, the feature that uses the template performs the NQA operation.

 

Configuring the NQA server

To perform TCP, UDP echo, UDP jitter, and voice operations, you must enable the NQA server on the destination device. The NQA server listens and responds to requests on the specified IP addresses and ports.

You can configure multiple TCP or UDP listening services on an NQA server, where each corresponds to a specific IP address and port number. The IP address and port number for a listening service must be unique on the NQA server and match the configuration on the NQA client.

To configure the NQA server:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the NQA server.

nqa server enable

By default, the NQA server is disabled.

3.       Configure a TCP or UDP listening service.

·         TCP listening service:
nqa server tcp-connect
ip-address port-number [ vpn-instance vpn-instance-name ] [ tos tos ]

·         UDP listening service:
nqa server udp-echo
ip-address port-number [ vpn-instance vpn-instance-name ] [ tos tos ]

The default ToS value is 0.

You can set the ToS value in the IP header of reply packets sent by the NQA server.
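For example, the following commands enable the NQA server and configure one TCP and one UDP listening service. The IP address and port numbers are illustrative and must match the configuration on the NQA client:

<Sysname> system-view
[Sysname] nqa server enable
[Sysname] nqa server tcp-connect 10.2.2.2 9000
[Sysname] nqa server udp-echo 10.2.2.2 9001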

 

Enabling the NQA client

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable the NQA client.

nqa agent enable

By default, the NQA client is enabled.

The NQA client configuration takes effect after you enable the NQA client.

 

Configuring NQA operations on the NQA client

NQA operation configuration task list

Tasks at a glance

(Required.) Perform at least one of the following tasks:

·         Configuring the ICMP echo operation

·         Configuring the ICMP jitter operation

·         Configuring the DHCP operation

·         Configuring the DNS operation

·         Configuring the FTP operation

·         Configuring the HTTP operation

·         Configuring the UDP jitter operation

·         Configuring the SNMP operation

·         Configuring the TCP operation

·         Configuring the UDP echo operation

·         Configuring the UDP tracert operation

·         Configuring the voice operation

·         Configuring the DLSw operation

·         Configuring the path jitter operation

(Optional.) Configuring optional parameters for the NQA operation

(Optional.) Configuring the collaboration feature

(Optional.) Configuring threshold monitoring

(Optional.) Configuring the NQA statistics collection feature

(Optional.) Configuring the saving of NQA history records

(Required.) Scheduling the NQA operation on the NQA client

 

Configuring the ICMP echo operation

The ICMP echo operation measures the reachability of a destination device. It has the same function as the ping command, but provides more output information. In addition, if multiple paths exist between the source and destination devices, you can specify the next hop for the ICMP echo operation.

To configure the ICMP echo operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the ICMP echo type and enter its view.

type icmp-echo

N/A

4.       Specify the destination IP address for ICMP echo requests.

destination ip ip-address

By default, no destination IP address is specified.

5.       (Optional.) Set the payload size for each ICMP echo request.

data-size size

The default setting is 100 bytes.

6.       (Optional.) Specify the payload fill string for ICMP echo requests.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

7.       (Optional.) Specify the output interface for ICMP echo requests.

out interface interface-type interface-number

By default, the output interface for ICMP echo requests is not specified. The NQA client determines the output interface based on the routing table lookup.

8.       (Optional.) Specify the source IP address for ICMP echo requests.

·         Use the IP address of the specified interface as the source IP address:
source interface interface-type interface-number

·         Specify the source IP address:
source ip ip-address

By default, the requests take the primary IP address of the output interface as their source IP address.

If you execute the source interface and source ip commands multiple times, the most recent configuration takes effect.

The specified source interface must be up.

The specified source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

9.       (Optional.) Specify the next hop IP address for ICMP echo requests.

next-hop ip ip-address

By default, no next hop IP address is configured.
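Putting the required steps together, a minimal ICMP echo operation might be configured as follows. The operation name, destination address, and payload size are illustrative:

<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] destination ip 10.1.1.2
[Sysname-nqa-admin-test-icmp-echo] data-size 64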

 

Configuring the ICMP jitter operation

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

The ICMP jitter operation measures unidirectional and bidirectional jitters. The operation result helps you to determine whether the network can carry jitter-sensitive services such as real-time voice and video services.

The ICMP jitter operation works as follows:

1.        The NQA client sends ICMP packets to the destination device.

2.        The destination device time stamps each packet it receives, and then sends the packet back to the NQA client.

3.        Upon receiving the responses, the NQA client calculates the jitter according to the timestamps.

To configure the ICMP jitter operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the ICMP jitter type and enter its view.

type icmp-jitter

N/A

4.       Specify the destination address of ICMP packets.

destination ip ip-address

By default, no destination IP address is specified.

5.       (Optional.) Set the number of ICMP packets sent in one ICMP jitter operation.

probe packet-number packet-number

The default setting is 10.

6.       (Optional.) Set the interval for sending ICMP packets.

probe packet-interval interval

The default setting is 20 milliseconds.

7.       (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out.

probe packet-timeout timeout

The default setting is 3000 milliseconds.

8.       (Optional.) Specify the source IP address for ICMP packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no ICMP packets can be sent out.

 

 

NOTE:

Use the display nqa result or display nqa statistics command to verify the ICMP jitter operation. The display nqa history command does not display the ICMP jitter operation results or statistics.
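For example, after configuring an ICMP jitter operation named admin/test, you might start it and display the results as follows. The nqa schedule syntax is assumed from the "Scheduling the NQA operation on the NQA client" task, which is not shown in this excerpt:

<Sysname> system-view
[Sysname] nqa schedule admin test start-time now lifetime forever
[Sysname] quit
<Sysname> display nqa result admin test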

 

Configuring the DHCP operation

The DHCP operation measures whether the DHCP server can respond to client requests. It also measures the amount of time it takes the NQA client to obtain an IP address from a DHCP server.

The NQA client simulates the DHCP relay agent to forward DHCP requests for IP address acquisition from the DHCP server. The interface that performs the DHCP operation does not change its IP address. When the DHCP operation completes, the NQA client sends a packet to release the obtained IP address.

To configure the DHCP operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the DHCP type and enter its view.

type dhcp

N/A

4.       Specify the IP address of the DHCP server as the destination IP address of DHCP packets.

destination ip ip-address

By default, no destination IP address is specified.

5.       (Optional.) Specify an output interface for DHCP request packets.

out interface interface-type interface-number

By default, the output interface for DHCP request packets is not specified. The NQA client determines the output interface based on the routing table lookup.

6.       (Optional.) Specify the source IP address of DHCP request packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The specified source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no probe packets can be sent out.

The NQA client adds the source IP address to the giaddr field in DHCP requests to be sent to the DHCP server. For more information about the giaddr field, see Layer 3—IP Services Configuration Guide.

 

Configuring the DNS operation

The DNS operation measures the time for the NQA client to translate a domain name into an IP address through a DNS server.

A DNS operation simulates domain name resolution and does not save the obtained DNS entry.

To configure the DNS operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the DNS type and enter its view.

type dns

N/A

4.       Specify the IP address of the DNS server as the destination address of DNS packets.

destination ip ip-address

By default, no destination IP address is specified.

5.       Specify the domain name to be translated.

resolve-target domain-name

By default, no domain name is specified.
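A minimal DNS operation might look like this. The DNS server address and domain name are illustrative:

<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type dns
[Sysname-nqa-admin-test-dns] destination ip 10.2.2.2
[Sysname-nqa-admin-test-dns] resolve-target host.com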

 

Configuring the FTP operation

The FTP operation measures the time for the NQA client to transfer a file to or download a file from an FTP server.

When you configure the FTP operation, follow these restrictions and guidelines:

·          When you perform the put operation with the filename command configured, make sure the file exists on the NQA client.

·          If you get a file from the FTP server, make sure the file specified in the URL exists on the FTP server.

·          The NQA client does not save the file obtained from the FTP server.

·          Use a small file for the FTP operation. A large file might result in a transfer failure because of timeout, or might affect other services by occupying too much network bandwidth.

To configure the FTP operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the FTP type and enter its view.

type ftp

N/A

4.       Specify the URL of the destination FTP server.

url url

By default, no URL is specified for the destination FTP server.

Enter the URL in one of the following formats:

·         ftp://host/filename.

·         ftp://host:port/filename.

When you perform the get operation, the file name is required.

5.       (Optional.) Specify the source IP address of FTP request packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no FTP requests can be sent out.

6.       (Optional.) Specify the FTP operation type.

operation { get | put }

By default, the FTP operation type is get, which means obtaining files from the FTP server.

7.       Specify an FTP login username.

username username

By default, no FTP login username is configured.

8.       Specify an FTP login password.

password { cipher | simple } string

By default, no FTP login password is configured.

9.       (Optional.) Specify the name of a file to be transferred.

filename file-name

By default, no file is specified.

This step is required if you perform the put operation.

10.     Set the data transmission mode.

mode { active | passive }

The default mode is active.
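A minimal get-type FTP operation might look like this. The URL, username, and password are illustrative:

<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type ftp
[Sysname-nqa-admin-test-ftp] url ftp://10.3.3.3/test.txt
[Sysname-nqa-admin-test-ftp] username ftpuser
[Sysname-nqa-admin-test-ftp] password simple ftppwd
[Sysname-nqa-admin-test-ftp] operation get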

 

Configuring the HTTP operation

An HTTP operation measures the time for the NQA client to obtain data from an HTTP server.

To configure an HTTP operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the HTTP type and enter its view.

type http

N/A

4.       Specify the URL of the destination HTTP server.

url url

By default, no URL is specified for the destination HTTP server.

Enter the URL in one of the following formats:

·         http://host/resource.

·         http://host:port/resource.

5.       Specify an HTTP login username.

username username

By default, no HTTP login username is specified.

6.       Specify an HTTP login password.

password { cipher | simple } string

By default, no HTTP login password is specified.

7.       (Optional.) Specify the source IP address of request packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no request packets can be sent out.

8.       (Optional.) Specify the HTTP version.

version { v1.0 | v1.1 }

By default, HTTP 1.0 is used.

9.       (Optional.) Specify the HTTP operation type.

operation { get | post | raw }

By default, the HTTP operation type is get, which means obtaining data from the HTTP server.

If you set the HTTP operation type to raw, configure the content of the HTTP request to be sent to the HTTP server in raw request view.

10.     (Optional.) Enter raw request view.

raw-request

Every time you enter raw request view, the previously configured content of the HTTP request is removed.

11.     (Optional.) Specify the HTTP request content.

Enter or paste the content.

By default, no contents are specified.

This step is required for the raw operation.

12.     Save the input and return to HTTP operation view.

quit

N/A
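A minimal get-type HTTP operation might look like this. The URL, username, and password are illustrative:

<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type http
[Sysname-nqa-admin-test-http] url http://10.4.4.4/index.html
[Sysname-nqa-admin-test-http] username httpuser
[Sysname-nqa-admin-test-http] password simple httppwd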

 

Configuring the UDP jitter operation

CAUTION:

To ensure successful UDP jitter operations and avoid affecting existing services, do not perform the operations on well-known ports from 1 to 1023.

 

Jitter means inter-packet delay variance. A UDP jitter operation measures unidirectional and bidirectional jitters. The operation result helps you to determine whether the network can carry jitter-sensitive services such as real-time voice and video services.

The UDP jitter operation works as follows:

1.        The NQA client sends UDP packets to the destination port.

2.        The destination device time stamps each packet it receives, and then sends the packet back to the NQA client.

3.        Upon receiving the responses, the NQA client calculates the jitter according to the timestamps.

The UDP jitter operation requires both the NQA server and the NQA client. Before you perform the UDP jitter operation, configure the UDP listening service on the NQA server. For more information about UDP listening service configuration, see "Configuring the NQA server."
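A sketch of the paired configuration follows. The device names, IP address, and port number are illustrative; the address and port must be identical on both sides:

# On the NQA server:
<ServerName> system-view
[ServerName] nqa server enable
[ServerName] nqa server udp-echo 10.2.2.2 9000

# On the NQA client:
<ClientName> system-view
[ClientName] nqa entry admin test
[ClientName-nqa-admin-test] type udp-jitter
[ClientName-nqa-admin-test-udp-jitter] destination ip 10.2.2.2
[ClientName-nqa-admin-test-udp-jitter] destination port 9000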

To configure a UDP jitter operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the UDP jitter type and enter its view.

type udp-jitter

N/A

4.       Specify the destination address of UDP packets.

destination ip ip-address

By default, no destination IP address is specified.

The destination IP address must be the same as the IP address of the listening service on the NQA server.

5.       Specify the destination port of UDP packets.

destination port port-number

By default, no destination port number is specified.

The destination port number must be the same as the port number of the listening service on the NQA server.

6.       (Optional.) Specify the source IP address for UDP packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no UDP packets can be sent out.

7.       (Optional.) Specify the source port number of UDP packets.

source port port-number

By default, no source port number is specified.

8.       (Optional.) Set the payload size for each UDP packet.

data-size size

The default setting is 100 bytes.

9.       (Optional.) Specify the payload fill string for UDP packets.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

10.     (Optional.) Set the number of UDP packets sent in one UDP jitter operation.

probe packet-number packet-number

The default setting is 10.

11.     (Optional.) Set the interval for sending UDP packets.

probe packet-interval interval

The default setting is 20 milliseconds.

12.     (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out.

probe packet-timeout timeout

The default setting is 3000 milliseconds.

 

 

NOTE:

Use the display nqa result or display nqa statistics command to verify the UDP jitter operation. The display nqa history command does not display the UDP jitter operation results or statistics.

 

Configuring the SNMP operation

The SNMP operation measures the time for the NQA client to get a response packet from an SNMP agent.

To configure the SNMP operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the SNMP type and enter its view.

type snmp

N/A

4.       Specify the destination address of SNMP packets.

destination ip ip-address

By default, no destination IP address is specified.

5.       (Optional.) Specify the source port of SNMP packets.

source port port-number

By default, no source port number is specified.

6.       (Optional.) Specify the source IP address of SNMP packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no SNMP packets can be sent out.

7.       (Optional.) Specify the read-only community name for the SNMP operation if the operation uses the SNMPv1 or SNMPv2c agent.

community read { cipher | simple } community-name

The default read-only community name is public.

Make sure the specified community name is the same as the community name configured on the SNMP agent.

 

Configuring the TCP operation

The TCP operation measures the time for the NQA client to establish a TCP connection to a port on the NQA server.

The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP operation, configure a TCP listening service on the NQA server. For more information about the TCP listening service configuration, see "Configuring the NQA server."

To configure the TCP operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the TCP type and enter its view.

type tcp

N/A

4.       Specify the destination address of TCP packets.

destination ip ip-address

By default, no destination IP address is specified.

The destination address must be the same as the IP address of the listening service configured on the NQA server.

5.       Specify the destination port of TCP packets.

destination port port-number

By default, no destination port number is configured.

The destination port number must be the same as the port number of the listening service on the NQA server.

6.       (Optional.) Specify the source IP address of TCP packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no TCP packets can be sent out.

 

Configuring the UDP echo operation

The UDP echo operation measures the round-trip time between the client and a UDP port on the NQA server.

The UDP echo operation requires both the NQA server and the NQA client. Before you perform a UDP echo operation, configure a UDP listening service on the NQA server. For more information about the UDP listening service configuration, see "Configuring the NQA server."

To configure the UDP echo operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the UDP echo type and enter its view.

type udp-echo

N/A

4.       Specify the destination address of UDP packets.

destination ip ip-address

By default, no destination IP address is specified.

The destination address must be the same as the IP address of the listening service configured on the NQA server.

5.       Specify the destination port of UDP packets.

destination port port-number

By default, no destination port number is specified.

The destination port number must be the same as the port number of the listening service on the NQA server.

6.       (Optional.) Set the payload size for each UDP packet.

data-size size

The default setting is 100 bytes.

7.       (Optional.) Specify the payload fill string for UDP packets.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

8.       (Optional.) Specify the source port of UDP packets.

source port port-number

By default, no source port number is specified.

9.       (Optional.) Specify the source IP address of UDP packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no UDP packets can be sent out.

 

Configuring the UDP tracert operation

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

The UDP tracert operation determines the routing path from the source device to the destination device.

Before you configure the UDP tracert operation, perform the following tasks:

·          Enable sending ICMP time exceeded messages on the intermediate devices between the source and destination devices. If the intermediate devices are H3C devices, use the ip ttl-expires enable command.

·          Enable sending ICMP destination unreachable messages on the destination device. If the destination device is an H3C device, use the ip unreachables enable command.

For more information about the ip ttl-expires enable and ip unreachables enable commands, see Layer 3—IP Services Command Reference.
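For example, assuming all devices are H3C devices, the prerequisites and a minimal UDP tracert operation might be configured as follows. The device names, destination address, and values are illustrative:

# On each intermediate device, enable sending ICMP time exceeded messages.
[Intermediate] ip ttl-expires enable

# On the destination device, enable sending ICMP destination unreachable messages.
[Destination] ip unreachables enable

# On the NQA client, configure the UDP tracert operation.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type udp-tracert
[Sysname-nqa-admin-test-udp-tracert] destination ip 10.5.5.5
[Sysname-nqa-admin-test-udp-tracert] max-failure 3
[Sysname-nqa-admin-test-udp-tracert] init-ttl 1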

To configure the UDP tracert operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the UDP tracert operation type and enter its view.

type udp-tracert

N/A

4.       Specify the destination device for the operation.

·         Specify the destination device by its host name:
destination host
host-name

·         Specify the destination device by its IP address:
destination ip ip-address

By default, no destination IP address or host name is specified.

5.       (Optional.) Specify the destination port of UDP packets.

destination port port-number

By default, the destination port number is 33434.

This port number must be an unused number on the destination device, so that the destination device can reply with ICMP port unreachable messages.

6.       (Optional.) Set the payload size for each UDP packet.

data-size size

The default setting is 100 bytes.

7.       (Optional.) Enable the no-fragmentation feature.

no-fragment enable

By default, the no-fragmentation feature is disabled.

8.       (Optional.) Set the maximum number of consecutive probe failures.

max-failure times

The default setting is 5.

9.       (Optional.) Set the TTL value for UDP packets in the start round of the UDP tracert operation.

init-ttl value

The default setting is 1.

10.     (Optional.) Specify an output interface for UDP packets.

out interface interface-type interface-number

By default, the output interface for UDP packets is not specified. The NQA client determines the output interface based on the routing table lookup.

11.     (Optional.) Specify the source port of UDP packets.

source port port-number

By default, no source port number is specified.

12.     (Optional.) Specify the source IP address of UDP packets.

·         Specify the IP address of the specified interface as the source IP address:
source interface interface-type interface-number

·         Specify the source IP address:
source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

If you execute the source ip and source interface commands multiple times, the most recent configuration takes effect.

The specified source interface must be up. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no probe packets can be sent out.
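
Putting the steps above together, a minimal UDP tracert configuration might look like the following sketch (the device name and IP address are hypothetical):

```
<DeviceA> system-view
[DeviceA] nqa entry admin udptrace
[DeviceA-nqa-admin-udptrace] type udp-tracert
[DeviceA-nqa-admin-udptrace-udp-tracert] destination ip 10.1.1.2
[DeviceA-nqa-admin-udptrace-udp-tracert] max-failure 3
[DeviceA-nqa-admin-udptrace-udp-tracert] init-ttl 1
```

The optional commands keep their defaults unless your path requires otherwise (for example, raise init-ttl to skip known-good hops).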

 

Configuring the voice operation

CAUTION:

To ensure successful voice operations and avoid affecting existing services, do not perform the operations on well-known ports from 1 to 1023.

 

The voice operation measures VoIP network performance.

The voice operation works as follows:

1.        The NQA client sends voice packets at sending intervals to the destination device (NQA server).

The voice packets are of one of the following codec types:

·          G.711 A-law.

·          G.711 μ-law.

·          G.729 A-law.

2.        The destination device time stamps each voice packet it receives and sends it back to the source.

3.        Upon receiving the packet, the source device calculates the jitter and one-way delay based on the timestamp.

The following parameters that reflect VoIP network performance can be calculated by using the metrics gathered by the voice operation:

·          Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality in a VoIP network. It is determined by packet loss and delay. A higher value represents a lower service quality.

·          Mean Opinion Score (MOS)—A MOS value, in the range of 1 to 5, can be evaluated from the ICPIF value. A higher value represents a higher service quality.

The evaluation of voice quality depends on the users' tolerance for voice quality degradation. For users with a higher tolerance, use the advantage-factor command to set an advantage factor. When the system calculates the ICPIF value, it subtracts the advantage factor to modify the ICPIF and MOS values for voice quality evaluation.

The voice operation requires both the NQA server and the NQA client. Before you perform a voice operation, configure a UDP listening service on the NQA server. For more information about UDP listening service configuration, see "Configuring the NQA server."

The voice operation cannot be repeated; it performs only one probe.

To configure the voice operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the voice type and enter its view.

type voice

N/A

4.       Specify the destination address of voice packets.

destination ip ip-address

By default, no destination IP address is configured.

The destination IP address must be the same as the IP address of the listening service on the NQA server.

5.       Specify the destination port of voice packets.

destination port port-number

By default, no destination port number is configured.

The destination port number must be the same as the port number of the listening service on the NQA server.

6.       (Optional.) Specify the codec type.

codec-type { g711a | g711u | g729a }

By default, the codec type is G.711 A-law.

7.       (Optional.) Set the advantage factor for calculating MOS and ICPIF values.

advantage-factor factor

By default, the advantage factor is 0.

8.       (Optional.) Specify the source IP address of voice packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no voice packets can be sent out.

9.       (Optional.) Specify the source port number of voice packets.

source port port-number

By default, no source port number is specified.

10.     (Optional.) Set the payload size for each voice packet.

data-size size

By default, the voice packet size varies by codec type. The default packet size is 172 bytes for the G.711 A-law and G.711 μ-law codec types, and 32 bytes for the G.729 A-law codec type.

11.     (Optional.) Specify the payload fill string for voice packets.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

12.     (Optional.) Set the number of voice packets to be sent in a voice probe.

probe packet-number packet-number

The default setting is 1000.

13.     (Optional.) Set the interval for sending voice packets.

probe packet-interval interval

The default setting is 20 milliseconds.

14.     (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out.

probe packet-timeout timeout

The default setting is 5000 milliseconds.

 

 

NOTE:

Use the display nqa result or display nqa statistics command to verify the voice operation. The display nqa history command does not display the voice operation results or statistics.
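
As a minimal sketch of the procedure above (device names, addresses, and the port number are hypothetical; the port avoids the well-known range per the caution above):

```
# On the NQA server, configure a UDP listening service for the voice operation.
<DeviceB> system-view
[DeviceB] nqa server enable
[DeviceB] nqa server udp-echo 10.2.2.2 9000

# On the NQA client, configure the voice operation.
<DeviceA> system-view
[DeviceA] nqa entry admin voice1
[DeviceA-nqa-admin-voice1] type voice
[DeviceA-nqa-admin-voice1-voice] destination ip 10.2.2.2
[DeviceA-nqa-admin-voice1-voice] destination port 9000
[DeviceA-nqa-admin-voice1-voice] codec-type g729a
```

The destination address and port must match the listening service on the NQA server, as noted in steps 4 and 5.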

 

Configuring the DLSw operation

The DLSw operation measures the response time of a DLSw device.

To configure the DLSw operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the DLSw type and enter its view.

type dlsw

N/A

4.       Specify the destination IP address of probe packets.

destination ip ip-address

By default, no destination IP address is specified.

5.       (Optional.) Specify the source IP address of probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.
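
The DLSw procedure is short; a sketch with a hypothetical device name and destination address:

```
<DeviceA> system-view
[DeviceA] nqa entry admin dlsw1
[DeviceA-nqa-admin-dlsw1] type dlsw
[DeviceA-nqa-admin-dlsw1-dlsw] destination ip 10.1.1.2
```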

 

Configuring the path jitter operation

The path jitter operation measures the jitter, negative jitter, and positive jitter from the NQA client to each hop on the path to the destination.

Before you configure the path jitter operation, perform the following tasks:

·          Enable sending ICMP time exceeded messages on the intermediate devices between the source and destination devices. If the intermediate devices are H3C devices, use the ip ttl-expires enable command.

·          Enable sending ICMP destination unreachable messages on the destination device. If the destination device is an H3C device, use the ip unreachables enable command.

For more information about the ip ttl-expires enable and ip unreachables enable commands, see Layer 3—IP Services Command Reference.

To configure the path jitter operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify the path jitter type and enter its view.

type path-jitter

N/A

4.       Specify the destination address of ICMP echo requests.

destination ip ip-address

By default, no destination IP address is specified.

5.       (Optional.) Set the payload size for each ICMP echo request.

data-size size

The default setting is 100 bytes.

6.       (Optional.) Specify the payload fill string for ICMP echo requests.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

7.       Specify the source IP address of ICMP echo requests.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no ICMP echo requests can be sent out.

8.       (Optional.) Set the number of ICMP echo requests to be sent in a path jitter operation.

probe packet-number packet-number

The default setting is 10.

9.       (Optional.) Set the interval for sending ICMP echo requests.

probe packet-interval interval

The default setting is 20 milliseconds.

10.     (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out.

probe packet-timeout timeout

The default setting is 3000 milliseconds.

11.     (Optional.) Specify an LSR path.

lsr-path ip-address&<1-8>

By default, no LSR path is specified.

The path jitter operation uses tracert to detect the LSR path to the destination, and then sends ICMP echo requests to each hop on the LSR path.

12.     (Optional.) Perform the path jitter operation only on the destination address.

target-only

By default, the path jitter operation is performed on each hop on the path to the destination.
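
A minimal sketch of the procedure above (the device name and destination address are hypothetical), assuming the ICMP prerequisites listed earlier are already enabled on the intermediate and destination devices:

```
<DeviceA> system-view
[DeviceA] nqa entry admin pjitter
[DeviceA-nqa-admin-pjitter] type path-jitter
[DeviceA-nqa-admin-pjitter-path-jitter] destination ip 10.1.1.2
[DeviceA-nqa-admin-pjitter-path-jitter] probe packet-number 20
```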

 

Configuring optional parameters for the NQA operation

Unless otherwise specified, the following optional parameters apply to all types of NQA operations.

The parameter settings take effect only on the current operation.

To configure optional parameters for an NQA operation:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify an NQA operation type and enter its view.

type { dhcp | dlsw | dns | ftp | http | icmp-echo | icmp-jitter | path-jitter | snmp | tcp | udp-echo | udp-jitter | udp-tracert | voice }

N/A

4.       Configure a description.

description text

By default, no description is configured.

5.       Set the interval at which the NQA operation repeats.

frequency interval

For a voice or path jitter operation, the default setting is 60000 milliseconds.

For other types of operations, the default setting is 0 milliseconds, and only one operation is performed.

If the operation is not completed when the interval expires, the next operation does not start.

6.       Specify the probe times.

probe count times

By default:

·         In a UDP tracert operation, the NQA client performs three probes to each hop to the destination.

·         In other types of operations, the NQA client performs one probe to the destination per operation.

This command is not available for the path jitter and voice operations. Each of these operations performs only one probe.

7.       Set the probe timeout time.

probe timeout timeout

The default setting is 3000 milliseconds.

This command is not available for the ICMP jitter, path jitter, UDP jitter, or voice operations.

8.       Set the maximum number of hops that the probe packets can traverse.

ttl value

The default setting is 30 for probe packets of the UDP tracert operation, and is 20 for probe packets of other types of operations.

This command is not available for the DHCP or path jitter operation.

9.       Set the ToS value in the IP header of probe packets.

tos value

The default setting is 0.

10.     Enable the routing table bypass feature.

route-option bypass-route

By default, the routing table bypass feature is disabled.

This command is not available for the DHCP and path jitter operations.

11.     Specify the VPN instance where the operation is performed.

vpn-instance vpn-instance-name

By default, the operation is performed on the public network.
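
The optional parameters above can be combined in one operation, as in this sketch (the device name, address, and values are hypothetical; an ICMP echo operation is used for illustration):

```
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
[DeviceA-nqa-admin-test1-icmp-echo] description test-to-gateway
[DeviceA-nqa-admin-test1-icmp-echo] frequency 10000
[DeviceA-nqa-admin-test1-icmp-echo] probe count 3
[DeviceA-nqa-admin-test1-icmp-echo] probe timeout 500
[DeviceA-nqa-admin-test1-icmp-echo] tos 5
```

With frequency 10000, the operation repeats every 10 seconds; with frequency at its default of 0, it would run only once.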

 

Configuring the collaboration feature

Collaboration is implemented by associating a reaction entry of an NQA operation with a track entry. The reaction entry monitors the NQA operation. If the number of operation failures reaches the specified threshold, the configured action is triggered.

To configure the collaboration feature:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify an NQA operation type and enter its view.

type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo }

The collaboration feature is not available for the ICMP jitter, path jitter, UDP tracert, UDP jitter, or voice operations.

4.       Configure a reaction entry.

reaction item-number checked-element probe-fail threshold-type consecutive consecutive-occurrences action-type trigger-only

By default, no reaction entry is configured.

You cannot modify the content of an existing reaction entry.

5.       Return to system view.

quit

N/A

6.       Associate Track with NQA.

See High Availability Configuration Guide.

N/A

7.       Associate Track with an application module.

See High Availability Configuration Guide.

N/A
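
A sketch of steps 1 through 5, plus the Track association described in step 6 (the device name, address, and track entry number are hypothetical; see High Availability Configuration Guide for the exact track command syntax on your release):

```
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
[DeviceA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 3 action-type trigger-only
[DeviceA-nqa-admin-test1-icmp-echo] quit
[DeviceA] track 1 nqa entry admin test1 reaction 1
```

Here, three consecutive probe failures would trigger track entry 1, which an application module (for example, static routing) can then act on.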

 

Configuring threshold monitoring

This feature allows you to monitor the NQA operation running status.

Threshold types

An NQA operation supports the following threshold types:

·          average—If the average value for the monitored performance metric either exceeds the upper threshold or goes below the lower threshold, a threshold violation occurs.

·          accumulate—If the total number of times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.

·          consecutive—If the number of consecutive times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.

Threshold violations for the average or accumulate threshold type are determined on a per NQA operation basis. The threshold violations for the consecutive type are determined from the time the NQA operation starts.

Triggered actions

The following actions might be triggered:

·          none—NQA displays results only on the terminal screen. It does not send traps to the NMS.

·          trap-only—NQA displays results on the terminal screen, and meanwhile it sends traps to the NMS.

·          trigger-only—NQA displays results on the terminal screen, and meanwhile triggers other modules for collaboration.

The DNS operation does not support the action of sending trap messages.

Reaction entry

In a reaction entry, configure a monitored element, a threshold type, and an action to be triggered to implement threshold monitoring.

The state of a reaction entry can be invalid, over-threshold, or below-threshold.

·          Before an NQA operation starts, the reaction entry is in invalid state.

·          If the threshold is violated, the state of the entry is set to over-threshold. Otherwise, the state of the entry is set to below-threshold.

If the action is trap-only for a reaction entry, a trap message is sent to the NMS when the state of the entry changes.

Configuration procedure

Before you configure threshold monitoring, configure the destination address of the trap messages by using the snmp-agent target-host command. For more information about the command, see Network Management and Monitoring Command Reference.

To configure threshold monitoring:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify an NQA operation type and enter its view.

type { dhcp | dlsw | dns | ftp | http | icmp-echo | icmp-jitter | snmp | tcp | udp-echo | udp-jitter | udp-tracert | voice }

The threshold monitoring feature is not available for the path jitter operation.

4.       Enable sending traps to the NMS when specific conditions are met.

reaction trap { path-change | probe-failure consecutive-probe-failures | test-complete | test-failure [ accumulate-probe-failures ] }

By default, no traps are sent to the NMS.

The ICMP jitter, UDP jitter, and voice operations support only the test-complete keyword.

The following parameters are not available for the UDP tracert operation:

·         The probe-failure consecutive-probe-failures option.

·         The accumulate-probe-failures argument.

5.       Configure threshold monitoring.

·         Monitor the operation duration (not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice operations):
reaction item-number checked-element probe-duration threshold-type { accumulate accumulate-occurrences | average | consecutive consecutive-occurrences } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]

·         Monitor failure times (not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice operations):
reaction item-number checked-element probe-fail threshold-type { accumulate accumulate-occurrences | consecutive consecutive-occurrences } [ action-type { none | trap-only } ]

·         Monitor the round-trip time (only for the ICMP jitter, UDP jitter, and voice operations):
reaction item-number checked-element rtt threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]

·         Monitor packet loss (only for the ICMP jitter, UDP jitter, and voice operations):
reaction item-number checked-element packet-loss threshold-type accumulate accumulate-occurrences [ action-type { none | trap-only } ]

·         Monitor the one-way jitter (only for the ICMP jitter, UDP jitter, and voice operations):
reaction item-number checked-element { jitter-ds | jitter-sd } threshold-type { accumulate accumulate-occurrences | average } threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]

·         Monitor the one-way delay (only for the ICMP jitter, UDP jitter, and voice operations):
reaction item-number checked-element { owd-ds | owd-sd } threshold-value upper-threshold lower-threshold

·         Monitor the ICPIF value (only for the voice operation):
reaction item-number checked-element icpif threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]

·         Monitor the MOS value (only for the voice operation):
reaction item-number checked-element mos threshold-value upper-threshold lower-threshold [ action-type { none | trap-only } ]

N/A
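
As a sketch, the following monitors the probe duration of an ICMP echo operation and sends a trap when the average duration falls outside 5 to 50 milliseconds (the device name, address, and thresholds are hypothetical; the trap destination must first be configured with the snmp-agent target-host command, as noted above):

```
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
[DeviceA-nqa-admin-test1-icmp-echo] reaction trap test-failure 5
[DeviceA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-duration threshold-type average threshold-value 50 5 action-type trap-only
```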

 

Configuring the NQA statistics collection feature

NQA groups the statistics collected within the same collection interval into a statistics group. To display information about the statistics groups, use the display nqa statistics command.

If you use the frequency command to set the interval to 0 milliseconds for an NQA operation, NQA does not generate any statistics group for the operation.

To configure the NQA statistics collection feature:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify an NQA operation type and enter its view.

type { dhcp | dlsw | dns | ftp | http | icmp-echo | icmp-jitter | path-jitter | snmp | tcp | udp-echo | udp-jitter | voice }

The NQA statistics collection feature is not available for the UDP tracert operation.

4.       (Optional.) Set the interval for collecting the statistics.

statistics interval interval

The default setting is 60 minutes.

5.       (Optional.) Set the maximum number of statistics groups that can be saved.

statistics max-group number

The default setting is two groups.

To disable the NQA statistics collection feature, set the maximum number to 0.

When the maximum number of statistics groups is reached, the oldest statistics group is deleted to make room for a new one.

6.       (Optional.) Set the hold time of statistics groups.

statistics hold-time hold-time

The default setting is 120 minutes.

A statistics group is deleted when its hold time expires.
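
A sketch of the collection settings on a repeating operation (the device name, address, and values are hypothetical):

```
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
[DeviceA-nqa-admin-test1-icmp-echo] frequency 10000
[DeviceA-nqa-admin-test1-icmp-echo] statistics interval 30
[DeviceA-nqa-admin-test1-icmp-echo] statistics max-group 5
[DeviceA-nqa-admin-test1-icmp-echo] statistics hold-time 60
```

The frequency command matters here: with the default interval of 0 milliseconds, no statistics groups are generated, as noted above.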

 

Configuring the saving of NQA history records

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA operation and enter NQA operation view.

nqa entry admin-name operation-tag

By default, no NQA operations exist.

3.       Specify an NQA operation type and enter its view.

type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-tracert }

The history record saving feature is not available for the ICMP jitter, UDP jitter, path jitter, or voice operations.

4.       Enable the saving of history records for the NQA operation.

history-record enable

By default, this feature is enabled only for the UDP tracert operation.

5.       (Optional.) Set the lifetime of history records.

history-record keep-time keep-time

The default setting is 120 minutes.

A record is deleted when its lifetime is reached.

6.       (Optional.) Set the maximum number of history records that can be saved.

history-record number number

The default setting is 50.

If the maximum number of history records for an NQA operation is reached, the earliest history records are deleted.

7.       (Optional.) Display NQA history records.

display nqa history

N/A
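
A sketch of the history record settings (the device name, address, and values are hypothetical):

```
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
[DeviceA-nqa-admin-test1-icmp-echo] history-record enable
[DeviceA-nqa-admin-test1-icmp-echo] history-record keep-time 240
[DeviceA-nqa-admin-test1-icmp-echo] history-record number 10
```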

 

Scheduling the NQA operation on the NQA client

The NQA operation works between the specified start time and the end time (the start time plus operation duration). If the specified start time is ahead of the system time, the operation starts immediately. If both the specified start and end time are ahead of the system time, the operation does not start. To display the current system time, use the display clock command.

When you schedule an NQA operation, follow these restrictions and guidelines:

·          You cannot enter the operation type view or the operation view of a scheduled NQA operation.

·          A system time adjustment does not affect started or completed NQA operations. It affects only the NQA operations that have not started.

To schedule the NQA operation on the NQA client:

 

Step

Command

1.       Enter system view.

system-view

2.       Specify the scheduling parameters for an NQA operation.

nqa schedule admin-name operation-tag start-time { hh:mm:ss [ yyyy/mm/dd | mm/dd/yyyy ] | now } lifetime { lifetime | forever } [ recurring ]
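
For example, to start a previously configured operation immediately and let it run indefinitely (the operation name is hypothetical):

```
<DeviceA> system-view
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
```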

 

Configuring NQA templates on the NQA client

An NQA template is a set of operation parameters, such as the destination address, the destination port number, and the destination server URL. You can use an NQA template in a feature module to provide statistics. You can create multiple templates on a device, and each template must be uniquely named.

NQA templates support the DNS, FTP, HTTP, HTTPS, ICMP, SSL, TCP, TCP half open, and UDP operation types.

Some operation parameters for an NQA template can be specified by the template configuration or the feature that uses the template. When both are specified, the parameters in the template configuration take effect.

NQA template configuration task list

Tasks at a glance

(Required.) Perform at least one of the following tasks:

·         Configuring the ICMP template

·         Configuring the DNS template

·         Configuring the TCP template

·         Configuring the TCP half open template

·         Configuring the UDP template

·         Configuring the HTTP template

·         Configuring the HTTPS template

·         Configuring the FTP template

·         Configuring the SSL template

(Optional.) Configuring optional parameters for the NQA template

 

Configuring the ICMP template

A feature that uses the ICMP template performs the ICMP operation to measure the reachability of a destination device.

To configure the ICMP template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an ICMP template and enter its view.

nqa template icmp name

By default, no ICMP templates exist.

3.       (Optional.) Specify the destination IP address of the operation.

destination ip ip-address

By default, no destination IP address is configured.

4.       (Optional.) Set the payload size for each ICMP request.

data-size size

The default setting is 100 bytes.

5.       (Optional.) Specify the payload fill string for requests.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

6.       (Optional.) Specify the source IP address for ICMP echo requests.

·         Use the IP address of the specified interface as the source IP address:
source interface interface-type interface-number

·         Specify the source IP address:
source ip ip-address

By default, the requests take the primary IP address of the output interface as their source IP address.

If you execute the source interface and source ip commands multiple times, the most recent configuration takes effect.

The specified source interface must be up.

The specified source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

7.       (Optional.) Specify the next hop IP address for ICMP echo requests.

next-hop ip ip-address

By default, no next hop IP address is specified.

8.       (Optional.) Configure the probe result sending on a per-probe basis.

reaction trigger per-probe

By default, the probe result is sent to the feature that uses the template after three consecutive failed or successful probes.

If you execute the reaction trigger per-probe and reaction trigger probe-pass commands multiple times, the most recent configuration takes effect.

If you execute the reaction trigger per-probe and reaction trigger probe-fail commands multiple times, the most recent configuration takes effect.
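
A sketch of an ICMP template (the template name, device name, and address are hypothetical; the view prompt is indicative):

```
<DeviceA> system-view
[DeviceA] nqa template icmp icmptplt
[DeviceA-nqatplt-icmp-icmptplt] destination ip 10.1.1.2
[DeviceA-nqatplt-icmp-icmptplt] data-size 64
```

A feature module (for example, a health monitoring feature) then references the template by its name, icmptplt.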

 

Configuring the DNS template

A feature that uses the DNS template performs the DNS operation to determine the status of the server. In DNS template view, you can specify the address expected to be returned. If the returned IP addresses include the expected address, the DNS server is valid and the operation succeeds. Otherwise, the operation fails.

Create a mapping between the domain name and an address before you perform the DNS operation. For information about configuring the DNS server, see Layer 3—IP Services Configuration Guide.

To configure the DNS template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a DNS template and enter DNS template view.

nqa template dns name

By default, no DNS templates exist.

3.       (Optional.) Specify the destination IP address of DNS packets.

destination ip ip-address

By default, no destination IP address is specified.

4.       (Optional.) Specify the destination port number for the operation.

destination port port-number

By default, the destination port number is 53.

5.       Specify the domain name to be translated.

resolve-target domain-name

By default, no domain name is specified.

6.       Specify the domain name resolution type.

resolve-type A

By default, the type is type A.

A type A query resolves a domain name to a mapped IP address.

7.       (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

8.       (Optional.) Specify the source port for probe packets.

source port port-number

By default, no source port number is specified.

9.       (Optional.) Specify the IP address that is expected to be returned.

expect ip ip-address

By default, no expected IP address is specified.
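
A sketch of a DNS template that validates the server's answer (the template name, addresses, and domain name are hypothetical):

```
<DeviceA> system-view
[DeviceA] nqa template dns dnstplt
[DeviceA-nqatplt-dns-dnstplt] destination ip 10.1.1.10
[DeviceA-nqatplt-dns-dnstplt] resolve-target www.example.com
[DeviceA-nqatplt-dns-dnstplt] expect ip 10.2.2.2
```

If the returned addresses include 10.2.2.2, the DNS server is considered valid and the operation succeeds.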

 

Configuring the TCP template

A feature that uses the TCP template performs the TCP operation to test whether the NQA client can establish a TCP connection to a specific port on the server.

In TCP template view, you can specify the expected data to be returned. If you do not specify the expected data, the TCP operation tests only whether the client can establish a TCP connection to the server.

The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP operation, configure a TCP listening service on the NQA server. For more information about the TCP listening service configuration, see "Configuring the NQA server."

To configure the TCP template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a TCP template and enter its view.

nqa template tcp name

By default, no TCP templates exist.

3.       (Optional.) Specify the destination IP address of the operation.

destination ip ip-address

By default, no destination IP address is specified.

The destination address must be the same as the IP address of the listening service configured on the NQA server.

4.       (Optional.) Specify the destination port number for the operation.

destination port port-number

By default, no destination port number is specified.

The destination port number must be the same as the port number of the listening service on the NQA server.

5.       (Optional.) Specify the payload fill string for requests.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

6.       (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

7.       (Optional.) Configure the expected data.

expect data expression [ offset number ]

By default, no expected data is configured.

The NQA client performs the expected data check only when you configure both the data-fill and expect data commands.
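
A sketch of a TCP template that tests connection establishment only (the template name, address, and port are hypothetical; the port must match the listening service on the NQA server):

```
<DeviceA> system-view
[DeviceA] nqa template tcp tcptplt
[DeviceA-nqatplt-tcp-tcptplt] destination ip 10.2.2.2
[DeviceA-nqatplt-tcp-tcptplt] destination port 9000
```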

 

Configuring the TCP half open template

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

A feature that uses the TCP half open template performs the TCP half open operation to test whether the TCP service is available on the server. The TCP half open operation is used when the feature cannot get a response from the TCP server through an existing TCP connection.

In the TCP half open operation, the NQA client sends a TCP ACK packet to the server. If the client receives an RST packet, it considers that the TCP service is available on the server.

To configure the TCP half open template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a TCP half open template and enter its view.

nqa template tcphalfopen name

By default, no TCP half open templates exist.

3.       (Optional.) Specify the destination IP address of the operation.

destination ip ip-address

By default, no destination IP address is specified.

4.       (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

5.       (Optional.) Specify the next hop IP address for the probe packets.

next-hop ip ip-address

By default, no next hop IP address is specified.

6.       (Optional.) Configure the probe result sending on a per-probe basis.

reaction trigger per-probe

By default, the probe result is sent to the feature that uses the template after three consecutive failed or successful probes.

If you execute the reaction trigger per-probe and reaction trigger probe-pass commands multiple times, the most recent configuration takes effect.

If you execute the reaction trigger per-probe and reaction trigger probe-fail commands multiple times, the most recent configuration takes effect.
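As an illustration of the steps above, the following commands sketch a TCP half open template that reports the result of each probe. The template name tplt1 and the address 10.2.2.2 are illustrative, and the view prompts are abbreviated.

```
<Sysname> system-view
[Sysname] nqa template tcphalfopen tplt1
[Sysname-nqatplt-tcphalfopen-tplt1] destination ip 10.2.2.2
[Sysname-nqatplt-tcphalfopen-tplt1] reaction trigger per-probe
```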

 

Configuring the UDP template

IMPORTANT

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

A feature that uses the UDP template performs the UDP operation to test the following items:

·          Reachability of a specific port on the NQA server.

·          Availability of the requested service on the NQA server.

In UDP template view, you can specify the expected data to be returned. If you do not specify the expected data, the UDP operation tests only whether the client can receive the response packet from the server.

The UDP operation requires both the NQA server and the NQA client. Before you perform a UDP operation, configure a UDP listening service on the NQA server. For more information about the UDP listening service configuration, see "Configuring the NQA server."

To configure the UDP template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create a UDP template and enter its view.

nqa template udp name

By default, no UDP templates exist.

3.       (Optional.) Specify the destination IP address of the operation.

destination ip ip-address

By default, no destination IP address is specified.

The destination address must be the same as the IP address of the listening service configured on the NQA server.

4.       (Optional.) Specify the destination port number for the operation.

destination port port-number

By default, no destination port number is specified.

The destination port number must be the same as the port number of the listening service on the NQA server.

5.       (Optional.) Specify the payload fill string for the probe packets.

data-fill string

The default payload fill string is the hexadecimal string 00010203040506070809.

6.       (Optional.) Set the payload size for the probe packets.

data-size size

The default setting is 100 bytes.

7.       (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

8.       (Optional.) Configure the expected data.

expect data expression [ offset number ]

By default, no expected data is configured.

If you want to configure this command, make sure the data-fill command is already executed.
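For example, the following commands sketch a UDP template that fills requests with the string abcd and checks for the same string in responses. The template name tplt1, the address 10.2.2.2, and port 9000 are illustrative, and the view prompts are abbreviated. Note that data-fill is configured before expect data, as required.

```
<Sysname> system-view
[Sysname] nqa template udp tplt1
[Sysname-nqatplt-udp-tplt1] destination ip 10.2.2.2
[Sysname-nqatplt-udp-tplt1] destination port 9000
[Sysname-nqatplt-udp-tplt1] data-fill abcd
[Sysname-nqatplt-udp-tplt1] expect data abcd offset 0
```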

 

Configuring the HTTP template

A feature that uses the HTTP template performs the HTTP operation to measure the time it takes the NQA client to obtain data from an HTTP server.

The expected data is checked only when the expected data is configured and the HTTP response contains the Content-Length field in the HTTP header.

The status code of the HTTP packet is a three-digit decimal number that indicates the status of the HTTP server's response. The first digit defines the class of response.

Configure the HTTP server before you perform the HTTP operation.

To configure the HTTP template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an HTTP template and enter its view.

nqa template http name

By default, no HTTP templates exist.

3.       Specify the URL of the destination HTTP server.

url url

By default, no URL is specified for the destination HTTP server.

Enter the URL in one of the following formats:

·         http://host/resource.

·         http://host:port/resource.

4.       Specify an HTTP login username.

username username

By default, no HTTP login username is specified.

5.       Specify an HTTP login password.

password { cipher | simple } string

By default, no HTTP login password is specified.

6.       (Optional.) Specify the HTTP version.

version { v1.0 | v1.1 }

By default, HTTP 1.0 is used.

7.       (Optional.) Specify the HTTP operation type.

operation { get | post | raw }

By default, the HTTP operation type is get, which means obtaining data from the HTTP server.

If you set the HTTP operation type to raw, use the raw-request command to specify the content of the HTTP request to be sent to the HTTP server.

8.       (Optional.) Enter raw request view.

raw-request

This step is required for the raw operation.

Every time you enter the raw request view, the existing request content configuration is removed.

9.       (Optional.) Enter or paste the content of the HTTP request for the HTTP operation.

N/A

This step is required for the raw operation.

By default, the HTTP request content is not specified.

10.     (Optional.) Return to HTTP template view.

quit

The system automatically saves the configuration in raw request view before it returns to HTTP template view.

11.     (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

12.     (Optional.) Configure the expected status codes.

expect status status-list

By default, no expected status code is configured.

13.     (Optional.) Configure the expected data.

expect data expression [ offset number ]

By default, no expected data is configured.
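Combining the required steps above, the following commands sketch an HTTP template for a get operation that expects a 2xx success response. The template name tplt1, the URL, the credentials, and the expected status code 200 are illustrative, and the view prompts are abbreviated.

```
<Sysname> system-view
[Sysname] nqa template http tplt1
[Sysname-nqatplt-http-tplt1] url http://10.2.2.2/index.htm
[Sysname-nqatplt-http-tplt1] username admin
[Sysname-nqatplt-http-tplt1] password simple systemtest
[Sysname-nqatplt-http-tplt1] expect status 200
```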

 

Configuring the HTTPS template

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

A feature that uses the HTTPS template performs the HTTPS operation to measure the time it takes for the NQA client to obtain data from an HTTPS server.

The expected data is checked only when the expected data is configured and the HTTPS response contains the Content-Length field in the HTTP header.

The status code of the HTTPS packet is a three-digit decimal number that indicates the status of the HTTPS server's response. The first digit defines the class of response.

Before you perform the HTTPS operation, configure the HTTPS server and the SSL client policy for the SSL client. For information about configuring SSL client policies, see Security Configuration Guide.

To configure the HTTPS template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an HTTPS template and enter its view.

nqa template https name

By default, no HTTPS templates exist.

3.       Specify the URL of the destination HTTPS server.

url url

By default, no URL is specified for the destination HTTPS server.

Enter the URL in one of the following formats:

·         https://host/resource.

·         https://host:port/resource.

4.       Specify an HTTPS login username.

username username

By default, no HTTPS login username is specified.

5.       Specify an HTTPS login password.

password { cipher | simple } string

By default, no HTTPS login password is specified.

6.       Specify an SSL client policy.

ssl-client-policy policy-name

By default, no SSL client policy is specified.

7.       (Optional.) Specify the HTTPS version.

version { v1.0 | v1.1 }

By default, HTTPS 1.0 is used.

8.       (Optional.) Specify the HTTPS operation type.

operation { get | post | raw }

By default, the HTTPS operation type is get, which means obtaining data from the HTTPS server.

If you set the HTTPS operation type to raw, use the raw-request command to configure the content of the request to be sent to the HTTPS server.

9.       (Optional.) Enter raw request view.

raw-request

This step is required for the raw operation.

Every time you enter the raw request view, the previously configured request content is removed.

10.     (Optional.) Enter or paste the content of the HTTPS request for the HTTPS operation.

N/A

This step is required for the raw operation.

By default, the HTTPS request content is not specified.

11.     (Optional.) Return to HTTPS template view.

quit

The system automatically saves the configuration in raw request view before it returns to HTTPS template view.

12.     (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

13.     (Optional.) Configure the expected data.

expect data expression [ offset number ]

By default, no expected data is configured.

14.     (Optional.) Configure the expected status codes.

expect status status-list

By default, no expected status code is configured.
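As a sketch of the required steps above, the following commands configure an HTTPS template. The template name tplt1, the URL, the credentials, and the SSL client policy name policy1 are illustrative, and the view prompts are abbreviated; the SSL client policy must already exist on the device.

```
<Sysname> system-view
[Sysname] nqa template https tplt1
[Sysname-nqatplt-https-tplt1] url https://10.2.2.2/index.htm
[Sysname-nqatplt-https-tplt1] username admin
[Sysname-nqatplt-https-tplt1] password simple systemtest
[Sysname-nqatplt-https-tplt1] ssl-client-policy policy1
```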

 

Configuring the FTP template

A feature that uses the FTP template performs the FTP operation. The operation measures the time it takes the NQA client to transfer a file to or download a file from an FTP server.

Configure the username and password for the FTP client to log in to the FTP server before you perform an FTP operation. For information about configuring the FTP server, see Fundamentals Configuration Guide.

To configure the FTP template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an FTP template and enter its view.

nqa template ftp name

By default, no FTP templates exist.

3.       Specify the URL of the destination FTP server.

url url

By default, no URL is specified for the destination FTP server.

Enter the URL in one of the following formats:

·         ftp://host/filename.

·         ftp://host:port/filename.

When you perform the get operation, the file name is required.

When you perform the put operation, the filename argument does not take effect, even if it is specified. The file name for the put operation is determined by the filename command.

4.       (Optional.) Specify the FTP operation type.

operation { get | put }

By default, the FTP operation type is get, which means obtaining files from the FTP server.

5.       Specify an FTP login username.

username username

By default, no FTP login username is specified.

6.       Specify an FTP login password.

password { cipher | simple } string

By default, no FTP login password is specified.

7.       (Optional.) Specify the name of a file to be transferred.

filename filename

This step is required if you perform the put operation.

This configuration does not take effect for the get operation.

By default, no file is specified.

8.       Set the data transmission mode.

mode { active | passive }

The default mode is active.

9.       (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.
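The steps above can be sketched as follows for a put operation. The template name tplt1, the URL, the file name test.txt, and the credentials are illustrative, and the view prompts are abbreviated. Because this is a put operation, the file name comes from the filename command rather than the URL.

```
<Sysname> system-view
[Sysname] nqa template ftp tplt1
[Sysname-nqatplt-ftp-tplt1] url ftp://10.2.2.2
[Sysname-nqatplt-ftp-tplt1] operation put
[Sysname-nqatplt-ftp-tplt1] filename test.txt
[Sysname-nqatplt-ftp-tplt1] username admin
[Sysname-nqatplt-ftp-tplt1] password simple systemtest
```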

 

Configuring the SSL template

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

A feature that uses the SSL template performs the SSL operation to measure the time required to establish an SSL connection to an SSL server.

Before you configure the SSL template, configure the SSL client policy. For information about configuring SSL client policies, see Security Configuration Guide.

To configure the SSL template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an SSL template and enter its view.

nqa template ssl name

By default, no SSL templates exist.

3.       (Optional.) Specify the destination IP address of the operation.

destination ip ip-address

By default, no destination IP address is configured.

4.       (Optional.) Specify the destination port number for the operation.

destination port port-number

By default, the destination port number is not specified.

5.       (Optional.) Specify the source IP address for the probe packets.

source ip ip-address

By default, the packets take the primary IP address of the output interface as their source IP address.

The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.

6.       Specify an SSL client policy.

ssl-client-policy policy-name

By default, no SSL client policy is specified.
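For example, the following commands sketch an SSL template. The template name tplt1, the address 10.2.2.2, port 9000, and the SSL client policy name policy1 are illustrative, and the view prompts are abbreviated; the SSL client policy must already exist on the device.

```
<Sysname> system-view
[Sysname] nqa template ssl tplt1
[Sysname-nqatplt-ssl-tplt1] destination ip 10.2.2.2
[Sysname-nqatplt-ssl-tplt1] destination port 9000
[Sysname-nqatplt-ssl-tplt1] ssl-client-policy policy1
```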

 

Configuring optional parameters for the NQA template

Unless otherwise specified, the following optional parameters apply to all types of NQA templates.

The parameter settings take effect only on the current NQA template.

To configure optional parameters for an NQA template:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Create an NQA template and enter its view.

nqa template { dns | ftp | http | https | icmp | ssl | tcp | tcphalfopen | udp } name

By default, no NQA templates exist.

3.       Configure a description.

description text

By default, no description is configured.

4.       Set the interval at which the NQA operation repeats.

frequency interval

The default setting is 5000 milliseconds.

If the operation is not completed when the interval expires, the next operation does not start.

5.       Set the probe timeout time.

probe timeout timeout

The default setting is 3000 milliseconds.

6.       Set the TTL for probe packets.

ttl value

The default setting is 20.

7.       Set the ToS value in the IP header of probe packets.

tos value

The default setting is 0.

8.       Specify the VPN instance where the operation is performed.

vpn-instance vpn-instance-name

By default, the operation is performed on the public network.

9.       Set the number of consecutive successful probes to determine a successful operation event.

reaction trigger probe-pass count

The default setting is 3.

If the number of consecutive successful probes for an NQA operation is reached, the NQA client notifies the feature that uses the template of the successful operation event.

10.     Set the number of consecutive probe failures to determine an operation failure.

reaction trigger probe-fail count

The default setting is 3.

If the number of consecutive probe failures for an NQA operation is reached, the NQA client notifies the feature that uses the NQA template of the operation failure.
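As an illustration of the optional parameters above, the following commands tune an ICMP template to repeat every 10000 milliseconds, time out probes after 1000 milliseconds, and report a failure only after five consecutive failed probes. The template name tplt1 and the values are illustrative, and the view prompts are abbreviated.

```
<Sysname> system-view
[Sysname] nqa template icmp tplt1
[Sysname-nqatplt-icmp-tplt1] frequency 10000
[Sysname-nqatplt-icmp-tplt1] probe timeout 1000
[Sysname-nqatplt-icmp-tplt1] reaction trigger probe-fail 5
```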

 

Displaying and maintaining NQA

Execute display commands in any view.

 

Task

Command

Display history records of NQA operations.

display nqa history [ admin-name operation-tag ]

Display the current monitoring results of reaction entries.

display nqa reaction counters [ admin-name operation-tag [ item-number ] ]

Display the most recent result of the NQA operation.

display nqa result [ admin-name operation-tag ]

Display NQA statistics.

display nqa statistics [ admin-name operation-tag ]

Display NQA server status.

display nqa server status

 

NQA configuration examples

ICMP echo operation configuration example

Network requirements

As shown in Figure 41, configure an ICMP echo operation on the NQA client (Device A) to test the round-trip time to Device B. The next hop of Device A is Device C.

Figure 41 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 41. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create an ICMP echo operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type icmp-echo

# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.

[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.2.2.2

# Specify 10.1.1.2 as the next hop. The ICMP echo requests are sent through Device C to Device B.

[DeviceA-nqa-admin-test1-icmp-echo] next-hop ip 10.1.1.2

# Configure the ICMP echo operation to perform 10 probes.

[DeviceA-nqa-admin-test1-icmp-echo] probe count 10

# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.

[DeviceA-nqa-admin-test1-icmp-echo] probe timeout 500

# Configure the ICMP echo operation to repeat every 5000 milliseconds.

[DeviceA-nqa-admin-test1-icmp-echo] frequency 5000

# Enable saving history records.

[DeviceA-nqa-admin-test1-icmp-echo] history-record enable

# Set the maximum number of history records to 10.

[DeviceA-nqa-admin-test1-icmp-echo] history-record number 10

[DeviceA-nqa-admin-test1-icmp-echo] quit

# Start the ICMP echo operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the ICMP echo operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the ICMP echo operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 10             Receive response times: 10

    Min/Max/Average round trip time: 2/5/3

    Square-Sum of round trip time: 96

    Last succeeded probe time: 2011-08-23 15:00:01.2

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the ICMP echo operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  370        3            Succeeded        2011-08-23 15:00:01.2

  369        3            Succeeded        2011-08-23 15:00:01.2

  368        3            Succeeded        2011-08-23 15:00:01.2

  367        5            Succeeded        2011-08-23 15:00:01.2

  366        3            Succeeded        2011-08-23 15:00:01.2

  365        3            Succeeded        2011-08-23 15:00:01.2

  364        3            Succeeded        2011-08-23 15:00:01.1

  363        2            Succeeded        2011-08-23 15:00:01.1

  362        3            Succeeded        2011-08-23 15:00:01.1

  361        2            Succeeded        2011-08-23 15:00:01.1

The output shows that the packets sent by Device A can reach Device B through Device C. No packet loss occurs during the operation. The minimum, maximum, and average round-trip times are 2, 5, and 3 milliseconds, respectively.

ICMP jitter operation configuration example

Network requirements

As shown in Figure 42, configure an ICMP jitter operation to test the jitter between Device A and Device B.

Figure 42 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 42. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device A:

# Create an ICMP jitter operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type icmp-jitter

# Specify 10.2.2.2 as the destination address for the operation.

[DeviceA-nqa-admin-test1-icmp-jitter] destination ip 10.2.2.2

# Configure the operation to repeat every 1000 milliseconds.

[DeviceA-nqa-admin-test1-icmp-jitter] frequency 1000

[DeviceA-nqa-admin-test1-icmp-jitter] quit

# Start the ICMP jitter operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the ICMP jitter operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the ICMP jitter operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 10             Receive response times: 10

    Min/Max/Average round trip time: 1/2/1

    Square-Sum of round trip time: 13

    Last packet received time: 2015-03-09 17:40:29.8

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

    Packets out of sequence: 0

    Packets arrived late: 0

  ICMP-jitter results:

   RTT number: 10

    Min positive SD: 0                     Min positive DS: 0

    Max positive SD: 0                     Max positive DS: 0

    Positive SD number: 0                  Positive DS number: 0

    Positive SD sum: 0                     Positive DS sum: 0

    Positive SD average: 0                 Positive DS average: 0

    Positive SD square-sum: 0              Positive DS square-sum: 0

    Min negative SD: 1                     Min negative DS: 2

    Max negative SD: 1                     Max negative DS: 2

    Negative SD number: 1                  Negative DS number: 1

    Negative SD sum: 1                     Negative DS sum: 2

    Negative SD average: 1                 Negative DS average: 2

    Negative SD square-sum: 1              Negative DS square-sum: 4

  One way results:

    Max SD delay: 1                        Max DS delay: 2

    Min SD delay: 1                        Min DS delay: 2

    Number of SD delay: 1                  Number of DS delay: 1

    Sum of SD delay: 1                     Sum of DS delay: 2

    Square-Sum of SD delay: 1              Square-Sum of DS delay: 4

    Lost packets for unknown reason: 0

# Display the statistics of the ICMP jitter operation.

[DeviceA] display nqa statistics admin test1

NQA entry (admin admin, tag test1) test statistics:

  NO. : 1

    Start time: 2015-03-09 17:42:10.7

    Life time: 156 seconds

    Send operation times: 1560           Receive response times: 1560

    Min/Max/Average round trip time: 1/2/1

    Square-Sum of round trip time: 1563

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

    Packets out of sequence: 0

    Packets arrived late: 0

  ICMP-jitter results:

   RTT number: 1560

    Min positive SD: 1                     Min positive DS: 1

    Max positive SD: 1                     Max positive DS: 2

    Positive SD number: 18                 Positive DS number: 46

    Positive SD sum: 18                    Positive DS sum: 49

    Positive SD average: 1                 Positive DS average: 1

    Positive SD square-sum: 18             Positive DS square-sum: 55

    Min negative SD: 1                     Min negative DS: 1

    Max negative SD: 1                     Max negative DS: 2

    Negative SD number: 24                 Negative DS number: 57

    Negative SD sum: 24                    Negative DS sum: 58

    Negative SD average: 1                 Negative DS average: 1

    Negative SD square-sum: 24             Negative DS square-sum: 60

  One way results:

    Max SD delay: 1                        Max DS delay: 2

    Min SD delay: 1                        Min DS delay: 1

    Number of SD delay: 4                  Number of DS delay: 4

    Sum of SD delay: 4                     Sum of DS delay: 5

    Square-Sum of SD delay: 4              Square-Sum of DS delay: 7

    Lost packets for unknown reason: 0

DHCP operation configuration example

Network requirements

As shown in Figure 43, configure a DHCP operation to test the time required for Switch A to obtain an IP address from the DHCP server (Switch B).

Figure 43 Network diagram

 

Configuration procedure

# Create a DHCP operation.

<SwitchA> system-view

[SwitchA] nqa entry admin test1

[SwitchA-nqa-admin-test1] type dhcp

# Specify the DHCP server address 10.1.1.2 as the destination address.

[SwitchA-nqa-admin-test1-dhcp] destination ip 10.1.1.2

# Enable the saving of history records.

[SwitchA-nqa-admin-test1-dhcp] history-record enable

[SwitchA-nqa-admin-test1-dhcp] quit

# Start the DHCP operation.

[SwitchA] nqa schedule admin test1 start-time now lifetime forever

# After the DHCP operation runs for a period of time, stop the operation.

[SwitchA] undo nqa schedule admin test1

# Display the most recent result of the DHCP operation.

[SwitchA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 512/512/512

    Square-Sum of round trip time: 262144

    Last succeeded probe time: 2011-11-22 09:56:03.2

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the DHCP operation.

[SwitchA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          512          Succeeded        2011-11-22 09:56:03.2

The output shows that it took Switch A 512 milliseconds to obtain an IP address from the DHCP server.

DNS operation configuration example

Network requirements

As shown in Figure 44, configure a DNS operation to test whether Device A can perform address resolution through the DNS server and test the resolution time.

Figure 44 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 44. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create a DNS operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type dns

# Specify the IP address of the DNS server 10.2.2.2 as the destination address.

[DeviceA-nqa-admin-test1-dns] destination ip 10.2.2.2

# Specify host.com as the domain name to be translated.

[DeviceA-nqa-admin-test1-dns] resolve-target host.com

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-dns] history-record enable

[DeviceA-nqa-admin-test1-dns] quit

# Start the DNS operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the DNS operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the DNS operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 62/62/62

    Square-Sum of round trip time: 3844

    Last succeeded probe time: 2011-11-10 10:49:37.3

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the DNS operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          62           Succeeded        2011-11-10 10:49:37.3

The output shows that it took Device A 62 milliseconds to translate domain name host.com into an IP address.

FTP operation configuration example

Network requirements

As shown in Figure 45, configure an FTP operation to test the time required for Device A to upload a file to the FTP server. The login username and password are admin and systemtest, respectively. The file to be transferred to the FTP server is config.txt.

Figure 45 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 45. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create an FTP operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type ftp

# Specify the URL of the FTP server.

[DeviceA-nqa-admin-test1-ftp] url ftp://10.2.2.2

# Specify 10.1.1.1 as the source IP address.

[DeviceA-nqa-admin-test1-ftp] source ip 10.1.1.1

# Configure the device to upload file config.txt to the FTP server.

[DeviceA-nqa-admin-test1-ftp] operation put

[DeviceA-nqa-admin-test1-ftp] filename config.txt

# Set the username to admin for the FTP operation.

[DeviceA-nqa-admin-test1-ftp] username admin

# Set the password to systemtest for the FTP operation.

[DeviceA-nqa-admin-test1-ftp] password simple systemtest

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-ftp] history-record enable

[DeviceA-nqa-admin-test1-ftp] quit

# Start the FTP operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the FTP operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the FTP operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 173/173/173

    Square-Sum of round trip time: 29929

    Last succeeded probe time: 2011-11-22 10:07:28.6

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to disconnect: 0

    Failures due to no connection: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the FTP operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          173          Succeeded        2011-11-22 10:07:28.6

The output shows that it took Device A 173 milliseconds to upload a file to the FTP server.

HTTP operation configuration example

Network requirements

As shown in Figure 46, configure an HTTP operation on the NQA client to test the time required to obtain data from the HTTP server.

Figure 46 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 46. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create an HTTP operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type http

# Specify the URL of the HTTP server.

[DeviceA-nqa-admin-test1-http] url http://10.2.2.2/index.htm

# Configure the HTTP operation to get data from the HTTP server.

[DeviceA-nqa-admin-test1-http] operation get

# Configure the operation to use HTTP version 1.0.

[DeviceA-nqa-admin-test1-http] version v1.0

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-http] history-record enable

[DeviceA-nqa-admin-test1-http] quit

# Start the HTTP operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the HTTP operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the HTTP operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 64/64/64

    Square-Sum of round trip time: 4096

    Last succeeded probe time: 2011-11-22 10:12:47.9

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to disconnect: 0

    Failures due to no connection: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the HTTP operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          64           Succeeded        2011-11-22 10:12:47.9

The output shows that it took Device A 64 milliseconds to obtain data from the HTTP server.

UDP jitter operation configuration example

Network requirements

As shown in Figure 47, configure a UDP jitter operation to test the jitter, delay, and round-trip time between Device A and Device B.

Figure 47 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 47. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device B:

# Enable the NQA server.

<DeviceB> system-view

[DeviceB] nqa server enable

# Configure a listening service to listen on the IP address 10.2.2.2 and UDP port 9000.

[DeviceB] nqa server udp-echo 10.2.2.2 9000

4.        Configure Device A:

# Create a UDP jitter operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type udp-jitter

# Specify 10.2.2.2 as the destination address of the operation.

[DeviceA-nqa-admin-test1-udp-jitter] destination ip 10.2.2.2

# Set the destination port number to 9000.

[DeviceA-nqa-admin-test1-udp-jitter] destination port 9000

# Configure the operation to repeat every 1000 milliseconds.

[DeviceA-nqa-admin-test1-udp-jitter] frequency 1000

[DeviceA-nqa-admin-test1-udp-jitter] quit

# Start the UDP jitter operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the UDP jitter operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the UDP jitter operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 10             Receive response times: 10

    Min/Max/Average round trip time: 15/32/17

    Square-Sum of round trip time: 3235

    Last packet received time: 2011-05-29 13:56:17.6

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

    Packets out of sequence: 0

    Packets arrived late: 0

  UDP-jitter results:

    RTT number: 10

    Min positive SD: 4                     Min positive DS: 1

    Max positive SD: 21                    Max positive DS: 28

    Positive SD number: 5                  Positive DS number: 4

    Positive SD sum: 52                    Positive DS sum: 38

    Positive SD average: 10                Positive DS average: 10

    Positive SD square-sum: 754            Positive DS square-sum: 460

    Min negative SD: 1                     Min negative DS: 6

    Max negative SD: 13                    Max negative DS: 22

    Negative SD number: 4                  Negative DS number: 5

    Negative SD sum: 38                    Negative DS sum: 52

    Negative SD average: 10                Negative DS average: 10

    Negative SD square-sum: 460            Negative DS square-sum: 754

  One way results:

    Max SD delay: 15                       Max DS delay: 16

    Min SD delay: 7                        Min DS delay: 7

    Number of SD delay: 10                 Number of DS delay: 10

    Sum of SD delay: 78                    Sum of DS delay: 85

    Square-Sum of SD delay: 666            Square-Sum of DS delay: 787

    SD lost packets: 0                   DS lost packets: 0

    Lost packets for unknown reason: 0

# Display the statistics of the UDP jitter operation.

[DeviceA] display nqa statistics admin test1

NQA entry (admin admin, tag test1) test statistics:

  NO. : 1

    Start time: 2011-05-29 13:56:14.0

    Life time: 47 seconds

    Send operation times: 410            Receive response times: 410

    Min/Max/Average round trip time: 1/93/19

    Square-Sum of round trip time: 206176

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

    Packets out of sequence: 0

    Packets arrived late: 0

  UDP-jitter results:

    RTT number: 410

    Min positive SD: 3                     Min positive DS: 1

    Max positive SD: 30                    Max positive DS: 79

    Positive SD number: 186                Positive DS number: 158

    Positive SD sum: 2602                  Positive DS sum: 1928

    Positive SD average: 13                Positive DS average: 12

    Positive SD square-sum: 45304          Positive DS square-sum: 31682

    Min negative SD: 1                     Min negative DS: 1

    Max negative SD: 30                    Max negative DS: 78

    Negative SD number: 181                Negative DS number: 209

    Negative SD sum: 181                   Negative DS sum: 209

    Negative SD average: 13                Negative DS average: 14

    Negative SD square-sum: 46994          Negative DS square-sum: 3030

  One way results:

    Max SD delay: 46                       Max DS delay: 46

    Min SD delay: 7                        Min DS delay: 7

    Number of SD delay: 410                Number of DS delay: 410

    Sum of SD delay: 3705                  Sum of DS delay: 3891

    Square-Sum of SD delay: 45987          Square-Sum of DS delay: 49393

    SD lost packets: 0                   DS lost packets: 0

    Lost packets for unknown reason: 0
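In the UDP-jitter results, SD is the source-to-destination direction and DS is destination-to-source; the positive and negative jitter counters come from the sign of the difference between consecutive delay samples in each direction. A sketch of that bookkeeping, using made-up delay samples rather than values from the output above:

```python
# How the positive/negative jitter counters can be derived from
# consecutive one-way delay samples. The sample delays below are
# illustrative only, not taken from the NQA output above.

def jitter_stats(delays):
    """Split consecutive delay differences into positive and negative
    jitter, returning (count, sum, square_sum) for each sign."""
    diffs = [b - a for a, b in zip(delays, delays[1:])]
    pos = [d for d in diffs if d > 0]
    neg = [-d for d in diffs if d < 0]   # magnitudes, as NQA reports them
    stat = lambda xs: (len(xs), sum(xs), sum(x * x for x in xs))
    return stat(pos), stat(neg)

pos, neg = jitter_stats([10, 14, 9, 9, 16])
print(pos, neg)  # two positive diffs (+4, +7), one negative (5)
```

Zero differences count toward neither sign, which is why the positive and negative counts need not add up to the RTT number minus one.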

SNMP operation configuration example

Network requirements

As shown in Figure 48, configure an SNMP operation to test the time required for the NQA client to receive a response from the SNMP agent.

Figure 48 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 48. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure the SNMP agent (Device B):

# Set the SNMP version to all.

<DeviceB> system-view

[DeviceB] snmp-agent sys-info version all

# Set the read community to public.

[DeviceB] snmp-agent community read public

# Set the write community to private.

[DeviceB] snmp-agent community write private

4.        Configure Device A:

# Create an SNMP operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type snmp

# Specify 10.2.2.2 as the destination IP address of the SNMP operation.

[DeviceA-nqa-admin-test1-snmp] destination ip 10.2.2.2

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-snmp] history-record enable

[DeviceA-nqa-admin-test1-snmp] quit

# Start the SNMP operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the SNMP operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the SNMP operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 50/50/50

    Square-Sum of round trip time: 2500

    Last succeeded probe time: 2011-11-22 10:24:41.1

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the SNMP operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          50           Succeeded        2011-11-22 10:24:41.1

The output shows that it took Device A 50 milliseconds to receive a response from the SNMP agent.

TCP operation configuration example

Network requirements

As shown in Figure 49, configure a TCP operation to test the time required for Device A to establish a TCP connection with Device B.

Figure 49 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 49. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device B:

# Enable the NQA server.

<DeviceB> system-view

[DeviceB] nqa server enable

# Configure a listening service to listen on the IP address 10.2.2.2 and TCP port 9000.

[DeviceB] nqa server tcp-connect 10.2.2.2 9000

4.        Configure Device A:

# Create a TCP operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type tcp

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqa-admin-test1-tcp] destination ip 10.2.2.2

# Set the destination port number to 9000.

[DeviceA-nqa-admin-test1-tcp] destination port 9000

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-tcp] history-record enable

[DeviceA-nqa-admin-test1-tcp] quit

# Start the TCP operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the TCP operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the TCP operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 13/13/13

    Square-Sum of round trip time: 169

    Last succeeded probe time: 2011-11-22 10:27:25.1

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to disconnect: 0

    Failures due to no connection: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the TCP operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          13           Succeeded        2011-11-22 10:27:25.1

The output shows that it took Device A 13 milliseconds to establish a TCP connection to port 9000 on the NQA server.

UDP echo operation configuration example

Network requirements

As shown in Figure 50, configure a UDP echo operation on the NQA client to test the round-trip time to Device B. The destination port number is 8000.

Figure 50 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 50. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device B:

# Enable the NQA server.

<DeviceB> system-view

[DeviceB] nqa server enable

# Configure a listening service to listen on the IP address 10.2.2.2 and UDP port 8000.

[DeviceB] nqa server udp-echo 10.2.2.2 8000

4.        Configure Device A:

# Create a UDP echo operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type udp-echo

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqa-admin-test1-udp-echo] destination ip 10.2.2.2

# Set the destination port number to 8000.

[DeviceA-nqa-admin-test1-udp-echo] destination port 8000

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-udp-echo] history-record enable

[DeviceA-nqa-admin-test1-udp-echo] quit

# Start the UDP echo operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the UDP echo operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the UDP echo operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 25/25/25

    Square-Sum of round trip time: 625

    Last succeeded probe time: 2011-11-22 10:36:17.9

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the UDP echo operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          25           Succeeded        2011-11-22 10:36:17.9

The output shows that the round-trip time between Device A and port 8000 on Device B is 25 milliseconds.

UDP tracert operation configuration example

Network requirements

As shown in Figure 51, configure a UDP tracert operation to determine the routing path from Device A to Device B.

Figure 51 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 51. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Execute the ip ttl-expires enable command on the intermediate devices and execute the ip unreachables enable command on Device B.

4.        Configure Device A:

# Create a UDP tracert operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type udp-tracert

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqa-admin-test1-udp-tracert] destination ip 10.2.2.2

# Set the destination port number to 33434.

[DeviceA-nqa-admin-test1-udp-tracert] destination port 33434

# Configure Device A to perform three probes per hop.

[DeviceA-nqa-admin-test1-udp-tracert] probe count 3

# Set the probe timeout time to 500 milliseconds.

[DeviceA-nqa-admin-test1-udp-tracert] probe timeout 500

# Configure the UDP tracert operation to repeat every 5000 milliseconds.

[DeviceA-nqa-admin-test1-udp-tracert] frequency 5000

# Specify GigabitEthernet 1/0/1 as the output interface for UDP packets.

[DeviceA-nqa-admin-test1-udp-tracert] out interface gigabitethernet 1/0/1

# Enable the no-fragmentation feature.

[DeviceA-nqa-admin-test1-udp-tracert] no-fragment enable

# Set the maximum number of consecutive probe failures to 6.

[DeviceA-nqa-admin-test1-udp-tracert] max-failure 6

# Set the TTL to 1 for UDP packets in the first round of the UDP tracert operation.

[DeviceA-nqa-admin-test1-udp-tracert] init-ttl 1

# Start the UDP tracert operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the UDP tracert operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the UDP tracert operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 6              Receive response times: 6

    Min/Max/Average round trip time: 1/1/1

    Square-Sum of round trip time: 1

    Last succeeded probe time: 2013-09-09 14:46:06.2

  Extended results:

    Packet loss in test: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

  UDP-tracert results:

    TTL    Hop IP             Time

    1      3.1.1.1            2013-09-09 14:46:03.2

    2      10.2.2.2           2013-09-09 14:46:06.2

# Display the history records of the UDP tracert operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

Index      TTL  Response  Hop IP           Status          Time

1          2    2         10.2.2.2         Succeeded       2013-09-09 14:46:06.2

1          2    1         10.2.2.2         Succeeded       2013-09-09 14:46:05.2

1          2    2         10.2.2.2         Succeeded       2013-09-09 14:46:04.2

1          1    1         3.1.1.1          Succeeded       2013-09-09 14:46:03.2

1          1    2         3.1.1.1          Succeeded       2013-09-09 14:46:02.2

1          1    1         3.1.1.1          Succeeded       2013-09-09 14:46:01.2
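The history records above reflect the round structure of UDP tracert: the operation starts at the configured initial TTL, sends the configured number of probes per TTL, and raises the TTL until the destination replies. A simulation of that round logic (no packets are sent; the TTL-to-hop mapping is a stand-in for the two-hop path in Figure 51):

```python
# Simulation of the UDP tracert round logic: start at init-ttl, send
# `probes` probes per TTL, and stop once the destination answers.
# PATH is an assumed stand-in for the two-hop topology in Figure 51.

PATH = {1: "3.1.1.1", 2: "10.2.2.2"}   # TTL -> hop that replies
DEST = "10.2.2.2"

def udp_tracert(init_ttl=1, probes=3, max_ttl=30):
    hops = []
    for ttl in range(init_ttl, max_ttl + 1):
        hop = PATH.get(ttl)              # hop sending the TTL-expired or
        hops.append((ttl, hop, probes))  # port-unreachable reply
        if hop == DEST:
            break
    return hops

print(udp_tracert())  # [(1, '3.1.1.1', 3), (2, '10.2.2.2', 3)]
```

This is why the history table shows three probe records at TTL 1 (hop 3.1.1.1) and three at TTL 2 (the destination 10.2.2.2).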

Voice operation configuration example

Network requirements

As shown in Figure 52, configure a voice operation to test the jitter, delay, MOS, and ICPIF between Device A and Device B.

Figure 52 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 52. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device B:

# Enable the NQA server.

<DeviceB> system-view

[DeviceB] nqa server enable

# Configure a listening service to listen on IP address 10.2.2.2 and UDP port 9000.

[DeviceB] nqa server udp-echo 10.2.2.2 9000

4.        Configure Device A:

# Create a voice operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type voice

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqa-admin-test1-voice] destination ip 10.2.2.2

# Set the destination port number to 9000.

[DeviceA-nqa-admin-test1-voice] destination port 9000

[DeviceA-nqa-admin-test1-voice] quit

# Start the voice operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the voice operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the voice operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1000           Receive response times: 1000

    Min/Max/Average round trip time: 31/1328/33

    Square-Sum of round trip time: 2844813

    Last packet received time: 2011-06-13 09:49:31.1

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

    Packets out of sequence: 0

    Packets arrived late: 0

  Voice results:

    RTT number: 1000

    Min positive SD: 1                     Min positive DS: 1

    Max positive SD: 204                   Max positive DS: 1297

    Positive SD number: 257                Positive DS number: 259

    Positive SD sum: 759                   Positive DS sum: 1797

    Positive SD average: 2                 Positive DS average: 6

    Positive SD square-sum: 54127          Positive DS square-sum: 1691967

    Min negative SD: 1                     Min negative DS: 1

    Max negative SD: 203                   Max negative DS: 1297

    Negative SD number: 255                Negative DS number: 259

    Negative SD sum: 759                   Negative DS sum: 1796

    Negative SD average: 2                 Negative DS average: 6

    Negative SD square-sum: 53655          Negative DS square-sum: 1691776

  One way results:

    Max SD delay: 343                      Max DS delay: 985

    Min SD delay: 343                      Min DS delay: 985

    Number of SD delay: 1                  Number of DS delay: 1

    Sum of SD delay: 343                   Sum of DS delay: 985

    Square-Sum of SD delay: 117649         Square-Sum of DS delay: 970225

    SD lost packets: 0                   DS lost packets: 0

    Lost packets for unknown reason: 0

  Voice scores:

    MOS value: 4.38                        ICPIF value: 0

# Display the statistics of the voice operation.

[DeviceA] display nqa statistics admin test1

NQA entry (admin admin, tag test1) test statistics:

  NO. : 1

 

    Start time: 2011-06-13 09:45:37.8

    Life time: 331 seconds

    Send operation times: 4000           Receive response times: 4000

    Min/Max/Average round trip time: 15/1328/32

    Square-Sum of round trip time: 7160528

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

    Packets out of sequence: 0

    Packets arrived late: 0

  Voice results:

    RTT number: 4000

    Min positive SD: 1                     Min positive DS: 1

    Max positive SD: 360                   Max positive DS: 1297

    Positive SD number: 1030               Positive DS number: 1024

    Positive SD sum: 4363                  Positive DS sum: 5423

    Positive SD average: 4                 Positive DS average: 5

    Positive SD square-sum: 497725         Positive DS square-sum: 2254957

    Min negative SD: 1                     Min negative DS: 1

    Max negative SD: 360                   Max negative DS: 1297

    Negative SD number: 1028               Negative DS number: 1022

    Negative SD sum: 1028                  Negative DS sum: 1022

    Negative SD average: 4                 Negative DS average: 5

    Negative SD square-sum: 495901         Negative DS square-sum: 5419

  One way results:

    Max SD delay: 359                      Max DS delay: 985

    Min SD delay: 0                        Min DS delay: 0

    Number of SD delay: 4                  Number of DS delay: 4

    Sum of SD delay: 1390                  Sum of DS delay: 1079

    Square-Sum of SD delay: 483202         Square-Sum of DS delay: 973651

    SD lost packets: 0                   DS lost packets: 0

    Lost packets for unknown reason: 0

  Voice scores:

    Max MOS value: 4.38                    Min MOS value: 4.38

    Max ICPIF value: 0                     Min ICPIF value: 0
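The MOS and ICPIF scores are voice-quality estimates derived from the measured delay and loss. As a rough orientation only: in the ITU-T E-model, impairments lower a transmission rating factor R, and R maps to MOS by a standard formula. The sketch below uses the generic G.107 R-to-MOS mapping; it is not necessarily the exact computation the NQA voice operation performs, and the R value of 93 is an assumed example:

```python
# Illustrative R-factor to MOS mapping from the ITU-T G.107 E-model.
# This is a generic approximation, not the device's exact algorithm.

def r_to_mos(r):
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# A low ICPIF (few impairments) corresponds to a high R-factor and
# hence a MOS near the ceiling, consistent with the 4.38 reported above.
print(round(r_to_mos(93), 2))
```

Low ICPIF values therefore indicate little impairment, and MOS values above about 4.0 indicate toll-quality voice.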

DLSw operation configuration example

Network requirements

As shown in Figure 53, configure a DLSw operation to test the response time of the DLSw device.

Figure 53 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 53. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create a DLSw operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type dlsw

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqa-admin-test1-dlsw] destination ip 10.2.2.2

# Enable the saving of history records.

[DeviceA-nqa-admin-test1-dlsw] history-record enable

[DeviceA-nqa-admin-test1-dlsw] quit

# Start the DLSw operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the DLSw operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the DLSw operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

    Send operation times: 1              Receive response times: 1

    Min/Max/Average round trip time: 19/19/19

    Square-Sum of round trip time: 361

    Last succeeded probe time: 2011-11-22 10:40:27.7

  Extended results:

    Packet loss ratio: 0%

    Failures due to timeout: 0

    Failures due to disconnect: 0

    Failures due to no connection: 0

    Failures due to internal error: 0

    Failures due to other errors: 0

# Display the history records of the DLSw operation.

[DeviceA] display nqa history admin test1

NQA entry (admin admin, tag test1) history records:

  Index      Response     Status           Time

  1          19           Succeeded        2011-11-22 10:40:27.7

The output shows that the response time of the DLSw device is 19 milliseconds.

Path jitter operation configuration example

Network requirements

As shown in Figure 54, configure a path jitter operation to test the round-trip time and jitter from Device A to Device B and Device C.

Figure 54 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 54. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Execute the ip ttl-expires enable command on Device B and execute the ip unreachables enable command on Device C.

# Create a path jitter operation.

<DeviceA> system-view

[DeviceA] nqa entry admin test1

[DeviceA-nqa-admin-test1] type path-jitter

# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.

[DeviceA-nqa-admin-test1-path-jitter] destination ip 10.2.2.2

# Configure the path jitter operation to repeat every 10000 milliseconds.

[DeviceA-nqa-admin-test1-path-jitter] frequency 10000

[DeviceA-nqa-admin-test1-path-jitter] quit

# Start the path jitter operation.

[DeviceA] nqa schedule admin test1 start-time now lifetime forever

# After the path jitter operation runs for a period of time, stop the operation.

[DeviceA] undo nqa schedule admin test1

# Display the most recent result of the path jitter operation.

[DeviceA] display nqa result admin test1

NQA entry (admin admin, tag test1) test results:

  Hop IP 10.1.1.2

    Basic Results

      Send operation times: 10             Receive response times: 10

      Min/Max/Average round trip time: 9/21/14

      Square-Sum of round trip time: 2419

    Extended Results

      Failures due to timeout: 0

      Failures due to internal error: 0

      Failures due to other errors: 0

      Packets out of sequence: 0

      Packets arrived late: 0

    Path-Jitter Results

      Jitter number: 9

        Min/Max/Average jitter: 1/10/4

      Positive jitter number: 6

        Min/Max/Average positive jitter: 1/9/4

        Sum/Square-Sum positive jitter: 25/173

      Negative jitter number: 3

        Min/Max/Average negative jitter: 2/10/6

        Sum/Square-Sum negative jitter: 19/153

 

  Hop IP 10.2.2.2

    Basic Results

      Send operation times: 10             Receive response times: 10

      Min/Max/Average round trip time: 15/40/28

      Square-Sum of round trip time: 4493

    Extended Results

      Failures due to timeout: 0

      Failures due to internal error: 0

      Failures due to other errors: 0

      Packets out of sequence: 0

      Packets arrived late: 0

    Path-Jitter Results

      Jitter number: 9

        Min/Max/Average jitter: 1/10/4

      Positive jitter number: 6

        Min/Max/Average positive jitter: 1/9/4

        Sum/Square-Sum positive jitter: 25/173

      Negative jitter number: 3

        Min/Max/Average negative jitter: 2/10/6

        Sum/Square-Sum negative jitter: 19/153
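For each hop, path jitter is the difference between consecutive RTT samples to that hop, so ten probes yield nine jitter samples, matching the "Jitter number: 9" field above. A sketch of that calculation with illustrative RTT values (not taken from the output):

```python
# Per-hop path jitter: differences between consecutive RTT samples.
# Ten probes give nine jitter samples ("Jitter number: 9" above).
# The RTT values below are illustrative only.

def path_jitter(rtts):
    """Return (jitter_count, min, max, average) of jitter magnitudes."""
    diffs = [b - a for a, b in zip(rtts, rtts[1:])]
    mags = [abs(d) for d in diffs]
    return len(diffs), min(mags), max(mags), sum(mags) // len(mags)

rtts = [14, 15, 13, 17, 9, 14, 12, 16, 15, 14]
print(path_jitter(rtts))
```

The positive and negative jitter breakdowns in the output then classify each difference by its sign, as in the UDP jitter operation.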

NQA collaboration configuration example (on routers)

Network requirements

As shown in Figure 55, configure a static route to Router C on Router A, with Router B as the next hop. Associate the static route with a track entry and an ICMP echo operation to monitor the state of the static route.

Figure 55 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 55. (Details not shown.)

2.        On Router A, configure a static route, and associate the static route with track entry 1.

<RouterA> system-view

[RouterA] ip route-static 10.1.1.2 24 10.2.1.1 track 1

3.        On Router A, configure an ICMP echo operation:

# Create an NQA operation with the administrator name admin and operation tag test1.

[RouterA] nqa entry admin test1

# Set the NQA operation type to ICMP echo.

[RouterA-nqa-admin-test1] type icmp-echo

# Specify 10.2.1.1 as the destination IP address.

[RouterA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1

# Configure the operation to repeat every 100 milliseconds.

[RouterA-nqa-admin-test1-icmp-echo] frequency 100

# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is triggered.

[RouterA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 5 action-type trigger-only

[RouterA-nqa-admin-test1-icmp-echo] quit

# Start the ICMP echo operation.

[RouterA] nqa schedule admin test1 start-time now lifetime forever

4.        On Router A, create track entry 1, and associate it with reaction entry 1 of the NQA operation.

[RouterA] track 1 nqa entry admin test1 reaction 1

Verifying the configuration

# Display information about all the track entries on Router A.

[RouterA] display track all

Track ID: 1

  State: Positive

  Duration: 0 days 0 hours 0 minutes 0 seconds

  Notification delay: Positive 0, Negative 0 (in seconds)

  Tracked object:

    NQA entry: admin test1

    Reaction: 1

# Display brief information about active routes in the routing table on Router A.

[RouterA] display ip routing-table

 

Destinations : 13        Routes : 13

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

0.0.0.0/32          Direct 0    0            127.0.0.1       InLoop0

10.1.1.0/24         Static 60   0            10.2.1.1        GE1/0/1

10.2.1.0/24         Direct 0    0            10.2.1.2        GE1/0/1

10.2.1.0/32         Direct 0    0            10.2.1.2        GE1/0/1

10.2.1.2/32         Direct 0    0            127.0.0.1       InLoop0

10.2.1.255/32       Direct 0    0            10.2.1.2        GE1/0/1

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.0/32        Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

127.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

224.0.0.0/4         Direct 0    0            0.0.0.0         NULL0

224.0.0.0/24        Direct 0    0            0.0.0.0         NULL0

255.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track entry is positive.

# Remove the IP address of GigabitEthernet 1/0/1 on Router B.

<RouterB> system-view

[RouterB] interface gigabitethernet 1/0/1

[RouterB-GigabitEthernet1/0/1] undo ip address

# Display information about all the track entries on Router A.

[RouterA] display track all

Track ID: 1

  State: Negative

  Duration: 0 days 0 hours 0 minutes 0 seconds

  Notification delay: Positive 0, Negative 0 (in seconds)

  Tracked object:

    NQA entry: admin test1

    Reaction: 1

# Display brief information about active routes in the routing table on Router A.

[RouterA] display ip routing-table

 

Destinations : 12        Routes : 12

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

0.0.0.0/32          Direct 0    0            127.0.0.1       InLoop0

10.2.1.0/24         Direct 0    0            10.2.1.2        GE1/0/1

10.2.1.0/32         Direct 0    0            10.2.1.2        GE1/0/1

10.2.1.2/32         Direct 0    0            127.0.0.1       InLoop0

10.2.1.255/32       Direct 0    0            10.2.1.2        GE1/0/1

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.0/32        Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

127.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

224.0.0.0/4         Direct 0    0            0.0.0.0         NULL0

224.0.0.0/24        Direct 0    0            0.0.0.0         NULL0

255.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

The output shows that the static route does not exist, and the status of the track entry is negative.
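The reaction entry drives this behavior: it counts consecutive probe failures, and once the count reaches the configured threshold (5 here), the track entry turns Negative and the associated static route is withdrawn; when probes succeed again, the track entry returns to Positive. A sketch of that consecutive-failure logic (a simplified model, not the device implementation):

```python
# Simplified model of the consecutive-failure reaction logic used for
# NQA/track collaboration: the track state flips to Negative once
# `threshold` probes in a row fail, and back to Positive on a success.

def track_state(probe_results, threshold=5):
    """probe_results: iterable of booleans (True = probe succeeded).
    Returns the final track state, "Positive" or "Negative"."""
    state, failures = "Positive", 0
    for ok in probe_results:
        failures = 0 if ok else failures + 1
        if failures >= threshold:
            state = "Negative"
        elif ok:
            state = "Positive"
    return state

print(track_state([True] * 3 + [False] * 5))  # Negative after 5 misses
```

Because the operation repeats every 100 milliseconds in this example, the route is withdrawn within roughly half a second of the path failing.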

NQA collaboration configuration example

Network requirements

As shown in Figure 56, configure a static route to Switch C on Switch A, with Switch B as the next hop. Associate the static route with a track entry and an ICMP echo operation to monitor the state of the static route.

Figure 56 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 56. (Details not shown.)

2.        On Switch A, configure a static route, and associate the static route with track entry 1.

<SwitchA> system-view

[SwitchA] ip route-static 10.1.1.0 24 10.2.1.1 track 1

3.        On Switch A, configure an ICMP echo operation:

# Create an NQA operation with the administrator name admin and operation tag test1.

[SwitchA] nqa entry admin test1

# Configure the NQA operation type as ICMP echo.

[SwitchA-nqa-admin-test1] type icmp-echo

# Specify 10.2.1.1 as the destination IP address.

[SwitchA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1

# Configure the operation to repeat every 100 milliseconds.

[SwitchA-nqa-admin-test1-icmp-echo] frequency 100

# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is triggered.

[SwitchA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 5 action-type trigger-only

[SwitchA-nqa-admin-test1-icmp-echo] quit

# Start the ICMP operation.

[SwitchA] nqa schedule admin test1 start-time now lifetime forever

4.        On Switch A, create track entry 1, and associate it with reaction entry 1 of the NQA operation.

[SwitchA] track 1 nqa entry admin test1 reaction 1
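The reaction entry created in step 3 triggers only when the probe failures are consecutive: a single success resets the count. The counting logic can be sketched in Python (the function name and the boolean probe representation are illustrative, not part of the device software):

```python
def first_trigger_index(probe_results, threshold=5):
    """Return the index of the probe at which the consecutive-failure
    threshold is reached, or None if it never is.

    probe_results: iterable of booleans, True = probe succeeded.
    """
    consecutive_failures = 0
    for i, ok in enumerate(probe_results):
        # A successful probe resets the counter; a failure extends it.
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= threshold:
            return i  # collaboration would be triggered here
    return None

# Four failures interrupted by a success do not trigger; five in a row do.
print(first_trigger_index([False] * 4 + [True] + [False] * 5))  # prints 9
```

This is why transient packet loss does not flip the track entry: only a sustained outage of five probes in a row changes its state.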

Verifying the configuration

# Display information about all the track entries on Switch A.

[SwitchA] display track all

Track ID: 1

  State: Positive

  Duration: 0 days 0 hours 0 minutes 0 seconds

  Notification delay: Positive 0, Negative 0 (in seconds)

  Tracked object:

    NQA entry: admin test1

    Reaction: 1

# Display brief information about active routes in the routing table on Switch A.

[SwitchA] display ip routing-table

 

Destinations : 13        Routes : 13

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

0.0.0.0/32          Direct 0    0            127.0.0.1       InLoop0

10.1.1.0/24         Static 60   0            10.2.1.1        Vlan3

10.2.1.0/24         Direct 0    0            10.2.1.2        Vlan3

10.2.1.0/32         Direct 0    0            10.2.1.2        Vlan3

10.2.1.2/32         Direct 0    0            127.0.0.1       InLoop0

10.2.1.255/32       Direct 0    0            10.2.1.2        Vlan3

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.0/32        Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

127.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

224.0.0.0/4         Direct 0    0            0.0.0.0         NULL0

224.0.0.0/24        Direct 0    0            0.0.0.0         NULL0

255.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track entry is positive.

# Remove the IP address of VLAN-interface 3 on Switch B.

<SwitchB> system-view

[SwitchB] interface vlan-interface 3

[SwitchB-Vlan-interface3] undo ip address

# Display information about all the track entries on Switch A.

[SwitchA] display track all

Track ID: 1

  State: Negative

  Duration: 0 days 0 hours 0 minutes 0 seconds

  Notification delay: Positive 0, Negative 0 (in seconds)

  Tracked object:

    NQA entry: admin test1

    Reaction: 1

# Display brief information about active routes in the routing table on Switch A.

[SwitchA] display ip routing-table

 

Destinations : 12        Routes : 12

 

Destination/Mask    Proto  Pre  Cost         NextHop         Interface

0.0.0.0/32          Direct 0    0            127.0.0.1       InLoop0

10.2.1.0/24         Direct 0    0            10.2.1.2        Vlan3

10.2.1.0/32         Direct 0    0            10.2.1.2        Vlan3

10.2.1.2/32         Direct 0    0            127.0.0.1       InLoop0

10.2.1.255/32       Direct 0    0            10.2.1.2        Vlan3

127.0.0.0/8         Direct 0    0            127.0.0.1       InLoop0

127.0.0.0/32        Direct 0    0            127.0.0.1       InLoop0

127.0.0.1/32        Direct 0    0            127.0.0.1       InLoop0

127.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

224.0.0.0/4         Direct 0    0            0.0.0.0         NULL0

224.0.0.0/24        Direct 0    0            0.0.0.0         NULL0

255.255.255.255/32  Direct 0    0            127.0.0.1       InLoop0

The output shows that the static route does not exist, and the status of the track entry is negative.

ICMP template configuration example

Network requirements

As shown in Figure 57, configure an ICMP template for a feature to perform the ICMP echo operation from Device A to Device B.

Figure 57 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 57. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create ICMP template icmp.

<DeviceA> system-view

[DeviceA] nqa template icmp icmp

# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.

[DeviceA-nqatplt-icmp-icmp] destination ip 10.2.2.2

# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.

[DeviceA-nqatplt-icmp-icmp] probe timeout 500

# Configure the ICMP echo operation to repeat every 3000 milliseconds.

[DeviceA-nqatplt-icmp-icmp] frequency 3000

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-fail 2

DNS template configuration example

Network requirements

As shown in Figure 58, configure a DNS template for a feature to perform the DNS operation. The operation tests whether Device A can perform the address resolution through the DNS server.

Figure 58 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 58. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create DNS template dns.

<DeviceA> system-view

[DeviceA] nqa template dns dns

# Specify the IP address of the DNS server 10.2.2.2 as the destination IP address.

[DeviceA-nqatplt-dns-dns] destination ip 10.2.2.2

# Specify host.com as the domain name to be translated.

[DeviceA-nqatplt-dns-dns] resolve-target host.com

# Set the domain name resolution type to type A.

[DeviceA-nqatplt-dns-dns] resolve-type A

# Specify 3.3.3.3 as the expected IP address.

[DeviceA-nqatplt-dns-dns] expect ip 3.3.3.3

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-dns-dns] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-dns-dns] reaction trigger probe-fail 2

TCP template configuration example

Network requirements

As shown in Figure 59, configure a TCP template for a feature to perform the TCP operation. The operation tests whether Device A can establish a TCP connection to Device B.

Figure 59 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 59. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device B:

# Enable the NQA server.

<DeviceB> system-view

[DeviceB] nqa server enable

# Configure a listening service to listen to the IP address 10.2.2.2 and TCP port 9000.

[DeviceB] nqa server tcp-connect 10.2.2.2 9000

4.        Configure Device A:

# Create TCP template tcp.

<DeviceA> system-view

[DeviceA] nqa template tcp tcp

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqatplt-tcp-tcp] destination ip 10.2.2.2

# Set the destination port number to 9000.

[DeviceA-nqatplt-tcp-tcp] destination port 9000

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-fail 2

TCP half open template configuration example

Network requirements

As shown in Figure 60, configure a TCP half open template for a feature to test whether Device B can provide the TCP service for Device A.

Figure 60 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 60. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device A:

# Create TCP half open template test.

<DeviceA> system-view

[DeviceA] nqa template tcphalfopen test

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqatplt-tcphalfopen-test] destination ip 10.2.2.2

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-fail 2

UDP template configuration example

Network requirements

As shown in Figure 61, configure a UDP template for a feature to perform the UDP operation. The operation tests whether Device A can receive a response from Device B.

Figure 61 Network diagram

 

Configuration procedure

1.        Assign IP addresses to interfaces, as shown in Figure 61. (Details not shown.)

2.        Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

3.        Configure Device B:

# Enable the NQA server.

<DeviceB> system-view

[DeviceB] nqa server enable

# Configure a listening service to listen to the IP address 10.2.2.2 and UDP port 9000.

[DeviceB] nqa server udp-echo 10.2.2.2 9000

4.        Configure Device A:

# Create UDP template udp.

<DeviceA> system-view

[DeviceA] nqa template udp udp

# Specify 10.2.2.2 as the destination IP address.

[DeviceA-nqatplt-udp-udp] destination ip 10.2.2.2

# Set the destination port number to 9000.

[DeviceA-nqatplt-udp-udp] destination port 9000

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-udp-udp] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-udp-udp] reaction trigger probe-fail 2

HTTP template configuration example

Network requirements

As shown in Figure 62, configure an HTTP template for a feature to perform the HTTP operation. The operation tests whether the NQA client can get data from the HTTP server.

Figure 62 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 62. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create the HTTP template http.

<DeviceA> system-view

[DeviceA] nqa template http http

# Specify http://10.2.2.2/index.htm as the URL of the HTTP server.

[DeviceA-nqatplt-http-http] url http://10.2.2.2/index.htm

# Set the HTTP operation type to get.

[DeviceA-nqatplt-http-http] operation get

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-http-http] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-http-http] reaction trigger probe-fail 2

HTTPS template configuration example

Network requirements

As shown in Figure 63, configure an HTTPS template for a feature to test whether the NQA client can get data from the HTTPS server (Device B).

Figure 63 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 63. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Configure an SSL client policy named abc on Device A, and make sure Device A can use the policy to connect to the HTTPS server. (Details not shown.)

# Create the HTTPS template https.

<DeviceA> system-view

[DeviceA] nqa template https https

# Specify https://10.2.2.2/index.htm as the URL of the HTTPS server.

[DeviceA-nqatplt-https-https] url https://10.2.2.2/index.htm

# Specify the SSL client policy abc for the HTTPS template.

[DeviceA-nqatplt-https-https] ssl-client-policy abc

# Set the HTTPS operation type to get (the default HTTPS operation type).

[DeviceA-nqatplt-https-https] operation get

# Set the HTTPS version to 1.0 (the default HTTPS version).

[DeviceA-nqatplt-https-https] version v1.0

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-https-https] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-https-https] reaction trigger probe-fail 2

FTP template configuration example

Network requirements

As shown in Figure 64, configure an FTP template for a feature to perform the FTP operation. The operation tests whether Device A can upload a file to the FTP server. The login username and password are admin and systemtest, respectively. The file to be transferred to the FTP server is config.txt.

Figure 64 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 64. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Create the FTP template ftp.

<DeviceA> system-view

[DeviceA] nqa template ftp ftp

# Specify the URL of the FTP server.

[DeviceA-nqatplt-ftp-ftp] url ftp://10.2.2.2

# Specify 10.1.1.1 as the source IP address.

[DeviceA-nqatplt-ftp-ftp] source ip 10.1.1.1

# Configure the device to upload file config.txt to the FTP server.

[DeviceA-nqatplt-ftp-ftp] operation put

[DeviceA-nqatplt-ftp-ftp] filename config.txt

# Set the username to admin for the FTP server login.

[DeviceA-nqatplt-ftp-ftp] username admin

# Set the password to systemtest for the FTP server login.

[DeviceA-nqatplt-ftp-ftp] password simple systemtest

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-fail 2

SSL template configuration example

Network requirements

As shown in Figure 65, configure an SSL template for a feature to test whether Device A can establish an SSL connection to the SSL server on Device B.

Figure 65 Network diagram

 

Configuration procedure

# Assign IP addresses to interfaces, as shown in Figure 65. (Details not shown.)

# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)

# Configure an SSL client policy named abc on Device A, and make sure Device A can use the policy to connect to the SSL server on Device B. (Details not shown.)

# Create the SSL template ssl.

<DeviceA> system-view

[DeviceA] nqa template ssl ssl

# Set the destination IP address and port number to 10.2.2.2 and 9000, respectively.

[DeviceA-nqatplt-ssl-ssl] destination ip 10.2.2.2

[DeviceA-nqatplt-ssl-ssl] destination port 9000

# Specify the SSL client policy abc for the SSL template.

[DeviceA-nqatplt-ssl-ssl] ssl-client-policy abc

# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.

[DeviceA-nqatplt-ssl-ssl] reaction trigger probe-pass 2

# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.

[DeviceA-nqatplt-ssl-ssl] reaction trigger probe-fail 2

 


Configuring NETCONF

Overview

Network Configuration Protocol (NETCONF) is an XML-based network management protocol with good filtering capabilities. It provides programmable mechanisms to manage and configure network devices. Through NETCONF, you can configure device parameters, retrieve parameter values, and collect statistics.

In NETCONF messages, each data item is contained in a fixed element. This allows devices from the same vendor to provide a consistent access method and a consistent result presentation. For devices from different vendors, XML mapping in NETCONF messages can achieve the same effect. In a network environment that contains devices from multiple vendors, you can therefore develop a NETCONF-based NMS to configure and manage devices in a simple and effective way.

NETCONF structure

NETCONF has four layers: content layer, operations layer, RPC layer, and transport protocol layer.

Table 20 NETCONF layers and XML layers

NETCONF layer

XML layer

Description

Content

Configuration data, status data, and statistics information

The content layer contains a set of managed objects, which can be configuration data, status data, and statistics information. For more information about the operable data, see the NETCONF XML API reference for the switch.

Operations

<get>,<get-config>,<edit-config>…

The operations layer defines a set of base operations invoked as RPC methods with XML-encoded parameters. NETCONF base operations include data retrieval operations, configuration operations, lock operations, and session operations. For the device supported operations, see "Appendix A Supported NETCONF operations."

RPC

<rpc>,<rpc-reply>

The RPC layer provides a simple, transport-independent framing mechanism for encoding RPCs. The <rpc> and <rpc-reply> elements are used to enclose NETCONF requests and responses (data at the operations layer and the content layer).

Transport Protocol

·         In non-FIPS mode:
Console/Telnet/SSH/HTTP/HTTPS/TLS

·         In FIPS mode:
Console/SSH/HTTPS/TLS

The transport protocol layer provides reliable, connection-oriented, serial data links.

In non-FIPS mode, you can log in through Telnet, SSH, or the console port to perform NETCONF operations at the CLI. You can also log in through HTTP or HTTPS to perform NETCONF operations in the Web interface or perform NETCONF-over-SOAP operations.

In FIPS mode, all login methods are the same as in non-FIPS mode except that you cannot use HTTP or Telnet.

 

NETCONF message format


IMPORTANT:

When configuring NETCONF in XML view, you must add the end mark "]]>]]>" at the end of an XML message. Otherwise, the device cannot identify the message.

 

All NETCONF messages are XML-based and comply with RFC 4741. Any incoming NETCONF message must pass XML schema check before it can be processed. If a NETCONF message fails the XML schema check, the device sends an error message to the client.

For information about the NETCONF operations supported by the device and the operable data, see the NETCONF XML API reference for the switch.

The following example shows a NETCONF message for getting all parameters of all interfaces on the device:

<?xml version="1.0" encoding="utf-8"?>

<rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-bulk>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Ifmgr>

          <Interfaces>

                 <Interface/>

          </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get-bulk>

</rpc>
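Before pasting a request like the one above into XML view, it can help to confirm that the message is well-formed XML and to append the "]]>]]>" end mark the device requires. A minimal Python sketch (the helper name is illustrative; the request is the example above, condensed):

```python
import xml.etree.ElementTree as ET

def frame_netconf_message(xml_text):
    """Verify the message is well-formed XML, then append the
    end-of-message mark required when pasting into XML view."""
    ET.fromstring(xml_text)  # raises ParseError if malformed
    return xml_text + "]]>]]>"

request = """<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-bulk>
    <filter type="subtree">
      <top xmlns="http://www.h3c.com/netconf/data:1.0">
        <Ifmgr><Interfaces><Interface/></Interfaces></Ifmgr>
      </top>
    </filter>
  </get-bulk>
</rpc>"""

framed = frame_netconf_message(request)
print(framed.endswith("]]>]]>"))  # True
```

A message that fails this local check would also fail the device's XML schema check and produce an error reply.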

NETCONF over SOAP

All NETCONF over SOAP messages are XML-based and comply with RFC 4741. NETCONF messages are contained in the <Body> element of SOAP messages. NETCONF over SOAP messages also comply with the following rules:

·          SOAP messages must use the SOAP Envelope namespaces.

·          SOAP messages must use the SOAP Encoding namespaces.

·          SOAP messages cannot contain the following information:

-  DTD references.

-  XML processing instructions.

The following example shows a NETCONF over SOAP message for getting all parameters of all interfaces on the device:

<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">

  <env:Header>

    <auth:Authentication env:mustUnderstand="1" xmlns:auth="http://www.h3c.com/netconf/base:1.0">

      <auth:AuthInfo>800207F0120020C</auth:AuthInfo>

    </auth:Authentication>

  </env:Header>

  <env:Body>

    <rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <get-bulk>

        <filter type="subtree">

          <top xmlns="http://www.h3c.com/netconf/data:1.0">

            <Ifmgr>

              <Interfaces>

                     <Interface/>

              </Interfaces>

            </Ifmgr>

          </top>

        </filter>

      </get-bulk>

    </rpc>

  </env:Body>

</env:Envelope>
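The SOAP rules above amount to placing the NETCONF <rpc> inside the <Body> of a SOAP 1.2 Envelope. A minimal Python sketch of that wrapping (it omits the vendor-specific <Authentication> header shown in the example, and the function name is illustrative):

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"

def wrap_in_soap(rpc_xml):
    """Wrap a NETCONF <rpc> element in a SOAP 1.2 Envelope/Body,
    as required for NETCONF-over-SOAP transport."""
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    body.append(ET.fromstring(rpc_xml))  # NETCONF rpc goes in the Body
    return ET.tostring(envelope, encoding="unicode")

rpc = ('<rpc message-id="100" '
       'xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get/></rpc>')
print("Body" in wrap_in_soap(rpc))  # True
```

Note that the serializer emits its own namespace prefixes; what matters is that the Envelope and Body elements use the SOAP Envelope namespace and contain no DTD references or processing instructions.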

How to use NETCONF

You can use NETCONF to manage and configure the device by using the methods in Table 21.

Table 21 NETCONF methods for configuring the device

Configuration tool

Login method

Remarks

CLI

·         Console port

·         SSH

·         Telnet

To implement NETCONF operations, copy valid NETCONF messages to the CLI in XML view.

This method is suitable for R&D and test purposes.

Custom interface

N/A

To use this method, you must enable NETCONF over SOAP.

By default, the device cannot interpret the URLs of custom Web interfaces. For the device to interpret these URLs, you must encode the NETCONF messages sent from a custom interface in SOAP.

 

Protocols and standards

·          RFC 3339, Date and Time on the Internet: Timestamps

·          RFC 4741, NETCONF Configuration Protocol

·          RFC 4742, Using the NETCONF Configuration Protocol over Secure SHell (SSH)

·          RFC 5277, NETCONF Event Notifications

·          RFC 5539, NETCONF over Transport Layer Security (TLS)

·          RFC 6241, Network Configuration Protocol (NETCONF)

FIPS compliance

The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for features, commands, and parameters might differ in FIPS mode (see Security Configuration Guide) and non-FIPS mode.

NETCONF configuration task list

Task at a glance

(Optional.) Enabling NETCONF over SOAP

(Optional.) Enabling NETCONF over SSH

(Optional.) Enabling NETCONF logging

(Required.) Establishing a NETCONF session

(Optional.) Subscribing to event notifications

(Optional.) Locking/unlocking the configuration

(Optional.) Performing the get/get-bulk operation

(Optional.) Performing the get-config/get-bulk-config operation

(Optional.) Performing the edit-config operation

(Optional.) Saving, rolling back, and loading the configuration

(Optional.) Filtering data

(Optional.) Performing CLI operations through NETCONF

(Optional.) Retrieving NETCONF session information

(Optional.) Terminating another NETCONF session

(Optional.) Returning to the CLI

 

Enabling NETCONF over SOAP

NETCONF messages can be encapsulated into SOAP messages and transmitted over HTTP and HTTPS. After you enable NETCONF over SOAP, you can develop a configuration interface to perform NETCONF operations.

To enable NETCONF over SOAP:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NETCONF over SOAP.

·         Enable NETCONF over SOAP over HTTP (not available in FIPS mode):
netconf soap http enable

·         Enable NETCONF over SOAP over HTTPS:
netconf soap https enable

By default, NETCONF over SOAP is disabled.

 

Enabling NETCONF over SSH

This feature allows users to use a client to perform NETCONF operations on the device through a NETCONF over SSH connection.

To enable NETCONF over SSH:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Specify a port to listen for NETCONF over SSH connections.

netconf ssh server port port-number

By default, port 830 listens for NETCONF over SSH connections.

3.       Enable NETCONF over SSH.

netconf ssh server enable

By default, NETCONF over SSH is disabled.

 

Enabling NETCONF logging

NETCONF logging generates logs for NETCONF operations from different NETCONF operation sources.

To enable NETCONF logging:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Enable NETCONF logging.

netconf log source { all | { agent | soap | web } * } { { protocol-operation { all | { action | config | get | set | session | syntax | others } * } } | verbose }

By default, NETCONF logging is disabled.

 

Establishing a NETCONF session

A client must send a hello message to a device and finish capabilities exchange before the device processes other requests from the client.

The device supports a maximum of 32 NETCONF sessions. If the upper limit is reached, new NETCONF users cannot access the device.

Do not configure NETCONF when another user is configuring NETCONF. If multiple users simultaneously configure NETCONF, the configuration result returned to each user might be inconsistent with the user request.

Setting the NETCONF session idle timeout time

IMPORTANT

IMPORTANT:

This feature is available in Release 1138P01 and later versions.

 

If no packets are exchanged between the device and a user within the NETCONF session idle timeout time, the device tears down the session.

To set the NETCONF session idle timeout time:

 

Step

Command

Remarks

1.       Enter system view.

system-view

N/A

2.       Set the NETCONF session idle timeout time.

netconf { soap | agent } idle-timeout minute

By default, the NETCONF session idle timeout time is as follows:

·         10 minutes for NETCONF over SOAP over HTTP and for NETCONF over SOAP over HTTPS sessions.

·         0 minutes for SSH, Telnet, and NETCONF over SSH sessions.

 

Entering XML view

Task

Command

Remarks

Enter XML view.

xml

Available in user view.

 

Exchanging capabilities

After you enter XML view, the client and the device exchange their capabilities before you can perform subsequent operations. The device automatically advertises its NETCONF capabilities to the client in a hello message as follows:

<?xml version="1.0" encoding="UTF-8"?><hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><capabilities><capability>urn:ietf:params:netconf:base:1.1</capability><capability>urn:ietf:params:netconf:writable-running</capability><capability>urn:ietf:params:netconf:capability:notification:1.0</capability><capability>urn:ietf:params:netconf:capability:validate:1.1</capability><capability>urn:ietf:params:netconf:capability:interleave:1.0</capability><capability>urn:ietf:params:netconf:capability:h3c-netconf-ext:1.0</capability></capabilities><session-id>1</session-id></hello>]]>]]>

The <capabilities> parameter represents the capabilities supported by the device.

The <session-id> parameter represents the unique ID assigned to the current session.

After receiving the hello message from the device, copy the following message to notify the device of the capabilities (user-configurable) supported by the client:

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <capabilities>

    <capability>

     capability-set

   </capability>

 </capabilities>

</hello>

The capability-set parameter represents the capabilities supported by the client. Use a pair of <capability> and </capability> tags to enclose a capability.
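On the client side, the device's hello must be parsed to record the session ID and the advertised capabilities before any other request is sent. A minimal Python sketch of that parsing, assuming the raw text arrives with the "]]>]]>" end mark attached (the hello below is a condensed version of the device output above):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def parse_hello(raw):
    """Extract the session ID and capability list from a device hello.
    The ]]>]]> end mark must be stripped before XML parsing."""
    hello = ET.fromstring(raw.rsplit("]]>]]>", 1)[0])
    session_id = hello.findtext(f"{{{NC}}}session-id")
    caps = [c.text for c in hello.iter(f"{{{NC}}}capability")]
    return session_id, caps

raw = ('<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">'
       '<capabilities>'
       '<capability>urn:ietf:params:netconf:base:1.1</capability>'
       '</capabilities><session-id>1</session-id></hello>]]>]]>')
sid, caps = parse_hello(raw)
print(sid, caps)  # 1 ['urn:ietf:params:netconf:base:1.1']
```

The session ID recovered here is the value you would later pass in a <kill-session> request to terminate that session.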

Subscribing to event notifications

After you subscribe to event notifications, the device sends event notifications to the NETCONF client when a subscribed event takes place on the device. The notifications include the code, group, severity, start time, and description of the events. The device supports only log subscription.

A subscription takes effect only on the current session. If the session is terminated, the subscription is automatically canceled.

You can send multiple subscription messages to subscribe to notification of multiple events.

Subscription procedure

# Copy the following message to the client to complete the subscription:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns ="urn:ietf:params:xml:ns:netconf:base:1.0">

    <create-subscription  xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">

         <stream>NETCONF</stream>

         <filter>

             <event xmlns="http://www.h3c.com/netconf/event:1.0">

                 <Code>code</Code>

                 <Group>group</Group>

                 <Severity>severity</Severity>

             </event>

         </filter>

         <startTime>start-time</startTime>

         <stopTime>stop-time</stopTime>

     </create-subscription>

</rpc>

The <stream> parameter represents the event stream type supported by the device. Only NETCONF is supported.

The <event> parameter represents an event to which you have subscribed.

The <code> parameter represents a mnemonic symbol.

The <group> parameter represents the module name.

The <severity> parameter represents the severity level of the event.

The <start-time> parameter represents the start time of the subscription.

The <stop-time> parameter represents the end time of the subscription.

After receiving the subscription request from the client, the device returns a response in the following format if the subscription is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">

    <ok/>

</rpc-reply>

If the subscription fails, the device returns an error message in the following format:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<rpc-error>

   <error-type>error-type</error-type>

   <error-tag>error-tag</error-tag>

   <error-severity>error-severity</error-severity>

   <error-message xml:lang="en">error-message</error-message>

</rpc-error>

</rpc-reply>

For more information about error messages, see RFC 4741.
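A client deciding whether the subscription succeeded only needs to check the reply for <ok/> or <rpc-error>, matching the two reply formats above. A minimal Python sketch (the function name is illustrative):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def subscription_ok(reply_xml):
    """Return True if an <rpc-reply> contains <ok/>; if it contains an
    <rpc-error>, return the <error-message> text instead."""
    reply = ET.fromstring(reply_xml)
    if reply.find(f"{{{NC}}}ok") is not None:
        return True  # subscription succeeded
    err = reply.find(f"{{{NC}}}rpc-error")
    if err is not None:
        return err.findtext(f"{{{NC}}}error-message")
    return False

ok_reply = ('<rpc-reply message-id="101" '
            'xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><ok/></rpc-reply>')
print(subscription_ok(ok_reply))  # True
```

The same check applies to the replies for lock, edit-config, and the other operations in this chapter, since they use the identical <ok/> / <rpc-error> convention.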

Example for subscribing to event notifications

Network requirements

Configure a client to subscribe to all events with no time limitation. After the subscription succeeds, all events on the device are sent to the client until the session between the device and the client is terminated.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Subscribe to all events with no time limitation.

<rpc message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <create-subscription xmlns ="urn:ietf:params:xml:ns:netconf:notification:1.0">

        <stream>NETCONF</stream>

    </create-subscription>

</rpc>

Verifying the configuration

# If the client receives the following response, the subscription is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">

    <ok/>

</rpc-reply>

# If fan 1 on the device encounters problems, the device sends the following text to the client that has subscribed to all events:

<?xml version="1.0" encoding="UTF-8"?>

<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">

    <eventTime>2011-01-04T12:30:46</eventTime>

    <event xmlns="http://www.h3c.com/netconf/event:1.0">

        <Group>DEV</Group>

        <Code>FAN_DIRECTION_NOT_PREFERRED</Code>

        <Slot>6</Slot>

        <Severity>Alert</Severity>

        <context>Fan 1 airflow direction is not preferred on slot 6, please check it.</context>

    </event>

</notification>

# When another client (192.168.100.130) logs in to the device, the device sends a notification to the client that has subscribed to all events:

<?xml version="1.0" encoding="UTF-8"?>

<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">

    <eventTime>2011-01-04T12:30:52</eventTime>

    <event xmlns="http://www.h3c.com/netconf/event:1.0">

        <Group>SHELL</Group>

        <Code>SHELL_LOGIN</Code>

        <Slot>6</Slot>

        <Severity>Notification</Severity>

        <context>VTY logged in from 192.168.100.130.</context>

    </event>

</notification>

Locking/unlocking the configuration

The device supports multiple NETCONF sessions. Multiple users can simultaneously manage and monitor the device using NETCONF. During device configuration and maintenance or network troubleshooting, a user can lock the configuration to prevent other users from changing it. After that, only the user holding the lock can change the configuration, and other users can only read the configuration.

In addition, only the user holding the lock can release the lock. After the lock is released, other users can change the current configuration or lock the configuration. If the session of the user that holds the lock is terminated, the system automatically releases the lock.
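The lock rules above (a single holder at a time, holder-only release, and automatic release when the holder's session terminates) can be modeled in a short Python sketch. This is an illustrative model only, not device code:

```python
class ConfigLock:
    """Models the NETCONF configuration lock rules described above."""

    def __init__(self):
        self.holder = None  # session ID of the current lock holder

    def lock(self, session_id):
        if self.holder is None:
            self.holder = session_id
            return True
        return False  # lock-denied: another session holds the lock

    def unlock(self, session_id):
        if self.holder == session_id:
            self.holder = None
            return True
        return False  # only the holder can release the lock

    def session_terminated(self, session_id):
        # The system releases the lock automatically when the
        # holder's session is terminated.
        if self.holder == session_id:
            self.holder = None

lock = ConfigLock()
print(lock.lock(1))    # → True
print(lock.lock(2))    # → False  (session 1 already holds the lock)
print(lock.unlock(2))  # → False  (only the holder can unlock)
```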

Locking the configuration

# Copy the following text to the client to lock the configuration:

<?xml version="1.0" encoding="UTF-8"?>

  <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <lock>

      <target>

        <running/>

      </target>

    </lock>

  </rpc>

After receiving the lock request, the device returns a response in the following format if the lock operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

  <rpc-reply message-id="101"

  xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

     <ok/>

</rpc-reply>

Unlocking the configuration

# Copy the following text to the client to unlock the configuration:

<?xml version="1.0" encoding="UTF-8"?>

  <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <unlock>

      <target>

        <running/>

      </target>

    </unlock>

  </rpc>

After receiving the unlock request, the device returns a response in the following format if the unlock operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>

Example for locking the configuration

Network requirements

Lock the device configuration so that other users cannot change the device configuration.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Lock the configuration.

<?xml version="1.0" encoding="UTF-8"?>

  <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <lock>

      <target>

        <running/>

      </target>

    </lock>

  </rpc>

Verifying the configuration

If the client receives the following response, the lock operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>

If another client sends a lock request, the device returns the following response:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<rpc-error>

  <error-type>protocol</error-type>

  <error-tag>lock-denied</error-tag>

  <error-severity>error</error-severity>

  <error-message xml:lang="en">Lock failed, lock is already held.</error-message>

  <error-info>

    <session-id>1</session-id>

  </error-info>

  </rpc-error>

</rpc-reply>

The output shows that the lock operation failed because the client with session ID 1 holds the lock, and only the client holding the lock can release it.

Performing service operations

You can use NETCONF to perform service operations on the device, such as retrieving and modifying specified information. The basic operations are get, get-bulk, get-config, get-bulk-config, and edit-config, which retrieve all data, retrieve configuration data, and edit the data of a specified module, respectively. For more information, see the NETCONF XML API reference for the switch.

Performing the get/get-bulk operation

The get operation retrieves device configuration and state information that matches the specified conditions. Because it returns all matching data at one time, this operation can be inefficient when the amount of data is large.

The get-bulk operation is used to retrieve a number of data entries starting from the data entry next to the one with the specified index. One data entry contains a device configuration entry and a state information entry. The data entry quantity is defined by the count attribute, and the index is specified by the index attribute. The returned output does not include the index information. If you do not specify the index attribute, the index value starts with 1 by default.

The get-bulk operation retrieves all the rest data entries starting from the data entry next to the one with the specified index if either of the following conditions exists:

·          You do not specify the count attribute.

·          The number of matched data entries is less than the value of the count attribute.
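The index and count semantics above can be sketched in Python. This is a model of the retrieval rules only (the entry layout and the `get_bulk` helper are illustrative assumptions), not how the device implements them:

```python
def get_bulk(entries, index=None, count=None):
    """Model the get-bulk rules above: retrieval starts from the entry
    next to the one with the specified index; without a count, or when
    fewer entries remain than the count, all remaining entries are
    returned. Each entry is assumed to be a dict with an 'index' key."""
    start = 0  # no index attribute: start from the first entry
    if index is not None:
        for i, entry in enumerate(entries):
            if entry["index"] == index:
                start = i + 1  # begin with the entry AFTER the index
                break
    remaining = entries[start:]
    return remaining if count is None else remaining[:count]

logs = [{"index": i} for i in range(1, 21)]  # entries with indexes 1..20
print([e["index"] for e in get_bulk(logs, index=10, count=5)])
# → [11, 12, 13, 14, 15]
```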

# Copy the following text to the client to perform the get operation:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <getoperation>

    <filter>

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

          Specify the module, submodule, table name, and column name

      </top>

    </filter>

  </getoperation>

</rpc>

The <getoperation> parameter can be <get> or <get-bulk>. The <filter> element is used to filter data, and it can contain module name, submodule name, table name, and column name.

·          If the module name and the submodule name are not provided, the operation retrieves the data for all modules and submodules. If a module name or a submodule name is provided, the operation retrieves the data for the specified module or submodule.

·          If the table name is not provided, the operation retrieves the data for all tables. If a table name is provided, the operation retrieves the data for the specified table.

·          If only the index column is provided, the operation retrieves the data for all columns. If the index column and other columns are provided, the operation retrieves the data for the index column and the specified columns.

The <get> and <get-bulk> messages are similar. A <get-bulk> message carries the count and index attributes. The following is a <get-bulk> message example:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:xc="http://www.h3c.com/netconf/base:1.0">

  <get-bulk>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0" xmlns:base="http://www.h3c.com/netconf/base:1.0">

        <Syslog>

            <Logs xc:count="5">

                   <Log>

                      <Index>10</Index>

                    </Log>

             </Logs>

        </Syslog>

      </top>

    </filter>

  </get-bulk>

</rpc>

The count attribute complies with the following rules:

·          The count attribute can be placed in the module node and the table node. It cannot be resolved in other nodes.

·          When the count attribute is placed in the module node, a descendant node inherits this count attribute if the descendant node does not contain the count attribute.

Verifying the configuration

After receiving the get-bulk request, the device returns a response in the following format if the operation is successful:

<?xml version="1.0"?>

<rpc-reply message-id="100"

           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <data>

     Device state and configuration data

  </data>

</rpc-reply>

Performing the get-config/get-bulk-config operation

The get-config and get-bulk-config operations are used to retrieve all non-default configurations, whether they were configured through the CLI or MIB. The <get-config> and <get-bulk-config> messages can contain the <filter> element for filtering data.

The <get-config> and <get-bulk-config> messages are similar. The following is a <get-config> message example:

<?xml version="1.0"?>

<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-config>

    <source>

      <running/>

    </source>

    <filter>

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

          Specify the module name, submodule name, table name, and column name

      </top>

    </filter>

  </get-config>

</rpc>

Verifying the configuration

After receiving the get-config request, the device returns a response in the following format if the operation is successful:

<?xml version="1.0"?>

<rpc-reply message-id="100"   xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <data>

    All data matching the specified filter

  </data>

</rpc-reply>

Performing the edit-config operation

The edit-config operation supports the following operation attributes: merge, create, replace, remove, delete, default-operation, error-option, test-option, and incremental. For more information about these attributes, see "Appendix A Supported NETCONF operations."

# Copy the following text to perform the <edit-config> operation:

<?xml version="1.0"?>

<rpc message-id="100"  xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <edit-config>

    <target><running></running></target>

<error-option>

   Default operation when an error occurs

</error-option>

    <config>

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        Specify the module name, submodule name, table name, and column name

      </top>

    </config>

  </edit-config>

</rpc>

After receiving the edit-config request, the device returns a response in the following format if the operation is successful:

<?xml version="1.0"?>

<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <ok/>

</rpc-reply>

# Perform the get operation to verify that the current value of the parameter is the same as the value specified through the edit-config operation. (Details not shown.)

All-module configuration data retrieval example

Network requirements

Retrieve configuration data for all modules.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Retrieve configuration data for all modules.

<rpc message-id="100"

     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-config>

    <source>

      <running/>

    </source>

  </get-config>

</rpc>

Verifying the configuration

If the client receives the following text, the get-config operation is successful:

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">

    <data>

        <top xmlns="http://www.h3c.com/netconf/config:1.0">

            <Ifmgr>

                <Interfaces>

                    <Interface>

                        <IfIndex>1307</IfIndex>

                        <Shutdown>1</Shutdown>

                    </Interface>

                    <Interface>

                        <IfIndex>1308</IfIndex>

                        <Shutdown>1</Shutdown>

                    </Interface>

                    <Interface>

                        <IfIndex>1309</IfIndex>

                        <Shutdown>1</Shutdown>

                    </Interface>

                    <Interface>

                        <IfIndex>1311</IfIndex>

 

                            <VlanType>2</VlanType>

 

                    </Interface>

                    <Interface>

                        <IfIndex>1313</IfIndex>

 

                            <VlanType>2</VlanType>

 

                    </Interface>

                </Interfaces>

            </Ifmgr>

            <Syslog>

                <LogBuffer>

                    <BufferSize>120</BufferSize>

                </LogBuffer>

            </Syslog>

            <System>

                <Device>

                    <SysName>H3C</SysName>

                    <TimeZone>

                        <Zone>+11:44</Zone>

                        <ZoneName>beijing</ZoneName>

                    </TimeZone>

                </Device>

            </System>

        </top>

    </data>

</rpc-reply>

Syslog configuration data retrieval example

Network requirements

Retrieve configuration data for the Syslog module.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Retrieve configuration data for the Syslog module.

<rpc message-id="100"

     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-config>

    <source>

      <running/>

    </source>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Syslog/>

      </top>

    </filter>

  </get-config>

</rpc>

Verifying the configuration

If the client receives the following text, the get-config operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">

    <data>

        <top xmlns="http://www.h3c.com/netconf/config:1.0">

            <Syslog>

                    <LogBuffer>

                        <BufferSize>120</BufferSize>

                    </LogBuffer>

            </Syslog>

        </top>

    </data>

</rpc-reply>

Example for retrieving a data entry for the interface table

Network requirements

Retrieve a data entry for the interface table.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>urn:ietf:params:netconf:base:1.0</capability>

    </capabilities>

</hello>

# Retrieve a data entry for the interface table.

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">

    <get-bulk>

        <filter type="subtree">

            <top xmlns="http://www.h3c.com/netconf/data:1.0" xmlns:web="http://www.h3c.com/netconf/base:1.0">

                <Ifmgr>

                    <Interfaces web:count="1">

                    </Interfaces>

                </Ifmgr>

            </top>

        </filter>

    </get-bulk>

</rpc>

Verifying the configuration

If the client receives the following text, the get-bulk operation is successful:

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">

  <data>

    <top xmlns="http://www.h3c.com/netconf/data:1.0">

      <Ifmgr>

        <Interfaces>

          <Interface>

            <IfIndex>3</IfIndex>

            <Name>Ten-GigabitEthernet1/0/2</Name>

            <AbbreviatedName>XGE1/0/2</AbbreviatedName>

            <PortIndex>3</PortIndex>

            <ifTypeExt>22</ifTypeExt>

            <ifType>6</ifType>

            <Description> Ten-GigabitEthernet 1/0/2 Interface</Description>

            <AdminStatus>2</AdminStatus>

            <OperStatus>2</OperStatus>

            <ConfigSpeed>0</ConfigSpeed>

            <ActualSpeed>100000</ActualSpeed>

            <ConfigDuplex>3</ConfigDuplex>

            <ActualDuplex>1</ActualDuplex>

          </Interface>

        </Interfaces>

      </Ifmgr>

    </top>

  </data>

</rpc-reply>

Example for changing the value of a parameter

Network requirements

Change the log buffer size for the Syslog module to 512.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>urn:ietf:params:netconf:base:1.0</capability>

    </capabilities>

</hello>

# Change the log buffer size for the Syslog module to 512.

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">

<edit-config>

    <target>

        <running/>

    </target>

    <config>

        <top xmlns="http://www.h3c.com/netconf/config:1.0" web:operation="merge">

            <Syslog>

                <LogBuffer>

                    <BufferSize>512</BufferSize>

                </LogBuffer>

            </Syslog>

        </top>

    </config>

</edit-config>

</rpc>

Verifying the configuration

If the client receives the following text, the edit-config operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <ok/>

</rpc-reply>

Saving, rolling back, and loading the configuration

Use NETCONF to save, roll back, or load the configuration.

Performing the save, rollback, or load operation consumes a lot of system resources. Do not perform these operations when the system resources are heavily occupied.

Saving the configuration

# Copy the following text to the client to save the device configuration to the specified file:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<save>

 <file>Specify the configuration file name</file>

</save>

</rpc>

The name of the specified configuration file must start with the storage media name and end with the extension .cfg. If the text does not include the <file> element, the configuration is saved to the main next-startup configuration file by default.
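The file-name rule above can be expressed as a small Python check. The storage media names listed here (such as flash:) are examples only and vary by device; the helper itself is an illustrative sketch:

```python
def valid_config_filename(name, media=("flash:", "slot1#flash:")):
    """Check the rule stated above: the configuration file name must
    start with a storage media name and end with the extension .cfg.
    The media tuple holds example media names, which vary by device."""
    return name.endswith(".cfg") and any(name.startswith(m) for m in media)

print(valid_config_filename("flash:/my_config.cfg"))  # → True
print(valid_config_filename("my_config.cfg"))         # → False (no media name)
print(valid_config_filename("flash:/startup.txt"))    # → False (not .cfg)
```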

After receiving the save request, the device returns a response in the following format if the save operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>

Rolling back the configuration based on a configuration file

# Copy the following text to the client to roll back the configuration:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<rollback>

 <file>Specify the configuration file name</file>

</rollback>

</rpc>

After receiving the rollback request, the device returns a response in the following format if the rollback operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>

Rolling back the configuration based on a rollback point

You can roll back the configuration based on a rollback point when one of the following situations occurs:

·          A NETCONF client sends a rollback request.

·          The NETCONF session idle time is longer than the rollback idle timeout time.

·          A NETCONF client is unexpectedly disconnected from the device.

To roll back the configuration based on a rollback point, perform the following tasks:

1.        Lock the system.

Multiple users might simultaneously use NETCONF to configure the device. As a best practice, lock the system before rolling back the configuration to prevent other users from modifying the running configuration.

2.        Mark the beginning of a rollback operation. For more information, see "Performing the save-point/begin operation."

3.        Edit the device configuration. For more information, see "Performing the edit-config operation."

4.        Configure the rollback point. For more information, see "Performing the save-point/commit operation."

You can repeat this step to configure multiple rollback points.

5.        Roll back the configuration based on the rollback point. For more information, see "Performing the save-point/rollback operation."

The configuration can also be rolled back automatically when the NETCONF session idle time is longer than the rollback idle timeout time.

6.        End the rollback configuration. For more information, see "Performing the save-point/end operation."

7.        Release the lock.
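The save-point requests used in steps 2 through 6 share one message shape, so a client can generate them from a single helper. The following Python sketch builds the RPCs with the standard library; the `save_point_rpc` helper is an illustrative assumption, not a documented API:

```python
import xml.etree.ElementTree as ET

BASE = "urn:ietf:params:xml:ns:netconf:base:1.0"

def save_point_rpc(step, message_id="101", **fields):
    """Build one <save-point> RPC for the rollback workflow above.
    step is 'begin', 'commit', 'rollback', or 'end'; keyword fields
    become child elements (confirm_timeout=100 → <confirm-timeout>)."""
    rpc = ET.Element("rpc", {"message-id": message_id, "xmlns": BASE})
    save_point = ET.SubElement(rpc, "save-point")
    op = ET.SubElement(save_point, step)
    for name, value in fields.items():
        ET.SubElement(op, name.replace("_", "-")).text = str(value)
    return ET.tostring(rpc, encoding="unicode")

# Step 2: mark the beginning with a 100-second rollback idle timeout.
msg = save_point_rpc("begin", confirm_timeout=100)
print("<confirm-timeout>100</confirm-timeout>" in msg)  # → True
```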

Performing the save-point/begin operation

# Copy the following text to the client to mark the beginning of a rollback operation based on a rollback point:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <save-point>

        <begin>

          <confirm-timeout>100</confirm-timeout>

       </begin>

      </save-point>

</rpc>

The <confirm-timeout> parameter specifies the rollback idle timeout time in the range of 1 to 65536 seconds (the default is 600 seconds). This parameter is optional.

After receiving the begin request, the device returns a response in the following format if the begin operation is successful:

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <data>

    <save-point>

       <commit>

          <commit-id>1</commit-id>

       </commit>

    </save-point>

  </data>

</rpc-reply>

Performing the save-point/commit operation

The system supports a maximum of 50 rollback points. When the limit is reached, you must specify the force attribute to overwrite the earliest rollback point.

# Copy the following text to the client to configure the rollback point:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <commit>

      <label>SUPPORT VLAN</label>

      <comment>vlan 1 to 100 and interfaces. Each VLAN is used for a different customer as follows: ……</comment>

</commit>

  </save-point>

</rpc>

The <label> and <comment> parameters are optional.

After receiving the commit request, the device returns a response in the following format if the commit operation is successful:

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <data>

    <save-point>

       <commit>

          <commit-id>2</commit-id>

       </commit>

    </save-point>

  </data>

</rpc-reply>

Performing the save-point/rollback operation

# Copy the following text to the client to roll back the configuration:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <rollback>

      <commit-id/>

      <commit-index/>

      <commit-label/>

    </rollback>

  </save-point>

</rpc>

The <commit-id> parameter uniquely identifies a rollback point.

The <commit-index> parameter specifies a rollback point by its position among the 50 most recently configured rollback points. The value 0 indicates the most recently configured rollback point, and 49 indicates the earliest configured one.

The <commit-label> parameter uniquely identifies a rollback point by its label. A label is optional for a rollback point.

Specify one of these parameters to roll back to the specified rollback point. If no parameter is specified, the operation rolls back the configuration to the most recently configured rollback point.

After receiving the rollback request, the device returns a response in the following format if the rollback operation is successful:

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <ok/>

</rpc-reply>

Performing the save-point/end operation

# Copy the following text to the client to end the rollback configuration:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <end/>

  </save-point>

</rpc>

After receiving the end request, the device returns a response in the following format if the end operation is successful:

<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

   <ok/>

</rpc-reply>

Performing the save-point/get-commits operation

# Copy the following text to the client to get the rollback point configuration records:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <get-commits>

      <commit-id/>

      <commit-index/>

      <commit-label/>

    </get-commits>

  </save-point>

</rpc>

Specify one of the <commit-id>, <commit-index>, and <commit-label> parameters to get the configuration records of the specified rollback point. If no parameter is specified, this operation gets the configuration records of all rollback points. The following text is a <save-point>/<get-commits> request example:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <get-commits>

      <commit-label>SUPPORT VLAN</commit-label>

    </get-commits>

  </save-point>

</rpc>

After receiving the get commits request, the device returns a response in the following format if the get commits operation is successful:

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

   <data>

      <save-point>

         <commit-information>

           <CommitID>2</CommitID>

           <TimeStamp>Thu Oct 30 11:30:28 1980</TimeStamp>

           <UserName>test</UserName>

           <Label>SUPPORT VLAN</Label>

         </commit-information>

    </save-point>

  </data>

</rpc-reply>

Performing the save-point/get-commit-information operation

# Copy the following text to the client to get the system configuration data corresponding to a rollback point:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <get-commit-information>

       <commit-information>

         <commit-id/>

         <commit-index/>

         <commit-label/>

      </commit-information>

      <compare-information>

         <commit-id/>

         <commit-index/>

         <commit-label/>

      </compare-information>

    </get-commit-information>

  </save-point>

</rpc>

Specify one of the <commit-id>, <commit-index>, and <commit-label> parameters to get the configuration data corresponding to the specified rollback point. The <compare-information> parameter is optional. If no parameter is specified, this operation gets the configuration data corresponding to the most recently configured rollback point. The following text is a <save-point>/<get-commit-information> request example:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <save-point>

    <get-commit-information>

               <commit-information>

                  <commit-label>SUPPORT VLAN</commit-label>

               </commit-information>

    </get-commit-information>

  </save-point>

</rpc>

After receiving the get-commit-information request, the device returns a response in the following format if the get-commit-information operation is successful:

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

   <data>

     <save-point>

        <commit-information>

           <content>

              …

              interface vlan 1

              …

           </content>

        </commit-information>

     </save-point>

   </data>

</rpc-reply>

Loading the configuration

After you perform the load operation, the loaded configurations are merged into the current configuration as follows:

·          New configurations are directly loaded.

·          Configurations that already exist in the current configuration are replaced by those loaded from the configuration file.
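Treating the configuration as a flat set of settings, the merge rules above behave like a dictionary update. This Python sketch is a simplified model only (real configurations are hierarchical); the setting names are invented for illustration:

```python
def load_merge(running, loaded):
    """Model the load semantics above with flat dicts: settings new to
    the running configuration are added, and settings present in both
    are replaced by the values loaded from the configuration file."""
    merged = dict(running)
    merged.update(loaded)  # loaded values win on conflict
    return merged

running = {"sysname": "H3C", "buffer-size": 120}
loaded = {"buffer-size": 512, "timezone": "+08:00"}  # hypothetical settings
print(load_merge(running, loaded))
# → {'sysname': 'H3C', 'buffer-size': 512, 'timezone': '+08:00'}
```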

# Copy the following text to the client to load a configuration file for the device:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <load>

     <file>Specify the configuration file name</file>

  </load>

</rpc>

The name of the specified configuration file must start with the storage media name and end with the extension .cfg.

After receiving the load request, the device returns a response in the following format if the load operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>

Example for saving the configuration

Network requirements

Save the current configuration to the configuration file my_config.cfg.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Save the configuration of the device to the configuration file my_config.cfg.

<?xml version="1.0" encoding="UTF-8"?>

  <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <save>

   <file>my_config.cfg</file>

</save>

</rpc>

Verifying the configuration

If the client receives the following response, the save operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>

Filtering data

You can define a filter to filter information when you perform a get, get-bulk, get-config, or get-bulk-config operation. Data filtering includes the following types:

·          Table-based filtering: Filters table information.

·          Column-based filtering: Filters information for a single column.

Table-based filtering takes effect only when it is configured before column-based filtering.

Table-based filtering

To implement table-based filtering, specify a match criterion in the filter attribute of a table row, for example, an IP address criterion. The namespace is http://www.h3c.com/netconf/base:1.0. For information about the tables that support table-based match, see the NETCONF XML API documents.

# Copy the following text to the client to retrieve the longest data with VRF name vpn1, IP address 1.1.1.0, and mask length 24 from the IPv4 routing table:

<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:h3c="http://www.h3c.com/netconf/base:1.0">

  <get>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Route>

         <Ipv4Routes>

           <RouteEntry h3c:filter="vrf vpn1 IP 1.1.1.0 MaskLen 24 longer"/>

         </Ipv4Routes>

        </Route>

      </top>

    </filter>

  </get>

</rpc>

Column-based filtering

Column-based filtering includes full match filtering, regular expression match filtering, and conditional match filtering. Full match filtering has the highest priority and conditional match filtering has the lowest priority. When more than one filtering criterion is specified, the one with the highest priority takes effect.

Full match filtering

You can specify an element value in an XML message to implement full match. If multiple element values are provided, the system returns the data that matches all the specified values.
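The "matches all the specified values" rule can be sketched in Python. The interface records below are invented for illustration; the filter logic models the rule described above:

```python
def full_match(entries, criteria):
    """Model full match filtering as described above: an entry is
    returned only if it matches every specified element value."""
    return [entry for entry in entries
            if all(entry.get(key) == value for key, value in criteria.items())]

# Hypothetical interface records (AdminStatus/OperStatus 2 = down here).
interfaces = [
    {"Name": "XGE1/0/1", "AdminStatus": 2, "OperStatus": 1},
    {"Name": "XGE1/0/2", "AdminStatus": 2, "OperStatus": 2},
]
# Both criteria must match, so only XGE1/0/2 is returned.
print(full_match(interfaces, {"AdminStatus": 2, "OperStatus": 2}))
```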

# Copy the following text to the client to retrieve the configuration data of all interfaces in UP state:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Ifmgr>

          <Interfaces>

            <Interface>

              <AdminStatus>2</AdminStatus>

            </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get>

</rpc>
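To illustrate the full-match semantics locally, the following Python sketch filters a hypothetical record set the way the device filters rows: a row is returned only if it matches every specified element value. The record set and helper name are illustrative only.

```python
# Hypothetical interface records; AdminStatus values mirror the example above.
interfaces = [
    {"Name": "Ten-GigabitEthernet1/0/1", "AdminStatus": 1},
    {"Name": "Ten-GigabitEthernet1/0/2", "AdminStatus": 2},
    {"Name": "Ten-GigabitEthernet1/0/3", "AdminStatus": 2},
]

def full_match(rows, criteria):
    # Keep only rows whose fields equal every specified value, as the
    # device does when multiple element values are given in the filter.
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

matched = full_match(interfaces, {"AdminStatus": 2})
```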

Regular expression match filtering

To implement complex filtering on character strings, add a regExp attribute to a specific element.

# Copy the following text to the client to retrieve the descriptions of interfaces whose descriptions contain only uppercase letters A through Z:

<rpc message-id="1-0" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:h3c="http://www.h3c.com/netconf/base:1.0">

  <get-config>

    <source>

      <running/>

    </source>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Ifmgr>

          <Interfaces>

            <Interface>

              <Description h3c:regExp="^[A-Z]*$"/>

            </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get-config>

</rpc>
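The regExp attribute carries an ordinary regular expression. The following Python sketch uses the standard re module to show what "^[A-Z]*$" accepts; the sample descriptions are hypothetical.

```python
import re

# "^[A-Z]*$" matches strings made up entirely of uppercase letters A-Z,
# including the empty string (the * quantifier allows zero characters).
pattern = re.compile(r"^[A-Z]*$")

descriptions = ["UPLINK", "CoreLink", "WAN", "lan1", ""]
matching = [d for d in descriptions if pattern.match(d)]
```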

Conditional match filtering

To implement complex filtering on numbers and character strings, add a match attribute to a specific element. Table 22 lists the conditional match operators.

Table 22 Conditional match operators

Operation       Operator                    Remarks
More than       match="more:value"          More than the specified value. The supported data types include date, digit, and character string.
Less than       match="less:value"          Less than the specified value. The supported data types include date, digit, and character string.
Not less than   match="notLess:value"       Not less than the specified value. The supported data types include date, digit, and character string.
Not more than   match="notMore:value"       Not more than the specified value. The supported data types include date, digit, and character string.
Equal           match="equal:value"         Equal to the specified value. The supported data types include date, digit, character string, OID, and BOOL.
Not equal       match="notEqual:value"      Not equal to the specified value. The supported data types include date, digit, character string, OID, and BOOL.
Include         match="include:string"      Includes the specified string. The supported data types include only character string.
Not include     match="exclude:string"      Excludes the specified string. The supported data types include only character string.
Start with      match="startWith:string"    Starts with the specified string. The supported data types include character string and OID.
End with        match="endWith:string"      Ends with the specified string. The supported data types include only character string.

 

# Copy the following text to the client to retrieve extension information about the entity of which the CPU usage is more than 50%:

<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:h3c="http://www.h3c.com/netconf/base:1.0">

  <get>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Device>

          <ExtPhysicalEntities>

            <Entity>

                <CpuUsage h3c:match="more:50"></CpuUsage>

            </Entity>

          </ExtPhysicalEntities>

        </Device>

      </top>

    </filter>

  </get>

</rpc>
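The operators in Table 22 can be mimicked locally to predict what a device-side conditional match would return. The following Python sketch is illustrative only (the conditional_match helper and the sample CPU usages are not H3C code); it evaluates a criterion string such as "more:50" against a value.

```python
def conditional_match(value, criterion):
    # Evaluate one Table 22 operator locally; numeric comparison only is
    # sketched for the ordering operators, for brevity.
    op, _, operand = criterion.partition(":")
    if op in ("more", "less", "notLess", "notMore", "equal", "notEqual"):
        v, o = float(value), float(operand)
        return {"more": v > o, "less": v < o, "notLess": v >= o,
                "notMore": v <= o, "equal": v == o, "notEqual": v != o}[op]
    if op == "include":
        return operand in str(value)
    if op == "exclude":
        return operand not in str(value)
    if op == "startWith":
        return str(value).startswith(operand)
    if op == "endWith":
        return str(value).endswith(operand)
    raise ValueError(f"unknown operator: {op}")

# Hypothetical entity CPU usages, filtered like the example above.
cpu_usages = {"Entity1": 35, "Entity2": 72, "Entity3": 51}
busy = {k: v for k, v in cpu_usages.items() if conditional_match(v, "more:50")}
```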

Example for filtering data with regular expression match

Network requirements

Retrieve all entries whose Description column includes Ten-Gigabit from the Interfaces table under the Ifmgr module.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Retrieve all entries whose Description column includes Ten-Gigabit from the Interfaces table under the Ifmgr module.

<?xml version="1.0"?>

<rpc message-id="100"

     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:reg="http://www.h3c.com/netconf/base:1.0">

  <get>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Ifmgr>

          <Interfaces>

            <Interface>

                <Description reg:regExp="(Ten-Gigabit)+"/>

            </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get>

</rpc>

Verifying the configuration

If the client receives the following text, the operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:reg="http://www.h3c.com/netconf/base:1.0" message-id="100">

    <data>

        <top xmlns="http://www.h3c.com/netconf/data:1.0">

            <Ifmgr>

                <Interfaces>

                    <Interface>

                        <IfIndex>2681</IfIndex>

                        <Description>Ten-GigabitEthernet1/0/1 Interface</Description>

                    </Interface>

                    <Interface>

                        <IfIndex>2682</IfIndex>

                        <Description>Ten-GigabitEthernet1/0/2 Interface</Description>

                    </Interface>

                    <Interface>

                        <IfIndex>2683</IfIndex>

                        <Description>Ten-GigabitEthernet1/0/3 Interface</Description>

                    </Interface>

                    <Interface>

                        <IfIndex>2684</IfIndex>

                        <Description>Ten-GigabitEthernet1/0/4 Interface</Description>

                    </Interface>

                </Interfaces>

            </Ifmgr>

        </top>

    </data>

</rpc-reply>
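A client typically parses such a reply to extract the filtered values. The following Python sketch, using the standard xml.etree.ElementTree module on a shortened copy of the reply above, collects the Description values:

```python
import xml.etree.ElementTree as ET

# A shortened copy of the rpc-reply shown above.
REPLY = """<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
  <data>
    <top xmlns="http://www.h3c.com/netconf/data:1.0">
      <Ifmgr><Interfaces>
        <Interface><IfIndex>2681</IfIndex>
          <Description>Ten-GigabitEthernet1/0/1 Interface</Description></Interface>
        <Interface><IfIndex>2682</IfIndex>
          <Description>Ten-GigabitEthernet1/0/2 Interface</Description></Interface>
      </Interfaces></Ifmgr>
    </top>
  </data>
</rpc-reply>"""

DATA = "{http://www.h3c.com/netconf/data:1.0}"
descriptions = [e.text for e in ET.fromstring(REPLY).iter(DATA + "Description")]
```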

Example for filtering data by conditional match

Network requirements

Retrieve the Name column of the entries whose IfIndex value is not less than 5000 in the Interfaces table under the Ifmgr module.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Retrieve the Name column of the entries whose IfIndex value is not less than 5000 in the Interfaces table under the Ifmgr module.

<rpc message-id="100"

     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="http://www.h3c.com/netconf/base:1.0">

  <get>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Ifmgr>

          <Interfaces>

            <Interface>

                <IfIndex nc:match="notLess:5000"/>

                <Name/>

            </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get>

</rpc>

Verifying the configuration

If the client receives the following text, the operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="http://www.h3c.com/netconf/base:1.0" message-id="100">

    <data>

        <top xmlns="http://www.h3c.com/netconf/data:1.0">

            <Ifmgr>

                <Interfaces>

                    <Interface>

                        <IfIndex>7241</IfIndex>

                        <Name>NULL0</Name>

                    </Interface>

                    <Interface>

                        <IfIndex>7243</IfIndex>

                        <Name>Register-Tunnel0</Name>

                    </Interface>

                </Interfaces>

            </Ifmgr>

        </top>

    </data>

</rpc-reply>
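As with the previous example, the reply can be parsed client-side. The following Python sketch (standard library only; the reply is a shortened copy of the one above) maps each IfIndex to its Name and sanity-checks the notLess:5000 condition that the device already applied:

```python
import xml.etree.ElementTree as ET

REPLY = """<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="100">
  <data><top xmlns="http://www.h3c.com/netconf/data:1.0"><Ifmgr><Interfaces>
    <Interface><IfIndex>7241</IfIndex><Name>NULL0</Name></Interface>
    <Interface><IfIndex>7243</IfIndex><Name>Register-Tunnel0</Name></Interface>
  </Interfaces></Ifmgr></top></data></rpc-reply>"""

DATA = "{http://www.h3c.com/netconf/data:1.0}"
names = {int(i.findtext(DATA + "IfIndex")): i.findtext(DATA + "Name")
         for i in ET.fromstring(REPLY).iter(DATA + "Interface")}

# The device applied notLess:5000; verify locally as a sanity check.
assert all(ifindex >= 5000 for ifindex in names)
```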

Performing CLI operations through NETCONF

You can enclose command lines in XML messages to configure the device.

Configuration procedure

# Copy the following text to the client to execute the commands:

<?xml version="1.0" encoding="UTF-8"?>

   <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <CLI>

        <Execution>

          Commands

        </Execution>

      </CLI>

</rpc>

The <Execution> element can contain multiple commands, with one command on one line.

After receiving the CLI operation request, the device returns a response in the following format if the CLI operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <CLI>

    <Execution>

      <![CDATA[Responses to the commands]]>

    </Execution>

  </CLI>

</rpc-reply>
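A client can assemble the CLI request and unwrap the CDATA response with ordinary XML tooling. The following Python sketch shows both directions; the helper names are illustrative, and xml.etree.ElementTree exposes CDATA content as plain element text.

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_cli_rpc(commands, message_id="101"):
    # Wrap the commands, one per line, in a <CLI><Execution> request.
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": message_id})
    cli = ET.SubElement(rpc, f"{{{NC}}}CLI")
    ET.SubElement(cli, f"{{{NC}}}Execution").text = "\n".join(commands)
    return ET.tostring(rpc, encoding="unicode")

def cli_output(reply_text):
    # Return the command output; the CDATA section parses as element text.
    return ET.fromstring(reply_text).findtext(f".//{{{NC}}}Execution")
```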

CLI operation example

Configuration requirements

Send the display current-configuration command to the device.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Copy the following text to the client to execute the display current-configuration command:

<?xml version="1.0" encoding="UTF-8"?>

   <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <CLI>

        <Execution>

          display current-configuration

        </Execution>

      </CLI>

</rpc>

Verifying the configuration

If the client receives the following text, the operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <CLI>

    <Execution><![CDATA[

<Sysname>display current-configuration

#

 version 7.1.045, ESS 2305                                                      

#                                                                              

 sysname Sysname                                                                 

#                                                                               

 telnet server enable                                                          

#                                                                              

 irf mac-address persistent timer                                              

 irf auto-update enable                                                        

 undo irf link-delay                                                           

 irf member 1 priority 1

   ]]>

   </Execution>

  </CLI>

</rpc-reply>

Retrieving NETCONF session information

You can use the get-sessions operation to retrieve NETCONF session information of the device.

# Copy the following message to the client to retrieve NETCONF session information from the device:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <get-sessions/>

</rpc>

After receiving the get-sessions request, the device returns a response in the following format if the get-sessions operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-sessions>

    <Session>

      <SessionID>Configuration session ID</SessionID>

      <Line>Line information</Line>

      <UserName>Name of the user creating the session</UserName>

      <Since>Time when the session was created</Since>

      <LockHeld>Whether the session holds a lock</LockHeld>

    </Session>

  </get-sessions>

</rpc-reply>
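A client can turn each <Session> element of such a reply into a dictionary for inspection. The following Python sketch parses a reply in the format above; the concrete field values are sample data.

```python
import xml.etree.ElementTree as ET

NC = "{urn:ietf:params:xml:ns:netconf:base:1.0}"

REPLY = """<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
  <get-sessions>
    <Session>
      <SessionID>1</SessionID>
      <Line>vty0</Line>
      <UserName></UserName>
      <Since>2011-01-05T00:24:57</Since>
      <LockHeld>false</LockHeld>
    </Session>
  </get-sessions>
</rpc-reply>"""

# Strip the namespace from each child tag and collect tag -> text per session.
sessions = [{child.tag[len(NC):]: (child.text or "") for child in session}
            for session in ET.fromstring(REPLY).iter(NC + "Session")]
```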

For example, to get NETCONF session information:

# Enter XML view.

<Sysname> xml

# Copy the following message to the client to exchange capabilities with the device:

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Copy the following message to the client to get the current NETCONF session information on the device:

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <get-sessions/>

</rpc>

If the client receives a message as follows, the operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">

    <get-sessions>

        <Session>

            <SessionID>1</SessionID>

            <Line>vty0</Line>

            <UserName></UserName>

            <Since>2011-01-05T00:24:57</Since>

            <LockHeld>false</LockHeld>

        </Session>

    </get-sessions>

</rpc-reply>

The output shows the following information:

·          The session ID of an existing NETCONF session is 1.

·          The user logged in through VTY line vty0.

·          The login time is 2011-01-05T00:24:57.

·          The user does not hold the lock of the configuration.

Terminating another NETCONF session

NETCONF allows one client to terminate the NETCONF session of another client. The client whose session is terminated returns to user view.

# Copy the following message to the client to terminate the specified NETCONF session:

<rpc message-id="101"    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

     <kill-session>

      <session-id>

        Specified session-ID

      </session-id>

     </kill-session>

   </rpc>

After receiving the kill-session request, the device returns a response in the following format if the kill-session operation is successful:

<?xml version="1.0" encoding="UTF-8"?>

    <rpc-reply message-id="101"

    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

      <ok/>

</rpc-reply>
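The kill-session request can be built the same way as the other RPCs. A minimal Python sketch follows (standard library only; the function name is illustrative):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_kill_session(session_id, message_id="101"):
    # Build a kill-session request targeting the given session ID.
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": message_id})
    kill = ET.SubElement(rpc, f"{{{NC}}}kill-session")
    ET.SubElement(kill, f"{{{NC}}}session-id").text = str(session_id)
    return ET.tostring(rpc, encoding="unicode")

request = build_kill_session(2)
```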

Configuration example

Configuration requirement

The user with session ID 1 terminates the NETCONF session with session ID 2.

Configuration procedure

# Enter XML view.

<Sysname> xml

# Exchange capabilities.

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <capabilities>

        <capability>

            urn:ietf:params:netconf:base:1.0

        </capability>

    </capabilities>

</hello>

# Terminate the session with session ID 2.

<rpc message-id="101"    xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

     <kill-session>

       <session-id>2</session-id>

     </kill-session>

   </rpc>

Verifying the configuration

If the client receives the following text, the NETCONF session with session ID 2 has been terminated. The client with session ID 2 has returned from XML view to user view:

<?xml version="1.0" encoding="UTF-8"?>

  <rpc-reply message-id="101"  xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <ok/>

</rpc-reply>

Returning to the CLI

To return from XML view to the CLI, send the following close-session request:

<?xml version="1.0"?>

   <rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

        <close-session/>

   </rpc>

When the device receives the close-session request, it sends the following response and returns to the CLI user view:

<?xml version="1.0" encoding="UTF-8"?>

<rpc-reply message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

   <ok/>

</rpc-reply>
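Both close-session and kill-session succeed with a bare <ok/> element in the reply, so a client needs only a small check. A Python sketch (the helper name reply_ok is illustrative):

```python
import xml.etree.ElementTree as ET

NC = "{urn:ietf:params:xml:ns:netconf:base:1.0}"

def reply_ok(reply_text):
    # True if the rpc-reply carries <ok/>, meaning the operation succeeded.
    return ET.fromstring(reply_text).find(NC + "ok") is not None
```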

 


Appendix

Appendix A Supported NETCONF operations

Table 23 lists the NETCONF operations available with Comware V7.

Table 23 NETCONF operations

Operation

Description

XML example

get

Retrieves device configuration and state information.

To retrieve device configuration and state information for the Syslog module:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:xc="http://www.h3c.com/netconf/base:1.0">

  <get>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Syslog>

        </Syslog>

      </top>

    </filter>

  </get>

</rpc>

get-config

Retrieves the non-default configuration data. If non-default configuration data does not exist, the device returns a response with empty data.

To retrieve non-default configuration data for the interface table:

<rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:xc="http://www.h3c.com/netconf/base:1.0">

  <get-config>

    <source>

      <running/>

    </source>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Ifmgr>

                   <Interfaces>

                                      <Interface/>

                   </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get-config>

</rpc>

get-bulk

Retrieves a number of data entries (including device configuration and state information) starting from the data entry next to the one with the specified index.

To retrieve device configuration and state information for all interfaces:

<rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-bulk>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/data:1.0">

        <Ifmgr>

          <Interfaces xc:count="5" xmlns:xc="http://www.h3c.com/netconf/base:1.0">

                 <Interface/>

          </Interfaces>

        </Ifmgr>

      </top>

    </filter>

  </get-bulk>

</rpc>

get-bulk-config

Retrieves a number of non-default configuration data entries starting from the data entry next to the one with the specified index.

To retrieve non-default configuration for all interfaces:

<rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <get-bulk-config>

    <source>

      <running/>

    </source>

    <filter type="subtree">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Ifmgr>

        </Ifmgr>

      </top>

    </filter>

  </get-bulk-config>

</rpc>

edit-config:

incremental

Adds configuration data to a column without affecting the original data.

The incremental attribute applies to a list column such as the vlan permitlist column.

You can use the incremental attribute for edit-config operations except for the replace operation.

Support for the incremental attribute varies by module. For more information, see NETCONF XML API documents.

To add VLANs 1 through 10 to an untagged VLAN list that has untagged VLANs 12 through 15:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"

xmlns:h3c="http://www.h3c.com/netconf/base:1.0">

  <edit-config>

    <target>

      <running/>

    </target> 

    <config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <VLAN xc:operation="merge">

          <HybridInterfaces>

            <Interface>

              <IfIndex>262</IfIndex>

              <UntaggedVlanList h3c:incremental="true">1-10</UntaggedVlanList>

               </Interface>

          </HybridInterfaces>

        </VLAN>

      </top>

    </config>

  </edit-config>

</rpc>

edit-config: merge

Changes the running configuration.

To use the merge attribute in the edit-config operation, you must specify the operation target (at a specific level):

·         If the specified target exists, the operation directly changes the configuration for the target.

·         If the specified target does not exist, the operation creates and configures the target.

·         If the specified target does not exist and it cannot be created, an error message is returned.

To change the buffer size to 120:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"  xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">

  <edit-config>

    <target>

      <running/>

    </target>  

    <config>

      <top xmlns="http://www.h3c.com/netconf/config:1.0"><Syslog xmlns="http://www.h3c.com/netconf/config:1.0" xc:operation="merge">

    <LogBuffer>

        <BufferSize>120</BufferSize>

    </LogBuffer>

</Syslog>

      </top>

    </config>

  </edit-config>

</rpc>

edit-config: create

Creates a specified target. To use the create attribute in the edit-config operation, you must specify the operation target.

·         If the table supports target creation and the specified target does not exist, the operation creates and then configures the target.

·         If the specified target exists, a data-exist error message is returned.

The XML data format is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to create.

edit-config: replace

Replaces the specified target.

·         If the specified target exists, the operation replaces the configuration of the target with the configuration carried in the message.

·         If the specified target does not exist, the operation is not conducted and an invalid-value error message is returned.

The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to replace.

edit-config: remove

Removes the specified configuration.

·         If the specified target has only the table index, the operation removes all configuration of the specified target, and the target itself.

·         If the specified target has the table index and configuration data, the operation removes the specified configuration data of this target.

·         If the specified target does not exist, or the XML message does not specify any target, a success message is returned.

The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to remove.

edit-config: delete

Deletes the specified configuration.

·         If the specified target has only the table index, the operation removes all configuration of the specified target, and the target itself.

·         If the specified target has the table index and configuration data, the operation removes the specified configuration data of this target.

·         If the specified target does not exist, an error message is returned, showing that the target does not exist.

The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to delete.

edit-config: default-operation

Modifies the current configuration of the device using the default operation method.

If you do not specify an operation attribute for an edit-config message, NETCONF uses the default operation method specified by the <default-operation> element: merge, replace, or none. Your setting of the <default-operation> element takes effect only for the current <edit-config> message. If you specify neither an operation attribute nor a default operation method, merge is applied.

·         merge—The default value for the <default-operation> element.

·         replace—Value used when the operation attribute is not specified and the default operation method is specified as replace.

·         none—Value used when the operation attribute is not specified and the default operation method is specified as none. If this value is specified, the edit-config operation is used only for schema verification rather than issuing a configuration. If the schema verification is passed, a successful message is returned. Otherwise, an error message is returned.

To issue an empty operation for schema verification purposes:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <edit-config>

    <target>

      <running/>

    </target>

    <default-operation>none</default-operation>

    <config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Ifmgr>

          <Interfaces>

            <Interface>

              <Index>262</Index>

              <Description>222222</Description>

            </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </config>

  </edit-config>

</rpc>

edit-config: error-option

Determines the action to take in case of a configuration error.

The error-option element has one of the following values:

·         stop-on-error—Stops the operation on error and returns an error message. This is the default error-option value.

·         continue-on-error—Continues the operation on error and returns an error message.

·         rollback-on-error—Rolls back the configuration.

To issue the configuration for two interfaces with the error-option element value as continue-on-error:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <edit-config>

    <target>

      <running/>

    </target>

    <error-option>continue-on-error</error-option>

    <config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Ifmgr xc:operation="merge">

          <Interfaces>

            <Interface>

              <Index>262</Index>

              <Description>222</Description>

                <ConfigSpeed>100</ConfigSpeed>

                <ConfigDuplex>1</ConfigDuplex>

            </Interface>

            <Interface>

              <Index>263</Index>

              <Description>333</Description>

                <ConfigSpeed>100</ConfigSpeed>

                <ConfigDuplex>1</ConfigDuplex>

            </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </config>

  </edit-config>

</rpc>

edit-config: test-option

Determines whether to issue a configuration item in the edit-config operation. The test-option element has one of the following values:

·         test-then-set—Performs a validation test before attempting to set. If the validation test fails, the edit-config operation is not performed. This is the default test-option value.

·         set—Directly performs the set operation without the validation test.

·         test-only—Performs only a validation test without attempting to set. If the validation test succeeds, a successful message is returned. Otherwise, an error message is returned.

To issue the configuration for an interface for test purposes:

<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <edit-config>

    <target>

      <running/>

</target> 

<test-option>test-only</test-option>

    <config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">

      <top xmlns="http://www.h3c.com/netconf/config:1.0">

        <Ifmgr xc:operation="merge">

          <Interfaces>

            <Interface>

              <Index>262</Index>

              <Description>222</Description>

                <ConfigSpeed>100</ConfigSpeed>

                <ConfigDuplex>1</ConfigDuplex>

               </Interface>

          </Interfaces>

        </Ifmgr>

      </top>

    </config>

  </edit-config>

</rpc>

action

Issues actions that are not for configuring data, for example, a reset action.

To clear statistics information for all interfaces:

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <action>

    <top xmlns="http://www.h3c.com/netconf/action:1.0">

      <Ifmgr>

            <ClearAllIfStatistics>

                <Clear>

                </Clear>

        </ClearAllIfStatistics>

      </Ifmgr>

    </top>

  </action>

</rpc>

lock

Locks the configuration data that can be changed by the edit-config operation. Configuration data that cannot be changed by the edit-config operation is not limited by the lock operation.

This lock operation does not lock configurations made through other protocols, for example, SNMP.

To lock the configuration:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

 <lock>

    <target>

        <running/>

    </target>

</lock>

</rpc>

unlock

Unlocks the configuration so that other NETCONF sessions can change the device configuration.

When a NETCONF session is terminated, the related locked configuration is also unlocked.

To unlock the configuration:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<unlock>

    <target>

        <running/>

    </target>

</unlock>

</rpc>

get-sessions

Retrieves information about all NETCONF sessions in the system.

To retrieve information about all NETCONF sessions in the system:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<get-sessions/>

</rpc>

close-session

Terminates the NETCONF session for the current user, unlocks the configuration, and releases the resources (for example, memory) of this session. This operation logs the current user out of XML view.

To terminate the NETCONF session for the current user:

<rpc message-id="101"          xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<close-session />

</rpc>

kill-session

Terminates the NETCONF session for another user. This operation cannot terminate the NETCONF session for the current user.

To terminate the NETCONF session with session-id 1:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<kill-session>

      <session-id>1</session-id>

  </kill-session>

</rpc>

CLI

Executes CLI operations. A request message encloses commands in the <CLI> element, and a response message encloses the command output in the <CLI> element.

NETCONF supports the following views:

·         Execution—User view.

·         Configuration—System view.

To execute a command in other views, specify the command for entering the specified view, and then the desired command.

To execute the display this command in system view:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <CLI>

        <Configuration>display this</Configuration>

  </CLI>

</rpc>

save

Saves the running configuration. You can use the <file> element to specify a file for saving the configuration. If the <file> element does not exist, the running configuration is saved to the main next-startup configuration file.

To save the running configuration to the file test.cfg:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <save>

    <file>test.cfg</file>

  </save>

</rpc>

load

Loads the configuration. After the device finishes the load operation, the configuration in the specified file is merged into the current configuration of the device.

To merge the configuration in the file a1.cfg to the current configuration of the device:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"> <load>

    <file>a1.cfg</file>

  </load>

</rpc>

rollback

Rolls back the configuration. To do so, you must specify the configuration file in the <file> element. After the device finishes the rollback operation, the current device configuration is totally replaced with the configuration in the specified configuration file.

To roll back the current configuration to the configuration in the file 1A.cfg:

<rpc message-id="101"

xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<rollback>

    <file>1A.cfg</file>

</rollback>

</rpc>

 



A

access control

SNMP MIB, 63

SNMP mode, 64

SNMP view-based MIB, 63

accessing

NTP access control, 10

NTP access control rights, 15

address reachability determination (ping), 1

agent

NMM sFlow configuration, 107

SNMP agent host notification, 72

Appendix A (supported NETCONF operations), 234

applying

flow mirroring QoS policy, 103

flow mirroring QoS policy (global), 103

flow mirroring QoS policy (interface), 103

flow mirroring QoS policy (VLAN), 103

architecture

NTP, 8

assigning

port mirroring monitor port to remote probe VLAN, 91

associating

NMM NTP broadcast association mode, 29

NMM NTP broadcast association mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NTP association mode, 13

NTP broadcast association mode, 9, 14

NTP client/server association mode, 9, 13, 26

NTP client/server association mode+authentication, 34

NTP multicast association mode, 9, 15

NTP symmetric active/passive association mode, 9, 13, 28

authenticating

NMM NTP broadcast mode with authentication, 36

NTP, 11

NTP broadcast authentication, 20

NTP client/server mode authentication, 16

NTP client/server mode+authentication, 34

NTP configuration, 16

NTP multicast authentication, 22

NTP security, 10

NTP symmetric active/passive mode authentication, 17

SNTP authentication, 44

B

bidirectional

port mirroring, 82

broadcast

NMM NTP association mode, 29

NMM NTP broadcast association mode with authentication, 36

NTP broadcast association mode, 9, 14, 20

NTP broadcast client configuration, 14

NTP broadcast mode dynamic associations max, 25

NTP broadcast server configuration, 14

buffer

information center log buffer, 54

C

changing

NETCONF parameter value, 216

classifying

port mirroring classification, 83

CLI

EAA configuration, 112, 120

EAA monitor policy configuration, 116, 120

EAA monitor policy configuration (CLI-defined+environment variables), 123

NETCONF CLI operations, 229

NETCONF return to CLI, 233

client

NQA client history record save, 150

NQA client operation (DHCP), 132

NQA client operation (DLSw), 143

NQA client operation (DNS), 133

NQA client operation (FTP), 134

NQA client operation (HTTP), 135

NQA client operation (ICMP echo), 130

NQA client operation (path jitter), 144

NQA client operation (SNMP), 137

NQA client operation (TCP), 138

NQA client operation (UDP echo), 139

NQA client operation (UDP jitter), 136

NQA client operation (UDP tracert), 140

NQA client operation (voice), 141

NQA client operation scheduling, 151

NQA client statistics collection, 150

NQA client template, 151

NQA client template (DNS), 153

NQA client template (FTP), 160

NQA client template (HTTP), 157

NQA client template (HTTPS), 159

NQA client template (ICMP), 152

NQA client template (SSL), 161

NQA client template (TCP half open), 155

NQA client template (TCP), 154

NQA client template (UDP), 156

NQA client template optional parameters, 162

NQA client threshold monitoring, 128, 147

NQA client+Track collaboration, 146

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

NQA enable, 129

NQA operation, 130

NQA operation configuration (DHCP), 168

NQA operation configuration (DLSw), 183

NQA operation configuration (DNS), 169

NQA operation configuration (FTP), 170

NQA operation configuration (HTTP), 171

NQA operation configuration (ICMP echo), 163

NQA operation configuration (path jitter), 184

NQA operation configuration (SNMP), 175

NQA operation configuration (TCP), 176

NQA operation configuration (UDP echo), 177

NQA operation configuration (UDP jitter), 172

NQA operation configuration (UDP tracert), 179

NQA operation configuration (voice), 180

NTP broadcast client configuration, 14

NTP multicast client configuration, 15

SNTP configuration, 12, 43, 45

client/server

NMM NTP client/server mode MPLS VPN time synchronization, 38

NTP association mode, 9, 13

NTP client/server association mode, 16, 26

NTP client/server association mode+authentication, 34

NTP client/server mode dynamic associations max, 25

clock

NTP local clock as reference source, 26

collaborating

NQA client+Track function, 146

NQA+Track collaboration, 127

collector

NMM sFlow configuration, 107

common

information center common logs, 47

conditional match NETCONF data filtering, 223, 228

conditional match NETCONF data filtering (column-based), 225

configuring

EAA, 112, 120

EAA environment variable (user-defined), 116

EAA monitor policy, 116

EAA monitor policy (CLI), 116

EAA monitor policy (CLI-defined), 120

EAA monitor policy (CLI-defined+environment variables), 123

EAA monitor policy (Tcl), 118

EAA monitor policy (Tcl-defined), 124

flow mirroring, 101, 104

flow mirroring match criteria, 102

flow mirroring QoS policy, 102

flow mirroring traffic behavior, 102

information center, 47, 52, 59

information center trace log file max size, 56

Layer 2 remote port mirroring, 89, 98

local port mirroring, 85

local port mirroring (in source CPU mode), 96

local port mirroring (in source port mode), 95

local port mirroring group monitor port, 87

local port mirroring group monitor port (interface view), 87

local port mirroring group monitor port (system view), 87

local port mirroring group source CPUs, 86

local port mirroring group source ports, 86

local port mirroring group source ports (interface view), 86

local port mirroring group source ports (system view), 86

local port mirroring with multiple monitor ports, 97

NETCONF, 199, 202

NMM NTP broadcast association mode, 29

NMM NTP broadcast mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NMM sFlow, 106, 109

NMM sFlow agent, 107

NMM sFlow collector information, 107

NMM sFlow counter sampling, 108

NMM sFlow flow sampling, 107

NMM SNMPv3 (RBAC mode), 77

NMM SNMPv3 (VACM mode), 76

NQA, 126, 128, 163

NQA client history record save, 150

NQA client operation, 130

NQA client operation (DHCP), 132

NQA client operation (DLSw), 143

NQA client operation (DNS), 133

NQA client operation (FTP), 134

NQA client operation (HTTP), 135

NQA client operation (ICMP echo), 130

NQA client operation (path jitter), 144

NQA client operation (SNMP), 137

NQA client operation (TCP), 138

NQA client operation (UDP echo), 139

NQA client operation (UDP jitter), 136

NQA client operation (UDP tracert), 140

NQA client operation (voice), 141

NQA client operation optional parameters, 145

NQA client statistics collection, 150

NQA client template, 151

NQA client template (DNS), 153

NQA client template (FTP), 160

NQA client template (HTTP), 157

NQA client template (HTTPS), 159

NQA client template (ICMP), 152

NQA client template (SSL), 161

NQA client template (TCP half open), 155

NQA client template (TCP), 154

NQA client template (UDP), 156

NQA client template optional parameters, 162

NQA client threshold monitoring, 147

NQA client+Track collaboration, 146

NQA collaboration, 188

NQA collaboration (on router), 186

NQA operation (DHCP), 168

NQA operation (DLSw), 183

NQA operation (DNS), 169

NQA operation (FTP), 170

NQA operation (HTTP), 171

NQA operation (ICMP echo), 163

NQA operation (path jitter), 184

NQA operation (SNMP), 175

NQA operation (TCP), 176

NQA operation (UDP echo), 177

NQA operation (UDP jitter), 172

NQA operation (UDP tracert), 179

NQA operation (voice), 180

NQA server, 128

NQA template (DNS), 191

NQA template (FTP), 196

NQA template (HTTP), 195

NQA template (HTTPS), 195

NQA template (ICMP), 190

NQA template (SSL), 197

NQA template (TCP half open), 193

NQA template (TCP), 192

NQA template (UDP), 194

NTP, 7, 12, 26

NTP access control rights, 15

NTP association mode, 13

NTP broadcast association mode, 14

NTP broadcast client, 14

NTP broadcast mode authentication, 20

NTP broadcast server, 14

NTP client/server association mode, 13, 26

NTP client/server mode authentication, 16

NTP client/server mode+authentication, 34

NTP dynamic associations max, 25

NTP local clock as reference source, 26

NTP multicast association mode, 15

NTP multicast client, 15

NTP multicast mode authentication, 22

NTP multicast server, 15

NTP optional parameters, 24

NTP symmetric active/passive association mode, 13, 28

NTP symmetric active/passive mode authentication, 17

port mirroring, 95

port mirroring remote destination group monitor port, 90

port mirroring remote destination group remote probe VLAN, 91

remote port mirroring destination group, 90

remote port mirroring source group, 92

remote port mirroring source group egress port, 93

remote port mirroring source group remote probe VLAN, 94

remote port mirroring source group source CPUs, 93

remote port mirroring source group source ports, 92

sampler, 81

SNMP, 63

SNMP agent host notification, 72

SNMP basics, 65

SNMP logging, 71

SNMP notification, 71

SNMPv1, 74

SNMPv1 agent host notification, 72

SNMPv1 basics, 65

SNMPv2c, 74

SNMPv2c agent host notification, 72

SNMPv2c basics, 65

SNMPv3 agent host notification, 72

SNMPv3 basics, 67

SNTP, 12, 43, 45

SNTP authentication, 44

console

information center log output, 52, 59

controlling

NTP access control rights, 15

CPU

flow mirroring configuration, 101, 104

creating

local port mirroring group, 85

port mirroring remote destination group, 90

port mirroring remote source group, 92

sampler, 81

D

data

NETCONF configuration data retrieval (all modules), 212

NETCONF configuration data retrieval (Syslog module), 214

NETCONF data entry retrieval (interface table), 215

NETCONF filtering (conditional match), 223, 228

NETCONF filtering (full match), 223

NETCONF filtering (regex match), 223, 226

NMM NETCONF filtering (column-based), 224

NMM NETCONF filtering (column-based) (conditional match), 225

NMM NETCONF filtering (column-based) (full match), 224

NMM NETCONF filtering (column-based) (regex match), 225

NMM NETCONF filtering (table-based), 224

debugging

feature module, 6

information control module debugging switch, 5

information control screen output switch, 5

system maintenance, 1

default

information center log default output rules, 48

system information diagnostic log output rules, 48

system information hidden log output rules, 49

system information trace log output rules, 49

destination

destination device, 82

information center system logs, 48

port mirroring, 82

determining

ping address reachability determination, 1

device

information center configuration, 47, 52, 59

information center log output (console), 59, 59

information center log output (Linux log host), 61

information center log output (UNIX log host), 59

information center system log types, 47

Layer 2 remote port mirroring, 98

Layer 2 remote port mirroring configuration, 89

local port mirroring configuration, 85

local port mirroring configuration (in source CPU mode), 96

local port mirroring configuration (in source port mode), 95

local port mirroring configuration with multiple monitor ports, 97

local port mirroring group monitor port, 87

NETCONF capability exchange, 204

NETCONF CLI operations, 229

NETCONF configuration, 199, 202

NETCONF configuration lock/unlock, 207, 208

NETCONF edit-config operation, 212

NETCONF get/get-bulk operation, 210

NETCONF get-config/get-bulk-config operation, 211

NETCONF save-point/begin operation, 219

NETCONF save-point/commit operation, 219

NETCONF save-point/end operation, 220

NETCONF save-point/get-commit-information operation, 221

NETCONF save-point/get-commits operation, 220

NETCONF save-point/rollback operation, 220

NETCONF service operations, 209

NETCONF session information retrieval, 230

NETCONF session termination, 232

NMM NTP broadcast association mode, 29

NMM NTP broadcast mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP MPLS L3VPN support, 11

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NMM SNMPv3 configuration (RBAC mode), 77

NMM SNMPv3 configuration (VACM mode), 76

NQA client operation, 130

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

NQA operation configuration (DHCP), 168

NQA operation configuration (DNS), 169

NQA server, 128

NTP architecture, 8

port mirroring configuration, 82, 95

port mirroring remote destination group, 90

port mirroring remote source group, 92

port mirroring remote source group egress port, 93

port mirroring remote source group remote probe VLAN, 94

port mirroring remote source group source CPUs, 93

port mirroring remote source group source ports, 92

SNMP basics configuration, 65

SNMP configuration, 63

SNMP MIB, 63

SNMP notification, 71

SNMP view-based MIB access control, 63

SNMPv1 basics configuration, 65

SNMPv1 configuration, 74

SNMPv2c basics configuration, 65

SNMPv2c configuration, 74

SNMPv3 basics configuration, 67

DHCP

NQA client operation, 132

NQA operation configuration, 168

diagnosing

information center diagnostic log, 47

information center diagnostic log save (log file), 55

direction

port mirroring (bidirectional), 82

port mirroring (inbound), 82

port mirroring (outbound), 82

disabling

information center interface link up/link down log generation, 57

NTP message processing, 25

displaying

EAA settings, 120

information center, 58

NMM sFlow, 108

NQA, 163

NTP, 26

port mirroring, 94

sampler, 81

SNMP settings, 73

SNTP, 45

DLSw

NQA client operation, 143

NQA operation configuration, 183

DNS

NQA client operation, 133

NQA client template, 153

NQA operation configuration, 169

NQA template configuration, 191

DSCP

NTP packet value setting, 25

duplicate log suppression, 57

dynamic

NTP dynamic associations max, 25

E

EAA

configuration, 112, 120

environment variable configuration (user-defined), 116

event monitor, 112

event monitor policy action, 114

event monitor policy element, 113

event monitor policy environment variable, 114

event monitor policy event types, 113

event monitor policy runtime, 114

event monitor policy user role, 114

event source, 112

how it works, 112

monitor policy, 113

monitor policy configuration, 116

monitor policy configuration (CLI), 116

monitor policy configuration (CLI-defined), 120

monitor policy configuration (CLI-defined+environment variables), 123

monitor policy configuration (Tcl), 118

monitor policy configuration (Tcl-defined), 124

monitor policy configuration restrictions, 116

monitor policy suspension, 119

RTM, 113

settings display, 120

echo

NQA client operation (ICMP echo), 130

NQA operation configuration (ICMP echo), 163

egress port

Layer 2 remote port mirroring, 83, 98

port mirroring remote source group egress port, 93

Embedded Automation Architecture. Use EAA

enabling

information center duplicate log suppression, 57

information center synchronous output, 56

NETCONF logging, 203

NETCONF over SOAP, 202

NETCONF over SSH, 203

NQA client, 129

NTP, 13

SNMP notification, 71

SNTP, 43

entering

NETCONF XML view, 204

environment

EAA environment variable configuration (user-defined), 116

EAA event monitor policy environment variable, 114

establishing

NETCONF session, 203

Ethernet

Layer 2 remote port mirroring configuration, 89

NMM sFlow configuration, 106, 109

port mirroring configuration, 82, 95

sampler configuration, 81

event

EAA configuration, 112, 120

EAA environment variable configuration (user-defined), 116

EAA event monitor, 112

EAA event monitor policy element, 113

EAA event monitor policy environment variable, 114

EAA event source, 112

EAA monitor policy, 113

NETCONF event notification subscription, 205, 206

exchanging

NETCONF capabilities, 204

F

file

information center diagnostic log output destination, 55

information center log save (log file), 54

filtering

NETCONF data (conditional match), 228

NETCONF data (regex match), 226

NMM NETCONF data filtering (column-based), 224

NMM NETCONF data filtering (table-based), 224

FIPS compliance

information center, 52

NETCONF, 202

SNMP, 63

fixed mode (NMM sampler), 81

flow

mirroring. See flow mirroring

NMM sFlow configuration, 106, 109

Sampled Flow. Use sFlow

flow mirroring

configuration, 101, 104

match criteria configuration, 102

QoS policy application, 103

QoS policy application (global), 103

QoS policy application (interface), 103

QoS policy application (VLAN), 103

QoS policy configuration, 102

traffic behavior configuration, 102

format

information center system logs, 49

NETCONF message, 200

FTP

NQA client operation, 134

NQA client template, 160

NQA operation configuration, 170

NQA template configuration, 196

full match NETCONF data filtering, 223

full match NETCONF data filtering (column-based), 224

G

generating

information center interface link up/link down log generation, 57

get operation

NETCONF get/get-bulk, 210

NETCONF get-config/get-bulk-config, 211

SNMP, 64

SNMP logging, 71

group

local port mirroring group monitor port, 87

local port mirroring group source CPU, 86

local port mirroring group source port, 86

port mirroring group, 83

H

hidden log (information center), 47

history

NQA client history record save, 150

host

information center log output (log host), 54

SNMP agent host notification, 72

HTTP

NETCONF over SOAP (HTTP-based), 202

NETCONF over SOAP (HTTPS-based), 202

NQA client operation, 135

NQA client template, 157

NQA operation configuration, 171

NQA template configuration, 195

HTTPS

NQA client template (HTTPS), 159

NQA template configuration, 195

I

ICMP

NQA client operation (ICMP echo), 130

NQA client template, 152

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

NQA operation configuration (ICMP echo), 163

NQA template configuration, 190

ping command, 1

identifying

node failure with tracert, 4

implementing

local port mirroring, 83

port mirroring, 83

remote port mirroring, 84

inbound

port mirroring, 82

information center

configuration, 47, 52, 59

diagnostic log default output rules, 48

diagnostic log save (log file), 55

display, 58

duplicate log suppression, 57

FIPS compliance, 52

hidden log default output rules, 49

interface link up/link down log generation, 57

log default output rules, 48

log output (console), 52, 59

log output (Linux log host), 61

log output (log buffer), 54

log output (log host), 54

log output (monitor terminal), 53

log output (UNIX log host), 59

log save (log file), 54

maintain, 58

synchronous log output, 56

system information log types, 47

system log destinations, 48

system log formats, 49

system log levels, 47

trace log default output rules, 49

trace log file max size, 56

Internet

NMM SNMPv3 configuration (RBAC mode), 77

NMM SNMPv3 configuration (VACM mode), 76

NQA configuration, 126, 128, 163

SNMP basics configuration, 65

SNMP configuration, 63

SNMP MIB, 63

SNMPv1 basics configuration, 65

SNMPv2c basics configuration, 65

SNMPv3 basics configuration, 67

interval

sampler creation, 81

IP addressing

tracert, 3, 4

tracert node failure identification, 4

IP services

NQA client history record save, 150

NQA client operation (DHCP), 132

NQA client operation (DLSw), 143

NQA client operation (DNS), 133

NQA client operation (FTP), 134

NQA client operation (HTTP), 135

NQA client operation (ICMP echo), 130

NQA client operation (path jitter), 144

NQA client operation (SNMP), 137

NQA client operation (TCP), 138

NQA client operation (UDP echo), 139

NQA client operation (UDP jitter), 136

NQA client operation (UDP tracert), 140

NQA client operation (voice), 141

NQA client operation optional parameters, 145

NQA client operation scheduling, 151

NQA client statistics collection, 150

NQA client template (DNS), 153

NQA client template (FTP), 160

NQA client template (HTTP), 157

NQA client template (HTTPS), 159

NQA client template (ICMP), 152

NQA client template (SSL), 161

NQA client template (TCP half open), 155

NQA client template (TCP), 154

NQA client template (UDP), 156

NQA client template optional parameters, 162

NQA client threshold monitoring, 147

NQA client+Track collaboration, 146

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

NQA configuration, 126, 128, 163

NQA operation configuration (DHCP), 168

NQA operation configuration (DLSw), 183

NQA operation configuration (DNS), 169

NQA operation configuration (FTP), 170

NQA operation configuration (HTTP), 171

NQA operation configuration (ICMP echo), 163

NQA operation configuration (path jitter), 184

NQA operation configuration (SNMP), 175

NQA operation configuration (TCP), 176

NQA operation configuration (UDP echo), 177

NQA operation configuration (UDP jitter), 172

NQA operation configuration (UDP tracert), 179

NQA operation configuration (voice), 180

NQA template configuration (DNS), 191

NQA template configuration (FTP), 196

NQA template configuration (HTTP), 195

NQA template configuration (HTTPS), 195

NQA template configuration (ICMP), 190

NQA template configuration (SSL), 197

NQA template configuration (TCP half open), 193

NQA template configuration (TCP), 192

NQA template configuration (UDP), 194

L

Layer 2

port mirroring configuration, 82, 95

remote port mirroring, 98

remote port mirroring configuration, 89

Layer 3

tracert, 3, 4

tracert node failure identification, 4

level

information center system logs, 47

link

information center interface link up/link down log generation, 57

Linux

information center log host output, 61

loading

NETCONF configuration, 217, 222

local

NTP local clock as reference source, 26

port mirroring, 83

port mirroring configuration, 85

port mirroring group creation, 85

port mirroring group monitor port, 87

port mirroring group source CPU, 86

port mirroring group source port, 86

local port mirroring

configuration, 97

configuration (in source CPU mode), 96

configuration (in source port mode), 95

locking

NETCONF configuration, 207, 208

logging

information center common logs, 47

information center configuration, 47, 52, 59

information center diagnostic log save (log file), 55

information center diagnostic logs, 47

information center duplicate log suppression, 57

information center hidden logs, 47

information center interface link up/link down log generation, 57

information center log default output rules, 48

information center log output (console), 52, 59

information center log output (Linux log host), 61

information center log output (log buffer), 54

information center log output (log host), 54

information center log output (monitor terminal), 53

information center log output (UNIX log host), 59

information center log save (log file), 54

information center synchronous log output, 56

information center system log destinations, 48

information center system log formats, 49

information center system log levels, 47

information center trace log file max size, 56

NETCONF logging enable, 203

SNMP configuration, 71

system information diagnostic log default output rules, 48

system information hidden log default output rules, 49

system information trace log default output rules, 49

M

maintaining

information center, 58

Management Information Base. Use MIB

matching

flow mirroring match criteria, 102

NETCONF data filtering (conditional match), 223, 228

NETCONF data filtering (full match), 223

NETCONF data filtering (regex match), 223, 226

NETCONF data filtering (table-based match), 223

NMM NETCONF data filtering (column-based), 224

NMM NETCONF data filtering (column-based) (conditional match), 225

NMM NETCONF data filtering (column-based) (full match), 224

NMM NETCONF data filtering (column-based) (regex match), 225

NMM NETCONF data filtering (table-based), 224

message

NETCONF format, 200

NTP message processing disable, 25

NTP message source interface, 24

MIB

SNMP, 63, 63

SNMP get operation, 64

SNMP set operation, 64

SNMP view-based access control, 63

mirroring

flow. See flow mirroring

port. See port mirroring

mode

NTP association, 13

NTP broadcast association, 9, 14

NTP client/server association, 9, 13

NTP multicast association, 9, 15

NTP symmetric active/passive association, 9, 13

sampler fixed, 81

SNMP access control (rule-based), 64

SNMP access control (view-based), 64

module

feature module debug, 6

information center configuration, 47, 52, 59

NETCONF configuration data retrieval (all modules), 212

NETCONF configuration data retrieval (Syslog module), 214

NETCONF data entry retrieval (interface table), 215

module debugging switch, 5

monitor terminal

information center log output, 53

monitoring

configuring local mirroring to support multiple monitor ports, 88

EAA configuration, 112

EAA environment variable configuration (user-defined), 116

NQA client threshold monitoring, 147

NQA threshold monitoring, 128

MPLS L3VPN

NMM NTP support, 11

multicast

NMM NTP multicast association mode, 31

NTP multicast association mode, 9, 15

NTP multicast client configuration, 15

NTP multicast mode authentication, 22

NTP multicast mode dynamic associations max, 25

NTP multicast server configuration, 15

N

NETCONF

capability exchange, 204

CLI operations, 229, 229

CLI return, 233

configuration, 199, 202

configuration data retrieval (all modules), 212

configuration data retrieval (Syslog module), 214

configuration load, 217, 222

configuration lock/unlock, 207, 208

configuration rollback, 217

configuration rollback (configuration-file-based), 218

configuration rollback (rollback-point-based), 218

configuration save, 217, 223

data entry retrieval (interface table), 215

data filtering, 223

data filtering (conditional match), 228

data filtering (regex match), 226

edit-config operation, 212

event notification subscription, 205, 206

FIPS compliance, 202

get/get-bulk operation, 210

get-config/get-bulk-config operation, 211

message format, 200

NETCONF logging enable, 203

NETCONF over SSH enable, 203

over SOAP, 200

over SOAP enable, 202

parameter value change, 216

protocols and standards, 201

save-point/begin operation, 219

save-point/commit operation, 219

save-point/end operation, 220

save-point/get-commit-information operation, 221

save-point/get-commits operation, 220

save-point/rollback operation, 220

service operations, 209

session establishment, 203

session idle timeout time set, 204

session information retrieval, 230

session termination, 232

supported operations, 234

XML view, 204

NetStream

sampler configuration, 81

sampler creation, 81

network

feature module debug, 6

flow mirroring match criteria, 102

flow mirroring QoS policy, 102

flow mirroring QoS policy application, 103

flow mirroring QoS policy application (global), 103

flow mirroring QoS policy application (interface), 103

flow mirroring QoS policy application (VLAN), 103

flow mirroring traffic behavior, 102

information center diagnostic log save (log file), 55

information center duplicate log suppression, 57

information center interface link up/link down log generation, 57

information center log output (console), 59

information center log output (Linux log host), 61

information center log output (UNIX log host), 59

information center synchronous log output, 56

information center system log types, 47

information center trace log file max size, 56

Layer 2 remote port mirroring, 98

Layer 2 remote port mirroring configuration, 89

local port mirroring configuration, 85

local port mirroring configuration (in source CPU mode), 96

local port mirroring configuration (in source port mode), 95

local port mirroring configuration with multiple monitor ports, 97

local port mirroring group monitor port, 87

local port mirroring group source CPU, 86

local port mirroring group source port, 86

Network Configuration Protocol. Use NETCONF

NMM NTP MPLS L3VPN support, 11

NMM sFlow counter sampling configuration, 108

NMM sFlow flow sampling configuration, 107

NQA client history record save, 150

NQA client operation, 130

NQA client operation (DHCP), 132

NQA client operation (DLSw), 143

NQA client operation (DNS), 133

NQA client operation (FTP), 134

NQA client operation (HTTP), 135

NQA client operation (ICMP echo), 130

NQA client operation (path jitter), 144

NQA client operation (SNMP), 137

NQA client operation (TCP), 138

NQA client operation (UDP echo), 139

NQA client operation (UDP jitter), 136

NQA client operation (UDP tracert), 140

NQA client operation (voice), 141

NQA client operation optional parameters, 145

NQA client operation scheduling, 151

NQA client statistics collection, 150

NQA client template, 151

NQA client threshold monitoring, 147

NQA client+Track collaboration, 146

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

NQA operation configuration (DHCP), 168

NQA operation configuration (DLSw), 183

NQA operation configuration (DNS), 169

NQA operation configuration (FTP), 170

NQA operation configuration (HTTP), 171

NQA operation configuration (ICMP echo), 163

NQA operation configuration (path jitter), 184

NQA operation configuration (SNMP), 175

NQA operation configuration (TCP), 176

NQA operation configuration (UDP echo), 177

NQA operation configuration (UDP jitter), 172

NQA operation configuration (UDP tracert), 179

NQA operation configuration (voice), 180

NQA server, 128

NQA template configuration (DNS), 191

NQA template configuration (FTP), 196

NQA template configuration (HTTP), 195

NQA template configuration (HTTPS), 195

NQA template configuration (ICMP), 190

NQA template configuration (SSL), 197

NQA template configuration (TCP half open), 193

NQA template configuration (TCP), 192

NQA template configuration (UDP), 194

NTP authentication, 16

NTP dynamic associations max, 25

ping address reachability determination, 1

ping connectivity test, 1

port mirroring remote destination group, 90

port mirroring remote source group, 92

port mirroring remote source group egress port, 93

port mirroring remote source group remote probe VLAN, 94

port mirroring remote source group source CPUs, 93

port mirroring remote source group source ports, 92

quality analyzer. See NQA

SNMPv1 basics configuration, 65

SNMPv2c basics configuration, 65

SNMPv3 basics configuration, 67

SNTP NTP server specification, 43

tracert node failure identification, 4

network management

EAA configuration, 112, 120

flow mirroring configuration, 101, 104

information center configuration, 47, 52, 59

NETCONF configuration, 199

NMM NTP broadcast association mode, 29

NMM NTP broadcast mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NMM sFlow configuration, 106, 109

NMM SNMPv3 configuration (RBAC mode), 77

NMM SNMPv3 configuration (VACM mode), 76

NQA configuration, 126, 128, 163

NTP configuration, 7, 12, 26

ping, 1

port mirroring configuration, 82, 95

sampler configuration, 81

sampler creation, 81

SNMP configuration, 63

SNMPv1 configuration, 74

SNMPv2c configuration, 74

system debugging, 1

tracert, 1

Network Time Protocol. Use NTP

NMM

EAA configuration, 112, 120

EAA environment variable configuration (user-defined), 116

EAA event monitor, 112

EAA event monitor policy element, 113

EAA event monitor policy environment variable, 114

EAA event source, 112

EAA monitor policy, 113

EAA monitor policy configuration, 116

EAA monitor policy configuration (CLI), 116

EAA monitor policy configuration (CLI-defined), 120

EAA monitor policy configuration (CLI-defined+environment variables), 123

EAA monitor policy configuration (Tcl), 118

EAA monitor policy configuration (Tcl-defined), 124

EAA monitor policy suspension, 119

EAA RTM, 113

EAA settings display, 120

feature module debug, 6

flow mirroring configuration, 101, 104

flow mirroring match criteria, 102

flow mirroring QoS policy, 102

flow mirroring QoS policy application, 103

flow mirroring QoS policy application (global), 103

flow mirroring QoS policy application (interface), 103

flow mirroring QoS policy application (VLAN), 103

flow mirroring traffic behavior, 102

information center configuration, 47, 52, 59

information center diagnostic log save (log file), 55

information center display, 58

information center duplicate log suppression, 57

information center interface link up/link down log generation, 57

information center log default output rules, 48

information center log destinations, 48

information center log formats, 49

information center log levels, 47

information center log output (console), 52, 59

information center log output (Linux log host), 61

information center log output (log buffer), 54

information center log output (log host), 54

information center log output (monitor terminal), 53

information center log output (UNIX log host), 59

information center log save (log file), 54

information center maintain, 58

information center synchronous log output, 56

information center system log types, 47

information center trace log file max size, 56

Layer 2 remote port mirroring, 98

Layer 2 remote port mirroring configuration, 89

local port mirroring configuration, 85

local port mirroring group, 85

local port mirroring group monitor port, 87

local port mirroring group source CPU, 86

local port mirroring group source port, 86

NETCONF capability exchange, 204

NETCONF CLI operations, 229

NETCONF CLI return, 233

NETCONF configuration, 199, 202

NETCONF configuration data retrieval (all modules), 212

NETCONF configuration data retrieval (Syslog module), 214

NETCONF configuration load, 217

NETCONF configuration lock/unlock, 207, 208

NETCONF configuration rollback, 217

NETCONF configuration save, 217

NETCONF data entry retrieval (interface table), 215

NETCONF data filtering, 223

NETCONF edit-config operation, 212

NETCONF event notification subscription, 205, 206

NETCONF get/get-bulk operation, 210

NETCONF get-config/get-bulk-config operation, 211

NETCONF over SOAP enable, 202

NETCONF parameter value change, 216

NETCONF save-point/begin operation, 219

NETCONF save-point/commit operation, 219

NETCONF save-point/end operation, 220

NETCONF save-point/get-commit-information operation, 221

NETCONF save-point/get-commits operation, 220

NETCONF save-point/rollback operation, 220

NETCONF service operations, 209

NETCONF session establishment, 203

NETCONF session information retrieval, 230

NETCONF session termination, 232

NETCONF supported operations, 234

NQA client history record save, 150

NQA client operation, 130

NQA client operation (DHCP), 132

NQA client operation (DLSw), 143

NQA client operation (DNS), 133

NQA client operation (FTP), 134

NQA client operation (HTTP), 135

NQA client operation (ICMP echo), 130

NQA client operation (path jitter), 144

NQA client operation (SNMP), 137

NQA client operation (TCP), 138

NQA client operation (UDP echo), 139

NQA client operation (UDP jitter), 136

NQA client operation (UDP tracert), 140

NQA client operation (voice), 141

NQA client operation optional parameters, 145

NQA client operation scheduling, 151

NQA client statistics collection, 150

NQA client template, 151

NQA client template (DNS), 153

NQA client template (FTP), 160

NQA client template (HTTP), 157

NQA client template (HTTPS), 159

NQA client template (ICMP), 152

NQA client template (SSL), 161

NQA client template (TCP half open), 155

NQA client template (TCP), 154

NQA client template (UDP), 156

NQA client template optional parameters, 162

NQA client threshold monitoring, 147

NQA client+Track collaboration, 146

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

NQA configuration, 126, 128, 163

NQA display, 163

NQA operation configuration (DHCP), 168

NQA operation configuration (DLSw), 183

NQA operation configuration (DNS), 169

NQA operation configuration (FTP), 170

NQA operation configuration (HTTP), 171

NQA operation configuration (ICMP echo), 163

NQA operation configuration (path jitter), 184

NQA operation configuration (SNMP), 175

NQA operation configuration (TCP), 176

NQA operation configuration (UDP echo), 177

NQA operation configuration (UDP jitter), 172

NQA operation configuration (UDP tracert), 179

NQA operation configuration (voice), 180

NQA server, 128

NQA template configuration (DNS), 191

NQA template configuration (FTP), 196

NQA template configuration (HTTP), 195

NQA template configuration (HTTPS), 195

NQA template configuration (ICMP), 190

NQA template configuration (SSL), 197

NQA template configuration (TCP half open), 193

NQA template configuration (TCP), 192

NQA template configuration (UDP), 194

NQA threshold monitoring, 128

NQA+Track collaboration, 127

NTP access control rights, 15

NTP architecture, 8

NTP association mode, 13

NTP authentication configuration, 16

NTP broadcast association mode configuration, 14, 29

NTP broadcast mode authentication configuration, 20

NTP broadcast mode with authentication, 36

NTP client/server association mode configuration, 26

NTP client/server mode authentication configuration, 16

NTP client/server mode with MPLS VPN time synchronization, 38

NTP client/server mode+authentication, 34

NTP configuration, 7, 12, 26

NTP display, 26

NTP dynamic associations max, 25

NTP enable, 13

NTP local clock as reference source, 26

NTP message processing disable, 25

NTP message source interface specification, 24

NTP multicast association mode, 15

NTP multicast association mode configuration, 31

NTP multicast mode authentication configuration, 22

NTP optional parameter configuration, 24

NTP packet DSCP value setting, 25

NTP protocols and standards, 12

NTP security, 10

NTP symmetric active/passive association mode configuration, 28

NTP symmetric active/passive mode authentication configuration, 17

NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

ping address reachability determination, 1

ping connectivity test, 1

port mirroring classification, 83

port mirroring configuration, 82, 95

port mirroring display, 94

port mirroring implementation, 83

port mirroring remote destination group, 90

port mirroring remote source group, 92

sampler configuration, 81

sampler creation, 81

sFlow agent configuration, 107

sFlow collector information configuration, 107

sFlow configuration, 106, 109

sFlow counter sampling configuration, 108

sFlow flow sampling configuration, 107

SNMP access control mode, 64

SNMP agent host notification, 72

SNMP basics configuration, 65

SNMP configuration, 63

SNMP framework, 63

SNMP get operation, 64

SNMP logging configuration, 71

SNMP MIB, 63

SNMP notification, 71

SNMP protocol versions, 64

SNMP settings display, 73

SNMP view-based MIB access control, 63

SNMPv1 configuration, 74

SNMPv2c configuration, 74

SNMPv3 configuration (RBAC mode), 77

SNMPv3 configuration (VACM mode), 76

SNTP authentication, 44

SNTP configuration, 12, 43, 45

SNTP display, 45

SNTP enable, 43

SNTP NTP server specification, 43

system debugging, 1, 5

system information diagnostic log default output rules, 48

system information hidden log default output rules, 49

system information trace log default output rules, 49

system maintenance, 1

tracert, 3, 4

tracert node failure identification, 4

NMS

SNMP Notification operation, 64

SNMP protocol versions, 64

SNMP set operation, 64, 64

notifying

NETCONF event notification subscription, 205, 206

SNMP agent host notification, 72

SNMP configuration, 63

SNMP notification, 71

SNMP Notification operation, 64

NQA

client enable, 129

client history record save, 150

client operation, 130

client operation (DHCP), 132

client operation (DLSw), 143

client operation (DNS), 133

client operation (FTP), 134

client operation (HTTP), 135

client operation (ICMP echo), 130

client operation (path jitter), 144

client operation (SNMP), 137

client operation (TCP), 138

client operation (UDP echo), 139

client operation (UDP jitter), 136

client operation (UDP tracert), 140

client operation (voice), 141

client operation optional parameters, 145

client operation scheduling, 151

client statistics collection, 150

client template (DNS), 153

client template (FTP), 160

client template (HTTP), 157

client template (HTTPS), 159

client template (ICMP), 152

client template (SSL), 161

client template (TCP half open), 155

client template (TCP), 154

client template (UDP), 156

client template configuration, 151

client template optional parameters, 162

client threshold monitoring, 147

client+Track collaboration, 146

collaboration configuration, 188

collaboration configuration (on router), 186

configuration, 126, 128, 163

display, 163

how it works, 126

operation configuration (DHCP), 168

operation configuration (DLSw), 183

operation configuration (DNS), 169

operation configuration (FTP), 170

operation configuration (HTTP), 171

operation configuration (ICMP echo), 163

operation configuration (path jitter), 184

operation configuration (SNMP), 175

operation configuration (TCP), 176

operation configuration (UDP echo), 177

operation configuration (UDP jitter), 172

operation configuration (UDP tracert), 179

operation configuration (voice), 180

server configuration, 128

supported operations, 126

template configuration (DNS), 191

template configuration (FTP), 196

template configuration (HTTP), 195

template configuration (HTTPS), 195

template configuration (ICMP), 190

template configuration (SSL), 197

template configuration (TCP half open), 193

template configuration (TCP), 192

template configuration (UDP), 194

threshold monitoring, 128

Track collaboration function, 127

NTP

access control, 10

access control rights configuration, 15

architecture, 8

association mode configuration, 13

authentication, 11

authentication configuration, 16

broadcast association mode, 9

broadcast association mode configuration, 14, 29

broadcast client configuration, 14

broadcast mode authentication configuration, 20

broadcast mode dynamic associations max, 25

broadcast mode with authentication, 36

broadcast server configuration, 14

client/server association mode, 9

client/server association mode configuration, 13, 26

client/server mode authentication configuration, 16

client/server mode dynamic associations max, 25

client/server mode with MPLS VPN time synchronization, 38

client/server mode+authentication, 34

configuration, 7, 12, 26

configuration restrictions, 12

display, 26

enable, 13

how it works, 7

local clock as reference source, 26

message processing disable, 25

message source interface specification, 24

MPLS L3VPN support, 11

multicast association mode, 9

multicast association mode configuration, 15, 31

multicast client configuration, 15

multicast mode authentication configuration, 22

multicast mode dynamic associations max, 25

multicast server configuration, 15

optional parameter configuration, 24

packet DSCP value setting, 25

protocols and standards, 12

security, 10

SNTP authentication, 44

SNTP configuration, 12, 43, 45

SNTP server specification, 43

symmetric active/passive association mode, 9

symmetric active/passive association mode configuration, 13, 28

symmetric active/passive mode authentication configuration, 17

symmetric active/passive mode dynamic associations max, 25

symmetric active/passive mode with MPLS VPN time synchronization, 40

O

outbound

port mirroring, 82

outputting

information center log default output rules, 48

information center logs (Linux log host), 61

information center logs (log buffer), 54

information center logs (UNIX log host), 59

information center synchronous log output, 56

information logs (console), 52, 59

information logs (log host), 54

information logs (monitor terminal), 53

P

packet

flow mirroring configuration, 101, 104

flow mirroring match criteria, 102

flow mirroring QoS policy, 102

flow mirroring QoS policy application, 103

flow mirroring QoS policy application (global), 103

flow mirroring QoS policy application (interface), 103

flow mirroring QoS policy application (VLAN), 103

flow mirroring traffic behavior, 102

NTP DSCP value setting, 25

port mirroring configuration, 82, 95

sampler configuration, 81

sampler creation, 81

SNTP configuration, 12, 43, 45

parameter

NETCONF parameter value change, 216

NQA client history record save, 150

NQA client operation optional parameters, 145

NQA client template optional parameters, 162

NTP dynamic associations max, 25

NTP local clock as reference source, 26

NTP message processing disable, 25

NTP message source interface, 24

NTP optional parameter configuration, 24

SNMP basics configuration, 65

SNMPv1 basics configuration, 65

SNMPv2c basics configuration, 65

SNMPv3 basics configuration, 67

path

NQA client operation (path jitter), 144

NQA operation configuration, 184

performing

NETCONF CLI operations, 229

NETCONF edit-config operation, 212

NETCONF get/get-bulk operation, 210

NETCONF get-config/get-bulk-config operation, 211

NETCONF save-point/begin operation, 219

NETCONF save-point/commit operation, 219

NETCONF save-point/end operation, 220

NETCONF save-point/get-commit-information operation, 221

NETCONF save-point/get-commits operation, 220

NETCONF save-point/rollback operation, 220

NETCONF service operations, 209

ping

address reachability determination, 1

network connectivity test, 1

system maintenance, 1

policy

EAA configuration, 112, 120

EAA environment variable configuration (user-defined), 116

EAA event monitor policy element, 113

EAA event monitor policy environment variable, 114

EAA monitor policy, 113

EAA monitor policy configuration, 116

EAA monitor policy configuration (CLI), 116

EAA monitor policy configuration (CLI-defined), 120

EAA monitor policy configuration (CLI-defined+environment variables), 123

EAA monitor policy configuration (Tcl), 118

EAA monitor policy configuration (Tcl-defined), 124

EAA monitor policy suspension, 119

flow mirroring QoS policy, 102

flow mirroring QoS policy application, 103

flow mirroring QoS policy application (global), 103

flow mirroring QoS policy application (interface), 103

flow mirroring QoS policy application (VLAN), 103

port

configuring local mirroring to support multiple monitor ports, 88

mirroring. See port mirroring

NMM NTP broadcast association mode, 29

NMM NTP broadcast mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NTP association mode, 13

NTP client/server association mode, 26

NTP client/server mode+authentication, 34

NTP configuration, 7, 12, 26

NTP symmetric active/passive association mode, 28

SNTP configuration, 12, 43, 45

port mirroring

classification, 83

configuration, 82, 95

configuring local mirroring to support multiple monitor ports, 88

destination, 82

destination device, 82

direction (bidirectional), 82

direction (inbound), 82

direction (outbound), 82

display, 94

egress port, 83

implementation, 83

Layer 2 remote configuration, 89

Layer 2 remote port mirroring configuration, 98

local, 83

local configuration, 85

local group creation, 85

local group monitor port, 87

local group monitor port configuration restrictions, 87

local group source CPU, 86

local group source port, 86

local group source port configuration restrictions, 86

local mirroring configuration (in source CPU mode), 96

local mirroring configuration (in source port mode), 95

local mirroring configuration with multiple monitor ports, 97

local mirroring supporting multiple monitors configuration restrictions, 88

mirroring group, 83

monitor port to remote probe VLAN assignment, 91

reflector port, 83

remote, 84

remote destination group configuration, 90

remote destination group creation, 90

remote destination group monitor port, 90

remote destination group remote probe VLAN, 91

remote probe VLAN, 83

remote source group configuration, 92

source, 82

source device, 82

terminology, 82

procedure

applying flow mirroring QoS policy, 103

applying flow mirroring QoS policy (global), 103

applying flow mirroring QoS policy (interface), 103

applying flow mirroring QoS policy (VLAN), 103

changing NETCONF parameter value, 216

configuring EAA environment variable (user-defined), 116

configuring EAA monitor policy, 116

configuring EAA monitor policy (CLI), 116

configuring EAA monitor policy (CLI-defined), 120

configuring EAA monitor policy (CLI-defined+environment variables), 123

configuring EAA monitor policy (Tcl), 118

configuring EAA monitor policy (Tcl-defined), 124

configuring flow mirroring, 101, 104

configuring flow mirroring match criteria, 102

configuring flow mirroring QoS policy, 102

configuring flow mirroring traffic behavior, 102

configuring information center, 52

configuring information center trace log file max size, 56

configuring Layer 2 remote port mirroring, 89, 98

configuring local mirroring to support multiple monitor ports, 88

configuring local port mirroring, 85

configuring local port mirroring (in source CPU mode), 96

configuring local port mirroring (in source port mode), 95

configuring local port mirroring group monitor port, 87

configuring local port mirroring group monitor port (interface view), 87

configuring local port mirroring group monitor port (system view), 87

configuring local port mirroring group source CPUs, 86

configuring local port mirroring group source ports, 86

configuring local port mirroring group source ports (interface view), 86

configuring local port mirroring group source ports (system view), 86

configuring local port mirroring with multiple monitor ports, 97

configuring NETCONF, 202

configuring NMM NTP broadcast association mode, 29

configuring NMM NTP broadcast mode with authentication, 36

configuring NMM NTP client/server mode with MPLS VPN time synchronization, 38

configuring NMM NTP multicast association mode, 31

configuring NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

configuring NMM sFlow, 106, 109

configuring NMM sFlow agent, 107

configuring NMM sFlow collector information, 107

configuring NMM sFlow counter sampling, 108

configuring NMM sFlow flow sampling, 107

configuring NMM SNMPv3 (RBAC mode), 77

configuring NMM SNMPv3 (VACM mode), 76

configuring NQA, 128

configuring NQA client history record save, 150

configuring NQA client operation, 130

configuring NQA client operation (DHCP), 132

configuring NQA client operation (DLSw), 143

configuring NQA client operation (DNS), 133

configuring NQA client operation (FTP), 134

configuring NQA client operation (HTTP), 135

configuring NQA client operation (ICMP echo), 130

configuring NQA client operation (path jitter), 144

configuring NQA client operation (SNMP), 137

configuring NQA client operation (TCP), 138

configuring NQA client operation (UDP echo), 139

configuring NQA client operation (UDP jitter), 136

configuring NQA client operation (UDP tracert), 140

configuring NQA client operation (voice), 141

configuring NQA client operation optional parameters, 145

configuring NQA client statistics collection, 150

configuring NQA client template, 151

configuring NQA client template (DNS), 153

configuring NQA client template (FTP), 160

configuring NQA client template (HTTP), 157

configuring NQA client template (HTTPS), 159

configuring NQA client template (ICMP), 152

configuring NQA client template (SSL), 161

configuring NQA client template (TCP half open), 155

configuring NQA client template (TCP), 154

configuring NQA client template (UDP), 156

configuring NQA client template optional parameters, 162

configuring NQA client threshold monitoring, 147

configuring NQA client+Track collaboration, 146

configuring NQA collaboration, 188

configuring NQA collaboration (on router), 186

configuring NQA operation (DHCP), 168

configuring NQA operation (DLSw), 183

configuring NQA operation (DNS), 169

configuring NQA operation (FTP), 170

configuring NQA operation (HTTP), 171

configuring NQA operation (path jitter), 184

configuring NQA operation (SNMP), 175

configuring NQA operation (TCP), 176

configuring NQA operation (UDP echo), 177

configuring NQA operation (UDP jitter), 172

configuring NQA operation (UDP tracert), 179

configuring NQA operation (voice), 180

configuring NQA server, 128

configuring NQA template (DNS), 191

configuring NQA template (FTP), 196

configuring NQA template (HTTP), 195

configuring NQA template (HTTPS), 195

configuring NQA template (ICMP), 190

configuring NQA template (SSL), 197

configuring NQA template (TCP half open), 193

configuring NQA template (TCP), 192

configuring NQA template (UDP), 194

configuring NTP, 12

configuring NTP access control rights, 15

configuring NTP association mode, 13

configuring NTP broadcast association mode, 14

configuring NTP broadcast client, 14

configuring NTP broadcast mode authentication, 20

configuring NTP broadcast server, 14

configuring NTP client/server association mode, 13, 26

configuring NTP client/server mode authentication, 16

configuring NTP client/server mode+authentication, 34

configuring NTP dynamic associations max, 25

configuring NTP local clock as reference source, 26

configuring NTP multicast association mode, 15

configuring NTP multicast client, 15

configuring NTP multicast mode authentication, 22

configuring NTP multicast server, 15

configuring NTP optional parameters, 24

configuring NTP symmetric active/passive association mode, 13, 28

configuring NTP symmetric active/passive mode authentication, 17

configuring port mirroring monitor port to remote probe VLAN assignment, 91

configuring port mirroring remote destination group monitor port, 90

configuring port mirroring remote destination group on the destination device, 90

configuring port mirroring remote destination group remote probe VLAN, 91

configuring port mirroring remote source group egress port, 93

configuring port mirroring remote source group on source device, 92

configuring port mirroring remote source group remote probe VLAN, 94

configuring port mirroring remote source group source CPUs, 93

configuring port mirroring remote source group source ports, 92

configuring SNMP basic parameters, 65

configuring SNMP logging, 71

configuring SNMP notification, 71

configuring SNMPv1, 74

configuring SNMPv1 agent host notification, 72

configuring SNMPv1 basics, 65

configuring SNMPv2c, 74

configuring SNMPv2c agent host notification, 72

configuring SNMPv2c basics, 65

configuring SNMPv3 agent host notification, 72

configuring SNMPv3 basic parameters, 67

configuring SNTP, 12, 43, 45

configuring SNTP authentication, 44

creating local port mirroring group, 85

creating port mirroring remote destination group, 90

creating port mirroring remote source group, 92

creating sampler, 81

debugging feature module, 6

determining address reachability with ping, 1

disabling information center interface link up/link down log generation, 57

disabling NTP message interface processing, 25

displaying EAA settings, 120

displaying information center, 58

displaying NMM sFlow, 108

displaying NQA, 163

displaying NTP, 26

displaying port mirroring, 94

displaying sampler, 81

displaying SNMP settings, 73

displaying SNTP, 45

enabling information center duplicate log suppression, 57

enabling information center synchronous log output, 56

enabling NETCONF logging, 203

enabling NETCONF over SOAP, 202

enabling NETCONF over SSH, 203

enabling NQA client, 129

enabling NTP, 13

enabling SNMP notification, 71

enabling SNTP, 43

entering NETCONF XML view, 204

establishing NETCONF session, 203

exchanging NETCONF capabilities, 204

filtering NETCONF data, 223

filtering NETCONF data (conditional match), 228

filtering NETCONF data (regex match), 226

identifying node failure with tracert, 4

loading NETCONF configuration, 217, 222

locking NETCONF configuration, 207, 208

maintaining information center, 58

outputting information center logs (console), 52, 59

outputting information center logs (Linux log host), 61

outputting information center logs (log buffer), 54

outputting information center logs (log host), 54

outputting information center logs (monitor terminal), 53

outputting information center logs (UNIX log host), 59

performing NETCONF CLI operations, 229

performing NETCONF edit-config operation, 212

performing NETCONF get/get-bulk operation, 210

performing NETCONF get-config/get-bulk-config operation, 211

performing NETCONF save-point/begin operation, 219

performing NETCONF save-point/commit operation, 219

performing NETCONF save-point/end operation, 220

performing NETCONF save-point/get-commit-information operation, 221

performing NETCONF save-point/get-commits operation, 220

performing NETCONF save-point/rollback operation, 220

performing NETCONF service operations, 209

retrieving NETCONF configuration data (all modules), 212

retrieving NETCONF configuration data (Syslog module), 214

retrieving NETCONF data entry (interface table), 215

retrieving NETCONF session information, 230

returning to NETCONF CLI, 233

rolling back NETCONF configuration, 217

rolling back NETCONF configuration (configuration-file-based), 218

rolling back NETCONF configuration (rollback-point-based), 218

saving information center diagnostic logs (log file), 55

saving information center log (log file), 54

saving NETCONF configuration, 217, 223

scheduling NQA client operation, 151

setting NMM NETCONF session idle timeout time, 204

setting NTP packet DSCP value, 25

specifying NTP message source interface, 24

specifying SNTP NTP server, 43

subscribing to NETCONF event notifications, 205, 206

suspending EAA monitor policy, 119

terminating NETCONF session, 232

testing connectivity with ping, 1

troubleshooting NMM sFlow, 110

troubleshooting NMM sFlow remote collector cannot receive packets, 110

unlocking NETCONF configuration, 207, 208

protocols and standards

NETCONF, 199, 201

NMM sFlow, 106

NTP, 12

SNMP configuration, 63

SNMP versions, 64

Q

QoS

flow mirroring configuration, 101, 104

flow mirroring match criteria, 102

flow mirroring QoS policy, 102

flow mirroring QoS policy application, 103

flow mirroring QoS policy application (global), 103

flow mirroring QoS policy application (interface), 103

flow mirroring QoS policy application (VLAN), 103

flow mirroring traffic behavior, 102

R

real-time

event manager. See RTM

reflector port

Layer 2 remote port mirroring, 83

regex match NETCONF data filtering, 223, 226

regex match NETCONF data filtering (column-based), 225

regular expression. Use regex

remote

Layer 2 remote port mirroring, 89

port mirroring, 84

port mirroring destination group, 90

port mirroring destination group creation, 90

port mirroring destination group monitor port, 90

port mirroring destination group remote probe VLAN, 91

port mirroring monitor port to remote probe VLAN assignment, 91

port mirroring source group, 92

port mirroring source group creation, 92

port mirroring source group egress port, 93

port mirroring source group remote probe VLAN, 94

port mirroring source group source CPUs, 93

port mirroring source group source ports, 92

remote probe VLAN

Layer 2 remote port mirroring, 83

port mirroring monitor port to remote probe VLAN assignment, 91

port mirroring remote destination group, 91

port mirroring remote source group remote probe VLAN, 94

restrictions

EAA monitor policy configuration, 116

local mirroring supporting multiple monitor ports configuration, 88

local port mirroring group monitor port configuration, 87

local port mirroring group source port configuration, 86

NTP configuration, 12

SNTP configuration, 12

SNTP configuration restrictions, 12, 43

retrieving

NETCONF configuration data (all modules), 212

NETCONF configuration data (Syslog module), 214

NETCONF data entry (interface table), 215

NETCONF session information, 230

returning

NETCONF CLI return, 233

rollback

NETCONF save-point/begin operation, 219

NETCONF save-point/commit operation, 219

NETCONF save-point/end operation, 220

NETCONF save-point/get-commit-information operation, 221

NETCONF save-point/get-commits operation, 220

NETCONF save-point/rollback operation, 220

rolling back

NETCONF configuration, 217

NETCONF configuration (configuration-file-based), 218

NETCONF configuration (rollback-point-based), 218

router

NQA collaboration configuration, 186

routing

NMM NTP broadcast association mode, 29

NMM NTP broadcast mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NTP association mode, 13

NTP client/server association mode, 26

NTP client/server mode+authentication, 34

NTP configuration, 7, 12, 26

NTP symmetric active/passive association mode, 28

SNTP configuration, 12, 43, 45

RTM

EAA, 113

EAA configuration, 112, 120

rule

information center log default output rules, 48

SNMP access control (rule-based), 64

system information default diagnostic log output rules, 48

system information default hidden log output, 49

system information default trace log output, 49

runtime

EAA event monitor policy runtime, 114

S

sampler

configuration, 81

creation, 81

displaying, 81

sampling

NMM sFlow, 107

NMM sFlow counter sampling, 108

Sampled Flow. Use sFlow

saving

information center diagnostic logs (log file), 55

information center log (log file), 54

NETCONF configuration, 217, 223

NQA client history records, 150

scheduling

NQA client operation, 151

screen output switch, 5

security

NTP, 10

NTP access control rights, 15

NTP authentication, 11, 16

NTP broadcast mode authentication, 20

NTP client/server mode authentication, 16

NTP multicast mode authentication, 22

NTP symmetric active/passive mode authentication, 17

SNTP authentication, 44

server

NQA configuration, 128

NTP broadcast server configuration, 14

NTP multicast server configuration, 15

SNTP configuration, 12, 43, 45

SNTP NTP server specification, 43

service

NETCONF configuration data retrieval (all modules), 212

NETCONF configuration data retrieval (Syslog module), 214

NETCONF configuration load, 217

NETCONF configuration rollback, 217

NETCONF configuration save, 217

NETCONF data entry retrieval (interface table), 215

NETCONF edit-config operation, 212

NETCONF get/get-bulk operation, 210

NETCONF get-config/get-bulk-config operation, 211

NETCONF operations, 209

NETCONF parameter value change, 216

session

NETCONF session establishment, 203

NETCONF session information retrieval, 230

NETCONF session termination, 232

NMM NETCONF session idle timeout time, 204

set operation

SNMP, 64

SNMP logging, 71

setting

NMM NETCONF session idle timeout time, 204

NTP packet DSCP value, 25

severity level (system information), 47

sFlow

agent configuration, 107

collector information configuration, 107

configuration, 106, 109

counter sampling configuration, 108

displaying, 108

flow sampling configuration, 107

protocols and standards, 106

troubleshooting, 110

troubleshooting remote collector cannot receive packets, 110

Simple Network Management Protocol. Use SNMP

Simplified NTP. See SNTP

SNMP

access control mode, 64

agent, 63

agent notification, 71

basic parameter configuration, 65

configuration, 63

FIPS compliance, 63

framework, 63

get operation, 64, 71

logging configuration, 71

manager, 63

MIB, 63

MIB view-based access control, 63

notification configuration, 71

notification enable, 71

Notification operation, 64

NQA client operation, 137

NQA operation configuration, 175

protocol versions, 64

set operation, 64, 71

settings display, 73

SNMPv1 basic parameter configuration, 65

SNMPv1 configuration, 74

SNMPv2c basic parameter configuration, 65

SNMPv2c configuration, 74

SNMPv3 basic parameter configuration, 67

SNMPv3 configuration (RBAC mode), 77

SNMPv3 configuration (VACM mode), 76

SNMPv1

agent host notification, 72

basic parameter configuration, 65

configuration, 74

Notification operation, 64

protocol version, 64

SNMPv2c

agent host notification, 72

basic parameter configuration, 65

configuration, 74

Notification operation, 64

protocol version, 64

SNMPv3

agent host notification, 72

basic parameter configuration, 67

configuration (RBAC mode), 77

configuration (VACM mode), 76

Notification operation, 64

protocol version, 64

SNTP

authentication, 44

configuration, 12, 43, 45

configuration restrictions, 12, 43

display, 45

enable, 43

NTP server specification, 43

SOAP

NETCONF message format, 200

NETCONF over SOAP enable, 202

source

port mirroring, 82

source device, 82

specifying

NTP message source interface, 24

SNTP NTP server, 43

SSH

NETCONF over SSH enable, 203

SSL

NQA client template (SSL), 161

NQA template configuration, 197

statistics

NMM sFlow agent configuration, 107

NMM sFlow collector information configuration, 107

NMM sFlow configuration, 106, 109

NMM sFlow counter sampling configuration, 108

NMM sFlow flow sampling configuration, 107

NQA client statistics collection, 150

sampler configuration, 81

sampler creation, 81

subscribing

NETCONF event notification subscription, 205, 206

suppressing

information center duplicate log suppression, 57

suspending

EAA monitor policy, 119

switch

module debug, 5

NQA collaboration configuration, 188

NQA operation configuration (DHCP), 168

screen output, 5

symmetric

NMM NTP symmetric active/passive mode MPLS VPN time synchronization, 40

NTP symmetric active/passive association mode, 9, 13, 17, 28

NTP symmetric active/passive mode dynamic associations max, 25

synchronizing

information center synchronous log output, 56

NMM NTP client/server mode, 38

NMM NTP symmetric active/passive mode, 40

NTP configuration, 7, 12, 26

SNTP configuration, 12, 43, 45

Syslog

NETCONF configuration data retrieval (Syslog module), 214

system

diagnostic log default output rules, 48

hidden log default output rules, 49

information center duplicate log suppression, 57

information center interface link up/link down log generation, 57

information center log destinations, 48

information center log levels, 47

information center log output (console), 52, 59

information center log output (Linux log host), 61

information center log output (log buffer), 54

information center log output (log host), 54

information center log output (monitor terminal), 53

information center log output (UNIX log host), 59

information center log save (log file), 54

information center log types, 47

information center synchronous log output, 56

information log formats, 49

log default output rules, 48

trace log default output rules, 49

system administration

debugging, 1, 5

feature module debug, 6

ping, 1

ping connectivity test, 1

tracert, 1, 3, 4

tracert node failure identification, 4

system information

information center configuration, 47, 52, 59

T

table-based match NETCONF data filtering, 223

Tcl

EAA configuration, 112, 120

EAA monitor policy configuration, 118, 124

TCP

NQA client operation, 138

NQA client template, 154

NQA client template (TCP half open), 155

NQA operation configuration, 176

NQA template configuration, 192

NQA template configuration (half open), 193

template

NQA client template, 151

NQA client template (DNS), 153

NQA client template (FTP), 160

NQA client template (HTTP), 157

NQA client template (HTTPS), 159

NQA client template (ICMP), 152

NQA client template (SSL), 161

NQA client template (TCP half open), 155

NQA client template (TCP), 154

NQA client template (UDP), 156

NQA client template optional parameters, 162

NQA template configuration (DNS), 191

NQA template configuration (FTP), 196

NQA template configuration (HTTP), 195

NQA template configuration (HTTPS), 195

NQA template configuration (ICMP), 190

NQA template configuration (SSL), 197

NQA template configuration (TCP half open), 193

NQA template configuration (TCP), 192

NQA template configuration (UDP), 194

terminating

NETCONF session, 232

testing

ping network connectivity test, 1

threshold

NQA client threshold monitoring, 128, 147

NQA operation reaction entry, 147

NQA operation support accumulate type, 147

NQA operation support average type, 147

NQA operation support consecutive type, 147

NQA operation triggered action none, 147

NQA operation triggered action trap-only, 147

NQA operation triggered action trigger-only, 147

time

NTP configuration, 7, 12, 26

NTP local clock as reference source, 26

SNTP configuration, 12, 43, 45

timeout

NMM NETCONF session idle timeout time, 204

traceroute. See tracert

tracert

IP address retrieval, 3

node failure detection, 3, 4

node failure identification, 4

NQA client operation (UDP tracert), 140

NQA operation configuration (UDP tracert), 179

system maintenance, 1

tracing

information center trace log file max size, 56

Track

NQA client+Track collaboration, 146

NQA collaboration, 127

NQA collaboration configuration, 188

NQA collaboration configuration (on router), 186

traffic

NMM sFlow agent configuration, 107

NMM sFlow collector information configuration, 107

NMM sFlow configuration, 106, 109

NMM sFlow counter sampling configuration, 108

NMM sFlow flow sampling configuration, 107

NQA client operation (voice), 141

sampler configuration, 81

sampler creation, 81

trapping

SNMP notification, 71

triggering

NQA operation threshold triggered action none, 147

NQA operation threshold triggered action trap-only, 147

NQA operation threshold triggered action trigger-only, 147

troubleshooting

NMM sFlow, 110

NMM sFlow remote collector cannot receive packets, 110

U

UDP

NMM NTP broadcast association mode, 29

NMM NTP broadcast mode with authentication, 36

NMM NTP client/server mode with MPLS VPN time synchronization, 38

NMM NTP multicast association mode, 31

NMM NTP symmetric active/passive mode with MPLS VPN time synchronization, 40

NMM sFlow configuration, 106, 109

NQA client operation (UDP echo), 139

NQA client operation (UDP jitter), 136

NQA client operation (UDP tracert), 140

NQA client template, 156

NQA operation configuration (UDP echo), 177

NQA operation configuration (UDP jitter), 172

NQA operation configuration (UDP tracert), 179

NQA template configuration, 194

NTP association mode, 13

NTP client/server association mode, 26

NTP client/server mode+authentication, 34

NTP configuration, 7, 12, 26

NTP symmetric active/passive association mode, 28

UNIX

information center log host output, 59

unlocking

NETCONF configuration, 207, 208

V

value

NETCONF parameter value change, 216

variable

EAA environment variable configuration (user-defined), 116

EAA event monitor policy environment (user-defined), 115

EAA event monitor policy environment system-defined (event-specific), 115

EAA event monitor policy environment system-defined (public), 115

EAA event monitor policy environment variable, 114

EAA monitor policy configuration (CLI-defined+environment variables), 123

view

SNMP access control (view-based), 64

VLAN

flow mirroring configuration, 101, 104

flow mirroring QoS policy application, 103

Layer 2 remote port mirroring configuration, 89

local port mirroring configuration, 85

local port mirroring group monitor port, 87

local port mirroring group source CPU, 86

local port mirroring group source port, 86

port mirroring configuration, 82, 95

port mirroring remote probe VLAN, 83

port mirroring remote source group remote probe VLAN, 94

voice

NQA client operation, 141

NQA operation configuration, 180

VPN

NMM NTP MPLS L3VPN support, 11

X

XML

NETCONF capability exchange, 204

NETCONF configuration, 199, 202

NETCONF data filtering, 223

NETCONF data filtering (conditional match), 228

NETCONF data filtering (regex match), 226

NETCONF message format, 200

NETCONF structure, 199

NETCONF XML view, 204

XSD

NETCONF message format, 200
