Table of Contents

Using ping, tracert, and system debugging
Using a ping command to test network connectivity
Using a tracert command to identify failed or all nodes in a path
Debugging information control switches
Configuring NQA operations on the NQA client
NQA operation configuration task list
Configuring the ICMP echo operation
Configuring the ICMP jitter operation
Configuring the DHCP operation
Configuring the HTTP operation
Configuring the UDP jitter operation
Configuring the SNMP operation
Configuring the UDP echo operation
Configuring the UDP tracert operation
Configuring the voice operation
Configuring the DLSw operation
Configuring the path jitter operation
Configuring optional parameters for the NQA operation
Configuring the collaboration feature
Configuring threshold monitoring
Configuring the NQA statistics collection feature
Configuring the saving of NQA history records
Scheduling the NQA operation on the NQA client
Configuring NQA templates on the NQA client
NQA template configuration task list
Configuring the TCP half open template
Configuring the RADIUS template
Configuring optional parameters for the NQA template
Displaying and maintaining NQA
ICMP echo operation configuration example
ICMP jitter operation configuration example
DHCP operation configuration example
DNS operation configuration example
FTP operation configuration example
HTTP operation configuration example
UDP jitter operation configuration example
SNMP operation configuration example
TCP operation configuration example
UDP echo operation configuration example
UDP tracert operation configuration example
Voice operation configuration example
DLSw operation configuration example
Path jitter operation configuration example
NQA collaboration configuration example
ICMP template configuration example
DNS template configuration example
TCP template configuration example
TCP half open template configuration example
UDP template configuration example
HTTP template configuration example
FTP template configuration example
RADIUS template configuration example
Configuration restrictions and guidelines
Configuring NTP association mode
Configuring NTP in client/server mode
Configuring NTP in symmetric active/passive mode
Configuring NTP in broadcast mode
Configuring NTP in multicast mode
Configuring access control rights
Configuring NTP authentication
Configuring NTP authentication in client/server mode
Configuring NTP authentication in symmetric active/passive mode
Configuring NTP authentication in broadcast mode
Configuring NTP authentication in multicast mode
Configuring NTP optional parameters
Specifying the source interface for NTP messages
Disabling an interface from receiving NTP messages
Configuring the maximum number of dynamic associations
Setting a DSCP value for NTP packets
Configuring the local clock as a reference source
Displaying and maintaining NTP
NTP client/server mode configuration example
IPv6 NTP client/server mode configuration example
NTP symmetric active/passive mode configuration example
IPv6 NTP symmetric active/passive mode configuration example
NTP broadcast mode configuration example
NTP multicast mode configuration example
IPv6 NTP multicast mode configuration example
Configuration example for NTP client/server mode with authentication
Configuration example for NTP broadcast mode with authentication
Configuration example for MPLS L3VPN network time synchronization in client/server mode
Configuration example for MPLS L3VPN network time synchronization in symmetric active/passive mode
Configuration restrictions and guidelines
Specifying an NTP server for the device
Configuring SNTP authentication
Displaying and maintaining SNTP
MIB and view-based MIB access control
Configuring SNMP basic parameters
Configuring SNMPv1 or SNMPv2c basic parameters
Configuring SNMPv3 basic parameters
Configuring SNMP notifications
Configuring the SNMP agent to send notifications to a host
SNMPv1/SNMPv2c configuration example
Sample types for the alarm group and the private alarm group
Configuring the RMON statistics function
Creating an RMON Ethernet statistics entry
Creating an RMON history control entry
Configuring the RMON alarm function
Displaying and maintaining RMON settings
Ethernet statistics group configuration example
History group configuration example
Alarm function configuration example
Event MIB configuration task list
Configuring Event MIB sampling
Configuring Event MIB object lists
Configuring a set action for an event
Configuring a notification action for an event
Configuring a Boolean trigger test
Configuring an existence trigger test
Configuring a threshold trigger test
Enabling SNMP notifications for Event MIB
Displaying and maintaining the Event MIB
Event MIB configuration examples
Existence trigger test configuration example
Boolean trigger test configuration example
Threshold trigger test configuration example
NETCONF configuration task list
Establishing a NETCONF session
Setting the NETCONF session idle timeout time
Subscribing to event notifications
Example for subscribing to event notifications
Locking/unlocking the configuration
Example for locking the configuration
Performing the <get>/<get-bulk> operation
Performing the <get-config>/<get-bulk-config> operation
Performing the <edit-config> operation
All-module configuration data retrieval example
Syslog configuration data retrieval example
Example for retrieving a data entry for the interface table
Example for changing the value of a parameter
Saving, rolling back, and loading the configuration
Rolling back the configuration based on a configuration file
Rolling back the configuration based on a rollback point
Example for saving the configuration
Example for filtering data with regular expression match
Example for filtering data by conditional match
Performing CLI operations through NETCONF
Retrieving NETCONF information
Retrieving NETCONF session information
Terminating another NETCONF session
Appendix A Supported NETCONF operations
Configuring a user-defined EAA environment variable
Configuration restrictions and guidelines
Configuring a monitor policy from the CLI
Configuring a monitor policy by using Tcl
Displaying and maintaining EAA settings
CLI event monitor policy configuration example
Track event monitor policy configuration example
CLI-defined policy with EAA environment variables configuration example
Tcl-defined policy configuration example
Displaying and maintaining a sampler
Sampler configuration example for IPv4 NetStream
Port mirroring classification and implementation
Configuring local port mirroring
Local port mirroring configuration task list
Creating a local mirroring group
Configuring source ports for the local mirroring group
Configuring source CPUs for the local mirroring group
Configuring the monitor port for the local mirroring group
Configuring Layer 2 remote port mirroring
Layer 2 remote port mirroring with reflector port configuration task list
Layer 2 remote port mirroring with egress port configuration task list
Configuring a remote destination group on the destination device
Configuring a remote source group on the source device
Configuring Layer 3 remote port mirroring
Layer 3 remote port mirroring configuration task list
Configuring local mirroring groups
Configuring source ports for a local mirroring group
Configuring source CPUs for a local mirroring group
Configuring the monitor port for a local mirroring group
Displaying and maintaining port mirroring
Port mirroring configuration examples
Local port mirroring configuration example (in source port mode)
Local port mirroring configuration example (in source CPU mode)
Layer 2 remote port mirroring configuration example (reflector port)
Layer 2 remote port mirroring configuration example (egress port)
Layer 3 remote port mirroring configuration example
Flow mirroring configuration task list
Configuring a traffic behavior
Applying a QoS policy to an interface
Applying a QoS policy to a VLAN
Applying a QoS policy globally
Applying a QoS policy to the control plane
Flow mirroring configuration example
Feature and hardware compatibility
NetStream configuration task list
Configuring NetStream filtering
Configuring NetStream sampling
Configuring attributes of the NetStream data export
Configuring the NetStream data export format
Configuring the refresh rate for NetStream version 9 or version 10 template
Configuring MPLS-aware NetStream
Configuring VXLAN-aware NetStream
Configuring NetStream flow aging
Configuring the NetStream data export
Configuring the NetStream traditional data export
Configuring the NetStream aggregation data export
Displaying and maintaining NetStream
NetStream configuration examples
NetStream traditional data export configuration example
NetStream aggregation data export configuration example
Feature and hardware compatibility
IPv6 NetStream configuration task list
Configuring IPv6 NetStream filtering
Configuring IPv6 NetStream sampling
Configuring attributes of the IPv6 NetStream data export
Configuring the IPv6 NetStream data export format
Configuring the refresh rate for IPv6 NetStream version 9 or version 10 template
Configuring MPLS-aware IPv6 NetStream
Configuring IPv6 NetStream flow aging
Configuring the IPv6 NetStream data export
Configuring the IPv6 NetStream traditional data export
Configuring the IPv6 NetStream aggregation data export
Displaying and maintaining IPv6 NetStream
IPv6 NetStream configuration examples
IPv6 NetStream traditional data export configuration example
IPv6 NetStream aggregation data export configuration example
Configuring the sFlow agent and sFlow collector information
Displaying and maintaining sFlow
Troubleshooting sFlow configuration
The remote sFlow collector cannot receive sFlow packets
Configuring the information center
Default output rules for diagnostic logs
Default output rules for security logs
Default output rules for hidden logs
Default output rules for trace logs
Information center configuration task list
Outputting logs to the console
Outputting logs to the monitor terminal
Outputting logs to the log buffer
Saving security logs to the security log file
Managing the security log file
Saving diagnostic logs to the diagnostic log file
Configuring the maximum size of the trace log file
Setting the minimum storage period for log files and logs in the log buffer
Enabling synchronous information output
Enabling duplicate log suppression
Configuring log suppression for a module
Disabling an interface from generating link up or link down logs
Enabling SNMP notifications for system logs
Displaying and maintaining information center
Information center configuration examples
Configuration example for outputting logs to the console
Configuration example for outputting logs to a UNIX log host
Configuration example for outputting logs to a Linux log host
Configuring monitoring diagnostics
Configuring on-demand diagnostics
Starting a device startup check by using on-demand diagnostics
Starting on-demand diagnostics during device operation
Configuring the log buffer size
Displaying and maintaining GOLD
GOLD configuration example (in standalone mode)
Configuring the packet capture
Packet capture configuration task list
Configuring local packet capture
Configuring remote packet capture
Configuring feature image-based packet capture
Saving captured packets to a file
Filtering packet data to display
Displaying the contents in a packet file
Displaying and maintaining packet capture
Packet capture configuration examples
Remote packet capture configuration example
Filtering packet data to display configuration example
Saving captured packets to a file configuration example
Neutron concepts and components
Automated VCF fabric provisioning and deployment
Automated underlay network provisioning
Automated overlay network deployment
Configuration restrictions and guidelines
VCF fabric configuration task list
Enabling VCF fabric topology discovery
Configuration restrictions and guidelines
Configuring automated underlay network provisioning
Configuration restrictions and guidelines
Configuring automated overlay network deployment
Configuration restrictions and guidelines
Displaying and maintaining VCF fabric
Automated VCF fabric configuration example
Using ping, tracert, and system debugging
This chapter describes how to use ping, tracert, and system debugging.
Ping
Use the ping utility to determine if an address is reachable.
Ping sends ICMP echo requests (ECHO-REQUEST) to the destination device. Upon receiving the requests, the destination device responds with ICMP echo replies (ECHO-REPLY) to the source device. The source device outputs statistics about the ping operation, including the number of packets sent, number of echo replies received, and the round-trip time. You can measure the network performance by analyzing these statistics.
Using a ping command to test network connectivity
Execute ping commands in any view.
Task | Command
Determine if an address in an IP network is reachable. | For IPv4 networks: ping. For IPv6 networks: ping ipv6.

When you use the ping command on a low-speed network, set a larger value for the timeout timer (indicated by the -t keyword in the command).
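For example, on a low-speed link you might send five probes and increase the timeout to 4000 milliseconds. The -c and -t option values below are illustrative; verify the option syntax in the command reference for your software version.

```
<Sysname> ping -c 5 -t 4000 1.1.2.2
```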
Ping example
Network requirements
As shown in Figure 1, determine if Device A and Device C can reach each other. If they can reach each other, get detailed information about routes from Device A to Device C.
Configuration procedure
# Use the ping command on Device A to test connectivity to Device C.
<DeviceA> ping 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms
--- Ping statistics for 1.1.2.2 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.963/2.028/2.137/0.062 ms
The output shows the following information:
· Device A sends five ICMP packets to Device C and receives five ICMP packets from Device C.
· No ICMP packet is lost.
· The route is reachable.
# Get detailed information about routes from Device A to Device C.
<DeviceA> ping -r 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=4.685 ms
RR: 1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=4.834 ms (same route)
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=4.770 ms (same route)
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=4.812 ms (same route)
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=4.704 ms (same route)
--- Ping statistics for 1.1.2.2 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 4.685/4.761/4.834/0.058 ms
The test procedure of ping -r is as shown in Figure 1:
1. The source device (Device A) sends an ICMP echo request to the destination device (Device C) with the RR option blank.
2. The intermediate device (Device B) adds the IP address of its outbound interface (1.1.2.1) to the RR option of the ICMP echo request, and forwards the packet.
3. Upon receiving the request, the destination device copies the RR option in the request and adds the IP address of its outbound interface (1.1.2.2) to the RR option. Then the destination device sends an ICMP echo reply.
4. The intermediate device adds the IP address of its outbound interface (1.1.1.2) to the RR option in the ICMP echo reply, and then forwards the reply.
5. Upon receiving the reply, the source device adds the IP address of its inbound interface (1.1.1.1) to the RR option. The detailed information of routes from Device A to Device C is formatted as: 1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.
Tracert
Tracert (also called Traceroute) enables retrieval of the IP addresses of Layer 3 devices in the path to a destination. In the event of network failure, use tracert to test network connectivity and identify failed nodes.
Figure 2 Tracert operation
Tracert uses received ICMP error messages to get the IP addresses of devices. Tracert works as shown in Figure 2:
1. The source device sends a UDP packet with a TTL value of 1 to the destination device. The destination UDP port is not used by any application on the destination device.
2. The first hop (Device B, the first Layer 3 device that receives the packet) responds by sending a TTL-expired ICMP error message to the source, with its IP address (1.1.1.2) encapsulated. This way, the source device can get the address of the first Layer 3 device (1.1.1.2).
3. The source device sends a packet with a TTL value of 2 to the destination device.
4. The second hop (Device C) responds with a TTL-expired ICMP error message, which gives the source device the address of the second Layer 3 device (1.1.2.2).
5. This process continues until a packet sent by the source device reaches the ultimate destination device. Because no application uses the destination port specified in the packet, the destination device responds with a port-unreachable ICMP message to the source device, with its IP address encapsulated. This way, the source device gets the IP address of the destination device (1.1.3.2).
6. The source device determines that:
○ The packet has reached the destination device, because the source device received the port-unreachable ICMP message.
○ The path to the destination device is 1.1.1.2 to 1.1.2.2 to 1.1.3.2.
Prerequisites
Before you use a tracert command, perform the tasks in this section.
For an IPv4 network:
· Enable sending of ICMP timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are H3C devices, execute the ip ttl-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
· Enable sending of ICMP destination unreachable packets on the destination device. If the destination device is an H3C device, execute the ip unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.
For an IPv6 network:
· Enable sending of ICMPv6 timeout packets on the intermediate devices (devices between the source and destination devices). If the intermediate devices are H3C devices, execute the ipv6 hoplimit-expires enable command on the devices. For more information about this command, see Layer 3—IP Services Command Reference.
· Enable sending of ICMPv6 destination unreachable packets on the destination device. If the destination device is an H3C device, execute the ipv6 unreachables enable command. For more information about this command, see Layer 3—IP Services Command Reference.
Using a tracert command to identify failed or all nodes in a path
Execute tracert commands in any view.
Task | Command
Display the routes from source to destination. | For IPv4 networks: tracert. For IPv6 networks: tracert ipv6.
Tracert example
Network requirements
As shown in Figure 3, Device A failed to Telnet to Device C.
Test the network connectivity between Device A and Device C. If they cannot reach each other, locate the failed nodes in the network.
Configuration procedure
1. Configure the IP addresses for devices as shown in Figure 3.
2. Configure a static route on Device A.
<DeviceA> system-view
[DeviceA] ip route-static 0.0.0.0 0.0.0.0 1.1.1.2
[DeviceA] quit
3. Use the ping command to test connectivity between Device A and Device C.
<DeviceA> ping 1.1.2.2
Ping 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
Request time out
Request time out
Request time out
Request time out
Request time out
--- Ping statistics for 1.1.2.2 ---
5 packet(s) transmitted,0 packet(s) received,100.0% packet loss
The output shows that Device A and Device C cannot reach each other.
4. Use the tracert command to identify failed nodes:
# Enable sending of ICMP timeout packets on Device B.
<DeviceB> system-view
[DeviceB] ip ttl-expires enable
# Enable sending of ICMP destination unreachable packets on Device C.
<DeviceC> system-view
[DeviceC] ip unreachables enable
# Execute the tracert command on Device A.
<DeviceA> tracert 1.1.2.2
traceroute to 1.1.2.2 (1.1.2.2) 30 hops at most,40 bytes each packet, press CTRL_C to break
1 1.1.1.2 (1.1.1.2) 1 ms 2 ms 1 ms
2 * * *
3 * * *
4 * * *
5
<DeviceA>
The output shows that Device A can reach Device B but cannot reach Device C. An error has occurred on the connection between Device B and Device C.
5. To identify the cause of the problem, execute the following commands on Device A and Device C:
○ Execute the debugging ip icmp command and verify that Device A and Device C can send and receive the correct ICMP packets.
○ Execute the display ip routing-table command to verify that Device A and Device C have a route to each other.
System debugging
The device supports debugging for the majority of protocols and features, and provides debugging information to help users diagnose errors.
Debugging information control switches
The following switches control the display of debugging information:
· Module debugging switch—Controls whether to generate the module-specific debugging information.
· Screen output switch—Controls whether to display the debugging information on a certain screen. Use the terminal monitor and terminal logging level commands to turn on the screen output switch. For more information about these two commands, see Network Management and Monitoring Command Reference.
As shown in Figure 4, the device can provide debugging for the three modules 1, 2, and 3. The debugging information can be output on a terminal only when both the module debugging switch and the screen output switch are turned on.
Debugging information is typically displayed on a console. You can also send debugging information to other destinations. For more information, see "Configuring the information center."
Figure 4 Relationship between the module and screen output switch
Debugging a feature module
Output of debugging commands is memory intensive. To guarantee system performance, enable debugging only for modules that are in an exceptional condition. When debugging is complete, use the undo debugging all command to disable all the debugging functions.
To debug a feature module:
Step | Command | Remarks
1. Enable debugging for a module in user view. | debugging module-name [ option ] | By default, all debugging functions are disabled.
2. (Optional.) Display the enabled debugging settings in any view. | display debugging [ module-name ] | N/A
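A typical debugging session might look like the following sketch. The module name and option (ospf packet) are only an illustration; use a module and option supported by your device.

```
# Turn on the screen output switch for the current terminal.
<Sysname> terminal monitor
# Enable debugging for the suspect module. OSPF packet debugging is used as an example.
<Sysname> debugging ospf packet
# Display the debugging switches that are enabled.
<Sysname> display debugging
# Disable all debugging when troubleshooting is complete.
<Sysname> undo debugging all
```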
Configuring NQA
Overview
Network quality analyzer (NQA) allows you to measure network performance, verify the service levels for IP services and applications, and troubleshoot network problems. It provides the following types of operations:
· ICMP echo.
· ICMP jitter.
· DHCP.
· DLSw.
· DNS.
· FTP.
· HTTP.
· Path jitter.
· SNMP.
· TCP.
· UDP echo.
· UDP jitter.
· UDP tracert.
· Voice.
As shown in Figure 5, the NQA source device (NQA client) sends data to the NQA destination device by simulating IP services and applications to measure network performance. The obtained performance metrics include the one-way latency, jitter, packet loss, voice quality, application performance, and server response time.
All types of NQA operations require the NQA client, but only the TCP, UDP echo, UDP jitter, and voice operations require the NQA server. NQA operations for services that are already provided by the destination device, such as FTP, do not need the NQA server.
You can configure the NQA server to listen and respond to specific IP addresses and ports to meet various test needs.
NQA operation
The following describes how NQA performs different types of operations:
· A TCP or DLSw operation sets up a connection.
· An ICMP jitter, UDP jitter, or voice operation sends a number of probe packets. The number of probe packets is set by using the probe packet-number command.
· An FTP operation uploads or downloads a file.
· An HTTP operation gets a Web page.
· A DHCP operation gets an IP address through DHCP.
· A DNS operation translates a domain name to an IP address.
· An ICMP echo operation sends an ICMP echo request.
· A UDP echo operation sends a UDP packet.
· An SNMP operation sends one SNMPv1 packet, one SNMPv2c packet, and one SNMPv3 packet.
· A path jitter operation is accomplished in the following steps:
a. The operation uses tracert to obtain the path from the NQA client to the destination. A maximum of 64 hops can be detected.
b. The NQA client sends ICMP echo requests to each hop along the path. The number of ICMP echo requests is set by using the probe packet-number command.
· A UDP tracert operation determines the routing path from the source to the destination. The number of the probe packets sent to each hop is set by using the probe count command.
Collaboration
NQA can collaborate with the Track module to notify application modules of state or performance changes so that the application modules can take predefined actions.
The following describes how a static route destined for 192.168.0.88 is monitored through collaboration:
1. NQA monitors the reachability to 192.168.0.88.
2. When 192.168.0.88 becomes unreachable, NQA notifies the Track module of the change.
3. The Track module notifies the static routing module of the state change.
4. The static routing module sets the static route to invalid according to a predefined action.
For more information about collaboration, see High Availability Configuration Guide.
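As a sketch of the static route scenario above, the following configuration associates an ICMP echo operation with a track entry and a static route. The reaction, track, and ip route-static syntax and the addresses here are assumptions used for illustration; see High Availability Configuration Guide and the NQA command reference for the exact syntax.

```
# Create an ICMP echo operation to monitor the reachability of 192.168.0.88, and create
# reaction entry 1 to trigger collaboration after three consecutive probe failures.
[Sysname] nqa entry admin test1
[Sysname-nqa-admin-test1] type icmp-echo
[Sysname-nqa-admin-test1-icmp-echo] destination ip 192.168.0.88
[Sysname-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 3 action-type trigger-only
[Sysname-nqa-admin-test1-icmp-echo] quit
[Sysname-nqa-admin-test1] quit
# Associate track entry 1 with reaction entry 1 of the NQA operation.
[Sysname] track 1 nqa entry admin test1 reaction 1
# Associate the static route with track entry 1, and then start the operation.
[Sysname] ip route-static 192.168.0.88 24 10.1.1.2 track 1
[Sysname] nqa schedule admin test1 start-time now lifetime forever
```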
Threshold monitoring
Threshold monitoring enables the NQA client to take a predefined action when the NQA operation performance metrics violate the specified thresholds.
Table 1 describes the relationships between performance metrics and NQA operation types.
Table 1 Performance metrics and NQA operation types
Performance metric | NQA operation types that can gather the metric
Probe duration | All NQA operation types except UDP jitter, UDP tracert, path jitter, and voice
Number of probe failures | All NQA operation types except UDP jitter, UDP tracert, path jitter, and voice
Round-trip time | ICMP jitter, UDP jitter, and voice
Number of discarded packets | ICMP jitter, UDP jitter, and voice
One-way jitter (source-to-destination or destination-to-source) | ICMP jitter, UDP jitter, and voice
One-way delay (source-to-destination or destination-to-source) | ICMP jitter, UDP jitter, and voice
Calculated Planning Impairment Factor (ICPIF) (see "Configuring the voice operation") | Voice
Mean Opinion Score (MOS) (see "Configuring the voice operation") | Voice
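For illustration, a reaction entry that sends an SNMP trap when the probe duration metric in Table 1 crosses a threshold might be configured as follows. The reaction command syntax and the threshold values are assumptions; see the NQA command reference for the exact syntax.

```
[Sysname] nqa entry admin test1
[Sysname-nqa-admin-test1] type icmp-echo
[Sysname-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
# Send a trap if the probe duration exceeds the upper threshold of 1000 ms in three
# consecutive probes. The lower threshold is 500 ms.
[Sysname-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-duration threshold-type consecutive 3 threshold-value 1000 500 action-type trap-only
```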
NQA configuration task list
Tasks at a glance | Remarks
(Required.) Configuring the NQA server | Required for TCP, UDP echo, UDP jitter, and voice operations.
(Required.) Enabling the NQA client | N/A
(Required.) Perform at least one of the following tasks: · Configuring NQA operations on the NQA client · Configuring NQA templates on the NQA client | When you configure an NQA template to analyze network performance, the feature that uses the template performs the NQA operation.
Configuring the NQA server
To perform TCP, UDP echo, UDP jitter, and voice operations, you must enable the NQA server on the destination device. The NQA server listens and responds to requests on the specified IP addresses and ports.
You can configure multiple TCP or UDP listening services on an NQA server, where each corresponds to a specific IP address and port number. The IP address and port number for a listening service must be unique on the NQA server and match the configuration on the NQA client.
To configure the NQA server:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enable the NQA server. | nqa server enable | By default, the NQA server is disabled.
3. Configure a TCP or UDP listening service. | · TCP listening service · UDP listening service | You can set the ToS value in the IP header of reply packets sent by the NQA server. The default ToS value is 0.
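For example, to prepare a device to respond to UDP jitter probes, you might enable the NQA server and configure a UDP listening service. The nqa server udp-echo command form, the address, and the port shown here are illustrative; verify them against the command reference and match them on the NQA client.

```
# Enable the NQA server.
[SysnameB] nqa server enable
# Configure a UDP listening service on IP address 10.2.2.2 and port 9000.
[SysnameB] nqa server udp-echo 10.2.2.2 9000
```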
Enabling the NQA client
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Enable the NQA client. | nqa agent enable | By default, the NQA client is enabled. The NQA client configuration takes effect after you enable the NQA client.
Configuring NQA operations on the NQA client
NQA operation configuration task list
Tasks at a glance
(Required.) Perform at least one of the following tasks:
· Configuring the ICMP echo operation
· Configuring the ICMP jitter operation
· Configuring the DHCP operation
· Configuring the DNS operation
· Configuring the FTP operation
· Configuring the HTTP operation
· Configuring the UDP jitter operation
· Configuring the SNMP operation
· Configuring the TCP operation
· Configuring the UDP echo operation
· Configuring the UDP tracert operation
· Configuring the voice operation
(Optional.) Configuring optional parameters for the NQA operation
(Optional.) Configuring the collaboration feature
(Optional.) Configuring threshold monitoring
(Optional.) Configuring the NQA statistics collection feature
(Optional.) Configuring the saving of NQA history records
(Required.) Scheduling the NQA operation on the NQA client
Configuring the ICMP echo operation
The ICMP echo operation measures the reachability of a destination device. It has the same function as the ping command, but provides more output information. In addition, if multiple paths exist between the source and destination devices, you can specify the next hop for the ICMP echo operation.
To configure the ICMP echo operation:
Step | Command | Remarks
1. Enter system view. | system-view | N/A
2. Create an NQA operation and enter NQA operation view. | nqa entry admin-name operation-tag | By default, no NQA operations exist.
3. Specify the ICMP echo type and enter its view. | type icmp-echo | N/A
4. Specify the destination IP address for ICMP echo requests. | · IPv4 address · IPv6 address | By default, no destination IP address is specified.
5. (Optional.) Set the payload size for each ICMP echo request. | data-size size | The default setting is 100 bytes.
6. (Optional.) Specify the payload fill string for ICMP echo requests. | data-fill string | The default payload fill string is the hexadecimal string 00010203040506070809.
7. (Optional.) Specify the output interface for ICMP echo requests. | out interface interface-type interface-number | By default, the output interface for ICMP echo requests is not specified. The NQA client determines the output interface by routing table lookup.
8. (Optional.) Specify the source IP address for ICMP echo requests. | · Use the IP address of the specified interface as the source IP address · Specify the source IPv4 address · Specify the source IPv6 address | By default, the requests take the primary IP address of the output interface as their source IP address. If you execute the source interface, source ip, and source ipv6 commands multiple times, the most recent configuration takes effect. The specified source interface must be up. The specified source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out.
9. (Optional.) Specify the next hop IP address for ICMP echo requests. | · IPv4 address · IPv6 address | By default, no next hop IP address is configured.
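Putting the steps together, a minimal ICMP echo operation might be created and started as follows. The destination address is illustrative, and the nqa schedule command shown at the end is described in "Scheduling the NQA operation on the NQA client."

```
# Create ICMP echo operation test1 with destination 10.1.1.2.
[Sysname] nqa entry admin test1
[Sysname-nqa-admin-test1] type icmp-echo
[Sysname-nqa-admin-test1-icmp-echo] destination ip 10.1.1.2
[Sysname-nqa-admin-test1-icmp-echo] quit
[Sysname-nqa-admin-test1] quit
# Start the operation immediately and let it run until it is manually stopped.
[Sysname] nqa schedule admin test1 start-time now lifetime forever
```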
Configuring the ICMP jitter operation
The ICMP jitter operation measures unidirectional and bidirectional jitters. The operation result helps you to determine whether the network can carry jitter-sensitive services such as real-time voice and video services.
The ICMP jitter operation works as follows:
1. The NQA client sends ICMP packets to the destination device.
2. The destination device time stamps each packet it receives, and then sends the packet back to the NQA client.
3. Upon receiving the responses, the NQA client calculates the jitter according to the timestamps.
Before starting the operation, make sure the network devices are time synchronized by using NTP. For more information about NTP, see "Configuring NTP."
To configure the ICMP jitter operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the ICMP jitter type and enter its view. |
type icmp-jitter |
N/A |
4. Specify the destination address of ICMP packets. |
destination ip ip-address |
By default, no destination IP address is specified. |
5. (Optional.) Set the number of ICMP packets sent in one ICMP jitter operation. |
probe packet-number packet-number |
The default setting is 10. |
6. (Optional.) Set the interval for sending ICMP packets. |
probe packet-interval interval |
The default setting is 20 milliseconds. |
7. (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out. |
probe packet-timeout timeout |
The default setting is 3000 milliseconds. |
8. (Optional.) Specify the source IP address for ICMP packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no ICMP packets can be sent out. |
|
NOTE: Use the display nqa result or display nqa statistics command to verify the ICMP jitter operation. The display nqa history command does not display the ICMP jitter operation results or statistics. |
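For example, the following sketch sends 100 ICMP packets per operation at 20-millisecond intervals (the device name and address are hypothetical):

```
<Sysname> system-view
[Sysname] nqa entry admin jitter1
[Sysname-nqa-admin-jitter1] type icmp-jitter
[Sysname-nqa-admin-jitter1-icmp-jitter] destination ip 10.2.2.2
[Sysname-nqa-admin-jitter1-icmp-jitter] probe packet-number 100
[Sysname-nqa-admin-jitter1-icmp-jitter] probe packet-interval 20
```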
Configuring the DHCP operation
The DHCP operation measures whether the DHCP server can respond to client requests, and the amount of time it takes the NQA client to obtain an IP address from the DHCP server. |
The NQA client simulates the DHCP relay agent to forward DHCP requests for IP address acquisition from the DHCP server. The interface that performs the DHCP operation does not change its IP address. When the DHCP operation completes, the NQA client sends a packet to release the obtained IP address.
To configure the DHCP operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the DHCP type and enter its view. |
type dhcp |
N/A |
4. Specify the IP address of the DHCP server as the destination IP address of DHCP packets. |
destination ip ip-address |
By default, no destination IP address is specified. |
5. (Optional.) Specify an output interface for DHCP request packets. |
out interface interface-type interface-number |
By default, the output interface for DHCP request packets is not specified. The NQA client determines the output interface based on the routing table lookup. |
6. (Optional.) Specify the source IP address of DHCP request packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The specified source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no probe packets can be sent out. The NQA client adds the source IP address to the giaddr field in DHCP requests to be sent to the DHCP server. For more information about the giaddr field, see Layer 3—IP Services Configuration Guide. |
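A minimal DHCP operation sketch follows; the DHCP server address and the relay-side source address are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin dhcp1
[Sysname-nqa-admin-dhcp1] type dhcp
[Sysname-nqa-admin-dhcp1-dhcp] destination ip 10.1.1.10
[Sysname-nqa-admin-dhcp1-dhcp] source ip 10.1.1.1
```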
Configuring the DNS operation
The DNS operation measures the time for the NQA client to translate a domain name into an IP address through a DNS server.
A DNS operation simulates domain name resolution and does not save the obtained DNS entry.
To configure the DNS operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the DNS type and enter its view. |
type dns |
N/A |
4. Specify the IP address of the DNS server as the destination address of DNS packets. |
destination ip ip-address |
By default, no destination IP address is specified. |
5. Specify the domain name to be translated. |
resolve-target domain-name |
By default, no domain name is specified. |
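A minimal DNS operation sketch follows; the DNS server address and domain name are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin dns1
[Sysname-nqa-admin-dns1] type dns
[Sysname-nqa-admin-dns1-dns] destination ip 10.2.2.2
[Sysname-nqa-admin-dns1-dns] resolve-target host.example.com
```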
Configuring the FTP operation
The FTP operation measures the time for the NQA client to transfer a file to or download a file from an FTP server.
When you configure the FTP operation, follow these restrictions and guidelines:
· When you perform the put operation with the filename command configured, make sure the file exists on the NQA client.
· If you get a file from the FTP server, make sure the file specified in the URL exists on the FTP server.
· The NQA client does not save the file obtained from the FTP server.
· Use a small file for the FTP operation. A large file might cause the transfer to time out and fail, or might degrade other services by occupying too much network bandwidth. |
To configure the FTP operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the FTP type and enter its view. |
type ftp |
N/A |
4. Specify the URL of the destination FTP server. |
url url |
By default, no URL is specified for the destination FTP server. Enter the URL in one of the following formats: · ftp://host/filename. · ftp://host:port/filename. When you perform the get operation, the file name is required. |
5. (Optional.) Specify the source IP address of FTP request packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no FTP requests can be sent out. |
6. (Optional.) Specify the FTP operation type. |
operation { get | put } |
By default, the FTP operation type is get, which means obtaining files from the FTP server. |
7. Specify an FTP login username. |
username username |
By default, no FTP login username is configured. |
8. Specify an FTP login password. |
password { cipher | simple } string |
By default, no FTP login password is configured. |
9. (Optional.) Specify the name of a file to be transferred. |
filename file-name |
By default, no file is specified. This step is required if you perform the put operation. |
10. Set the data transmission mode. |
mode { active | passive } |
The default mode is active. |
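A sketch of a get-type FTP operation in passive mode follows; the URL, username, and password are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin ftp1
[Sysname-nqa-admin-ftp1] type ftp
[Sysname-nqa-admin-ftp1-ftp] url ftp://10.2.2.2/test.txt
[Sysname-nqa-admin-ftp1-ftp] username nqauser
[Sysname-nqa-admin-ftp1-ftp] password simple nqapass
[Sysname-nqa-admin-ftp1-ftp] mode passive
```

Because the operation type defaults to get, no operation command is needed here; for a put operation, also configure the filename command.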
Configuring the HTTP operation
An HTTP operation measures the time for the NQA client to obtain data from an HTTP server.
To configure an HTTP operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the HTTP type and enter its view. |
type http |
N/A |
4. Specify the URL of the destination HTTP server. |
url url |
By default, no URL is specified for the destination HTTP server. Enter the URL in one of the following formats: · http://host/resource. · http://host:port/resource. |
5. Specify an HTTP login username. |
username username |
By default, no HTTP login username is specified. |
6. Specify an HTTP login password. |
password { cipher | simple } string |
By default, no HTTP login password is specified. |
7. (Optional.) Specify the source IP address of request packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no request packets can be sent out. |
8. (Optional.) Specify the HTTP version. |
version { v1.0 | v1.1 } |
By default, HTTP 1.0 is used. |
|
9. (Optional.) Specify the HTTP operation type. |
operation { get | post | raw } |
By default, the HTTP operation type is get, which means obtaining data from the HTTP server. If you set the HTTP operation type to raw, configure the content of the HTTP request to be sent to the HTTP server in raw request view. |
10. (Optional.) Enter raw request view. |
raw-request |
Every time you enter raw request view, the previously configured content of the HTTP request is removed. |
11. (Optional.) Specify the HTTP request content. |
Enter or paste the content. |
By default, no content is specified. This step is required for the raw operation. |
12. Save the input and return to HTTP operation view. |
quit |
N/A |
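A sketch of a get-type HTTP operation follows; the URL and credentials are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin http1
[Sysname-nqa-admin-http1] type http
[Sysname-nqa-admin-http1-http] url http://10.2.2.2/index.html
[Sysname-nqa-admin-http1-http] username webuser
[Sysname-nqa-admin-http1-http] password simple webpass
[Sysname-nqa-admin-http1-http] operation get
```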
Configuring the UDP jitter operation
|
CAUTION: To ensure successful UDP jitter operations and avoid affecting existing services, do not perform the operations on well-known ports from 1 to 1023. |
Jitter means inter-packet delay variance. A UDP jitter operation measures unidirectional and bidirectional jitters. The operation result helps you to determine whether the network can carry jitter-sensitive services such as real-time voice and video services.
The UDP jitter operation works as follows:
1. The NQA client sends UDP packets to the destination port.
2. The destination device time stamps each packet it receives, and then sends the packet back to the NQA client.
3. Upon receiving the responses, the NQA client calculates the jitter according to the timestamps.
The UDP jitter operation requires both the NQA server and the NQA client. Before you perform the UDP jitter operation, configure the UDP listening service on the NQA server. For more information about UDP listening service configuration, see "Configuring the NQA server."
Before starting the operation, make sure the network devices are time synchronized by using NTP. For more information about NTP, see "Configuring NTP."
To configure a UDP jitter operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the UDP jitter type and enter its view. |
type udp-jitter |
N/A |
4. Specify the destination address of UDP packets. |
destination ip ip-address |
By default, no destination IP address is specified. The destination IP address must be the same as the IP address of the listening service on the NQA server. |
5. Specify the destination port of UDP packets. |
destination port port-number |
By default, no destination port number is specified. The destination port number must be the same as the port number of the listening service on the NQA server. |
6. (Optional.) Specify the source IP address for UDP packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no UDP packets can be sent out. |
7. (Optional.) Specify the source port number of UDP packets. |
source port port-number |
By default, no source port number is specified. |
8. (Optional.) Set the payload size for each UDP packet. |
data-size size |
The default setting is 100 bytes. |
9. (Optional.) Specify the payload fill string for UDP packets. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
10. (Optional.) Set the number of UDP packets sent in one UDP jitter operation. |
probe packet-number packet-number |
The default setting is 10. |
11. (Optional.) Set the interval for sending UDP packets. |
probe packet-interval interval |
The default setting is 20 milliseconds. |
12. (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out. |
probe packet-timeout timeout |
The default setting is 3000 milliseconds. |
|
NOTE: Use the display nqa result or display nqa statistics command to verify the UDP jitter operation. The display nqa history command does not display the UDP jitter operation results or statistics. |
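Both ends can be sketched together as follows. The addresses and port number are hypothetical, and the server-side commands are described in "Configuring the NQA server":

```
# On the NQA server, enable the listening service on UDP port 9000.
<ServerSysname> system-view
[ServerSysname] nqa server enable
[ServerSysname] nqa server udp-echo 10.2.2.2 9000
# On the NQA client, configure the UDP jitter operation.
<Sysname> system-view
[Sysname] nqa entry admin udpjitter1
[Sysname-nqa-admin-udpjitter1] type udp-jitter
[Sysname-nqa-admin-udpjitter1-udp-jitter] destination ip 10.2.2.2
[Sysname-nqa-admin-udpjitter1-udp-jitter] destination port 9000
[Sysname-nqa-admin-udpjitter1-udp-jitter] probe packet-number 100
```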
Configuring the SNMP operation
The SNMP operation measures the time for the NQA client to get a response packet from an SNMP agent.
To configure the SNMP operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the SNMP type and enter its view. |
type snmp |
N/A |
4. Specify the destination address of SNMP packets. |
destination ip ip-address |
By default, no destination IP address is specified. |
5. (Optional.) Specify the source port of SNMP packets. |
source port port-number |
By default, no source port number is specified. |
6. (Optional.) Specify the source IP address of SNMP packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no SNMP packets can be sent out. |
7. (Optional.) Specify the community name for the SNMP operation if the operation uses the SNMPv1 or SNMPv2c agent. |
community read { cipher | simple } community-name |
By default, the SNMP operation uses community name public. Make sure the specified community name is the same as the community name configured on the SNMP agent. |
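A minimal SNMP operation sketch follows; the agent address and community name are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin snmp1
[Sysname-nqa-admin-snmp1] type snmp
[Sysname-nqa-admin-snmp1-snmp] destination ip 10.2.2.2
[Sysname-nqa-admin-snmp1-snmp] community read simple readtest
```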
Configuring the TCP operation
The TCP operation measures the time for the NQA client to establish a TCP connection to a port on the NQA server.
The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP operation, configure a TCP listening service on the NQA server. For more information about the TCP listening service configuration, see "Configuring the NQA server."
To configure the TCP operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the TCP type and enter its view. |
type tcp |
N/A |
4. Specify the destination address of TCP packets. |
destination ip ip-address |
By default, no destination IP address is specified. The destination address must be the same as the IP address of the listening service configured on the NQA server. |
5. Specify the destination port of TCP packets. |
destination port port-number |
By default, no destination port number is configured. The destination port number must be the same as the port number of the listening service on the NQA server. |
6. (Optional.) Specify the source IP address of TCP packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no TCP packets can be sent out. |
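Both ends can be sketched as follows. The addresses and port number are hypothetical, and the server-side commands are described in "Configuring the NQA server":

```
# On the NQA server, enable the TCP listening service on port 9000.
<ServerSysname> system-view
[ServerSysname] nqa server enable
[ServerSysname] nqa server tcp-connect 10.2.2.2 9000
# On the NQA client, configure the TCP operation.
<Sysname> system-view
[Sysname] nqa entry admin tcp1
[Sysname-nqa-admin-tcp1] type tcp
[Sysname-nqa-admin-tcp1-tcp] destination ip 10.2.2.2
[Sysname-nqa-admin-tcp1-tcp] destination port 9000
```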
Configuring the UDP echo operation
The UDP echo operation measures the round-trip time between the client and a UDP port on the NQA server.
The UDP echo operation requires both the NQA server and the NQA client. Before you perform a UDP echo operation, configure a UDP listening service on the NQA server. For more information about the UDP listening service configuration, see "Configuring the NQA server."
To configure the UDP echo operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the UDP echo type and enter its view. |
type udp-echo |
N/A |
4. Specify the destination address of UDP packets. |
destination ip ip-address |
By default, no destination IP address is specified. The destination address must be the same as the IP address of the listening service configured on the NQA server. |
5. Specify the destination port of UDP packets. |
destination port port-number |
By default, no destination port number is specified. The destination port number must be the same as the port number of the listening service on the NQA server. |
6. (Optional.) Set the payload size for each UDP packet. |
data-size size |
The default setting is 100 bytes. |
7. (Optional.) Specify the payload fill string for UDP packets. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
8. (Optional.) Specify the source port of UDP packets. |
source port port-number |
By default, no source port number is specified. |
9. (Optional.) Specify the source IP address of UDP packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no UDP packets can be sent out. |
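Both ends can be sketched as follows. The addresses and port number are hypothetical, and the server-side commands are described in "Configuring the NQA server":

```
# On the NQA server, enable the UDP listening service on port 8000.
<ServerSysname> system-view
[ServerSysname] nqa server enable
[ServerSysname] nqa server udp-echo 10.2.2.2 8000
# On the NQA client, configure the UDP echo operation.
<Sysname> system-view
[Sysname] nqa entry admin udpecho1
[Sysname-nqa-admin-udpecho1] type udp-echo
[Sysname-nqa-admin-udpecho1-udp-echo] destination ip 10.2.2.2
[Sysname-nqa-admin-udpecho1-udp-echo] destination port 8000
```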
Configuring the UDP tracert operation
The UDP tracert operation determines the routing path from the source device to the destination device.
Before you configure the UDP tracert operation, perform the following tasks:
· Enable sending ICMP time exceeded messages on the intermediate devices between the source and destination devices. If the intermediate devices are H3C devices, use the ip ttl-expires enable command.
· Enable sending ICMP destination unreachable messages on the destination device. If the destination device is an H3C device, use the ip unreachables enable command.
For more information about the ip ttl-expires enable and ip unreachables enable commands, see Layer 3—IP Services Command Reference.
The UDP tracert operation is not supported in IPv6 networks. To determine the routing path that the IPv6 packets traverse from the source to the destination, use the tracert ipv6 command. For more information about the command, see Network Management and Monitoring Command Reference.
To configure the UDP tracert operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the UDP tracert operation type and enter its view. |
type udp-tracert |
N/A |
4. Specify the destination device for the operation. |
· Specify the destination device by its host name: destination host host-name
· Specify the destination device by its IP address: destination ip ip-address |
By default, no destination IP address or host name is specified. |
5. (Optional.) Specify the destination port of UDP packets. |
destination port port-number |
By default, the destination port number is 33434. This port number must be an unused number on the destination device, so that the destination device can reply with ICMP port unreachable messages. |
6. (Optional.) Set the payload size for each UDP packet. |
data-size size |
The default setting is 100 bytes. |
7. (Optional.) Enable the no-fragmentation feature. |
no-fragment enable |
By default, the no-fragmentation feature is disabled. |
8. (Optional.) Set the maximum number of consecutive probe failures. |
max-failure times |
The default setting is 5. |
9. (Optional.) Set the TTL value for UDP packets in the start round of the UDP tracert operation. |
init-ttl value |
The default setting is 1. |
10. (Optional.) Specify an output interface for UDP packets. |
out interface interface-type interface-number |
By default, the output interface for UDP packets is not specified. The NQA client determines the output interface based on the routing table lookup. |
11. (Optional.) Specify the source port of UDP packets. |
source port port-number |
By default, no source port number is specified. |
12. (Optional.) Specify the source IP address of UDP packets. |
· Specify the IP address of the specified interface as the source IP address: source interface interface-type interface-number
· Specify the source IP address: source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. If you execute the source ip and source interface commands multiple times, the most recent configuration takes effect. The specified source interface must be up. The source IP address must be the IP address of a local interface, and the local interface must be up. Otherwise, no probe packets can be sent out. |
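A minimal UDP tracert sketch follows; the destination address and tuning values are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin trace1
[Sysname-nqa-admin-trace1] type udp-tracert
[Sysname-nqa-admin-trace1-udp-tracert] destination ip 10.2.2.2
[Sysname-nqa-admin-trace1-udp-tracert] max-failure 3
[Sysname-nqa-admin-trace1-udp-tracert] init-ttl 1
```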
Configuring the voice operation
|
CAUTION: To ensure successful voice operations and avoid affecting existing services, do not perform the operations on well-known ports from 1 to 1023. |
The voice operation measures VoIP network performance.
The voice operation works as follows:
1. The NQA client sends voice packets at the specified interval to the destination device (NQA server).
The voice packets use one of the following codec types:
· G.711 A-law.
· G.711 μ-law.
· G.729 A-law.
2. The destination device time stamps each voice packet it receives and sends it back to the source.
3. Upon receiving the packet, the source device calculates the jitter and one-way delay based on the timestamp.
The following parameters that reflect VoIP network performance can be calculated by using the metrics gathered by the voice operation:
· Calculated Planning Impairment Factor (ICPIF)—Measures impairment to voice quality in a VoIP network. It is determined by packet loss and delay. A higher value represents a lower service quality.
· Mean Opinion Score (MOS)—Evaluated from the ICPIF value, in the range of 1 to 5. A higher value represents a higher service quality.
The evaluation of voice quality depends on users' tolerance for voice quality. For users with higher tolerance for voice quality, use the advantage-factor command to set an advantage factor. When the system calculates the ICPIF value, it subtracts the advantage factor to modify ICPIF and MOS values for voice quality evaluation.
The voice operation requires both the NQA server and the NQA client. Before you perform a voice operation, configure a UDP listening service on the NQA server. For more information about UDP listening service configuration, see "Configuring the NQA server."
The voice operation cannot be repeated.
Before starting the operation, make sure the network devices are time synchronized by using NTP. For more information about NTP, see "Configuring NTP."
To configure the voice operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the voice type and enter its view. |
type voice |
N/A |
4. Specify the destination address of voice packets. |
destination ip ip-address |
By default, no destination IP address is configured. The destination IP address must be the same as the IP address of the listening service on the NQA server. |
5. Specify the destination port of voice packets. |
destination port port-number |
By default, no destination port number is configured. The destination port number must be the same as the port number of the listening service on the NQA server. |
6. (Optional.) Specify the codec type. |
codec-type { g711a | g711u | g729a } |
By default, the codec type is G.711 A-law. |
7. (Optional.) Set the advantage factor for calculating MOS and ICPIF values. |
advantage-factor factor |
By default, the advantage factor is 0. |
8. (Optional.) Specify the source IP address of voice packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no voice packets can be sent out. |
9. (Optional.) Specify the source port number of voice packets. |
source port port-number |
By default, no source port number is specified. |
10. (Optional.) Set the payload size for each voice packet. |
data-size size |
By default, the voice packet size varies by codec type. The default packet size is 172 bytes for the G.711 A-law and G.711 μ-law codec types, and 32 bytes for the G.729 A-law codec type. |
11. (Optional.) Specify the payload fill string for voice packets. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
12. (Optional.) Set the number of voice packets to be sent in a voice probe. |
probe packet-number packet-number |
The default setting is 1000. |
13. (Optional.) Set the interval for sending voice packets. |
probe packet-interval interval |
The default setting is 20 milliseconds. |
14. (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response as timed out. |
probe packet-timeout timeout |
The default setting is 5000 milliseconds. |
|
NOTE: Use the display nqa result or display nqa statistics command to verify the voice operation. The display nqa history command does not display the voice operation results or statistics. |
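Both ends can be sketched as follows. The addresses, port number, and advantage factor are hypothetical, and the server-side commands are described in "Configuring the NQA server":

```
# On the NQA server, enable the UDP listening service on port 9000.
<ServerSysname> system-view
[ServerSysname] nqa server enable
[ServerSysname] nqa server udp-echo 10.2.2.2 9000
# On the NQA client, configure the voice operation.
<Sysname> system-view
[Sysname] nqa entry admin voice1
[Sysname-nqa-admin-voice1] type voice
[Sysname-nqa-admin-voice1-voice] destination ip 10.2.2.2
[Sysname-nqa-admin-voice1-voice] destination port 9000
[Sysname-nqa-admin-voice1-voice] codec-type g711a
[Sysname-nqa-admin-voice1-voice] advantage-factor 10
```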
Configuring the DLSw operation
The DLSw operation measures the response time of a DLSw device.
To configure the DLSw operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the DLSw type and enter its view. |
type dlsw |
N/A |
4. Specify the destination IP address of probe packets. |
destination ip ip-address |
By default, no destination IP address is specified. |
5. (Optional.) Specify the source IP address of probe packets. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
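A minimal DLSw operation sketch follows; the destination address is hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin dlsw1
[Sysname-nqa-admin-dlsw1] type dlsw
[Sysname-nqa-admin-dlsw1-dlsw] destination ip 10.2.2.2
```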
Configuring the path jitter operation
The path jitter operation measures the jitter, negative jitters, and positive jitters from the NQA client to each hop on the path to the destination.
Before you configure the path jitter operation, perform the following tasks:
· Enable sending ICMP time exceeded messages on the intermediate devices between the source and destination devices. If the intermediate devices are H3C devices, use the ip ttl-expires enable command.
· Enable sending ICMP destination unreachable messages on the destination device. If the destination device is an H3C device, use the ip unreachables enable command.
For more information about the ip ttl-expires enable and ip unreachables enable commands, see Layer 3—IP Services Command Reference.
To configure the path jitter operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify the path jitter type and enter its view. |
type path-jitter |
N/A |
4. Specify the destination address of ICMP echo requests. |
destination ip ip-address |
By default, no destination IP address is specified. |
5. (Optional.) Set the payload size for each ICMP echo request. |
data-size size |
The default setting is 100 bytes. |
6. (Optional.) Specify the payload fill string for ICMP echo requests. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
7. Specify the source IP address of ICMP echo requests. |
source ip ip-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no ICMP echo requests can be sent out. |
8. (Optional.) Set the number of ICMP echo requests to be sent in a path jitter operation. |
probe packet-number packet-number |
The default setting is 10. |
9. (Optional.) Set the interval for sending ICMP echo requests. |
probe packet-interval interval |
The default setting is 20 milliseconds. |
10. (Optional.) Specify how long the NQA client waits for a response from the server before it regards the response times out. |
probe packet-timeout timeout |
The default setting is 3000 milliseconds. |
11. (Optional.) Specify an LSR path. |
lsr-path ip-address&<1-8> |
By default, no LSR path is specified. The path jitter operation uses tracert to detect the LSR path to the destination, and sends ICMP echo requests to each hop on the LSR path. |
12. (Optional.) Perform the path jitter operation only on the destination address. |
target-only |
By default, the path jitter operation is performed on each hop on the path to the destination. |
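A minimal path jitter sketch follows; the destination address is hypothetical, and target-only restricts the probes to the destination:

```
<Sysname> system-view
[Sysname] nqa entry admin pj1
[Sysname-nqa-admin-pj1] type path-jitter
[Sysname-nqa-admin-pj1-path-jitter] destination ip 10.2.2.2
[Sysname-nqa-admin-pj1-path-jitter] target-only
```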
Configuring optional parameters for the NQA operation
Unless otherwise specified, the following optional parameters apply to all types of NQA operations.
The parameter settings take effect only on the current operation.
To configure optional parameters for an NQA operation:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify an NQA operation type and enter its view. |
type { dhcp | dlsw | dns | ftp | http | icmp-echo | icmp-jitter | path-jitter | snmp | tcp | udp-echo | udp-jitter | udp-tracert | voice } |
N/A |
4. Configure a description. |
description text |
By default, no description is configured. |
5. Set the interval at which the NQA operation repeats. |
frequency interval |
For a voice or path jitter operation, the default setting is 60000 milliseconds. For other types of operations, the default setting is 0 milliseconds, and only one operation is performed. If the operation is not completed when the interval expires, the next operation does not start. |
6. Specify the probe times. |
probe count times |
By default: · In a UDP tracert operation, the NQA client performs three probes to each hop to the destination. · In other types of operations, the NQA client performs one probe to the destination per operation. This command is not available for the path jitter and voice operations. Each of these operations performs only one probe. |
7. Set the probe timeout time. |
probe timeout timeout |
The default setting is 3000 milliseconds. This command is not available for the ICMP jitter, path jitter, UDP jitter, or voice operations. |
8. Set the maximum number of hops that the probe packets can traverse. |
ttl value |
The default setting is 30 for probe packets of the UDP tracert operation, and is 20 for probe packets of other types of operations. This command is not available for the DHCP or path jitter operation. |
9. Set the ToS value in the IP header of probe packets. |
tos value |
The default setting is 0. |
10. Enable the routing table bypass feature. |
route-option bypass-route |
By default, the routing table bypass feature is disabled. This command is not available for the DHCP and path jitter operations. This command does not take effect if the destination address of the NQA operation is an IPv6 address. |
11. Specify the VPN instance where the operation is performed. |
vpn-instance vpn-instance-name |
By default, the operation is performed on the public network. |
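The optional parameters can be applied to any supported operation type. The sketch below tunes an ICMP echo operation; the description text and values are hypothetical:

```
<Sysname> system-view
[Sysname] nqa entry admin test1
[Sysname-nqa-admin-test1] type icmp-echo
[Sysname-nqa-admin-test1-icmp-echo] description ping-to-core
[Sysname-nqa-admin-test1-icmp-echo] frequency 5000
[Sysname-nqa-admin-test1-icmp-echo] probe count 3
[Sysname-nqa-admin-test1-icmp-echo] probe timeout 1000
[Sysname-nqa-admin-test1-icmp-echo] tos 32
```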
Configuring the collaboration feature
Collaboration is implemented by associating a reaction entry of an NQA operation with a track entry. The reaction entry monitors the NQA operation. If the number of operation failures reaches the specified threshold, the configured action is triggered.
To configure the collaboration feature:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify an NQA operation type and enter its view. |
type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo } |
The collaboration feature is not available for the ICMP jitter, path jitter, UDP tracert, UDP jitter, or voice operations. |
4. Configure a reaction entry. |
reaction item-number checked-element probe-fail threshold-type consecutive consecutive-occurrences action-type trigger-only |
By default, no reaction entry is configured. You cannot modify the content of an existing reaction entry. |
5. Return to system view. |
quit |
N/A |
6. Associate Track with NQA. |
See High Availability Configuration Guide. |
N/A |
7. Associate Track with an application module. |
See High Availability Configuration Guide. |
N/A |
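For example, the following sketch creates a reaction entry that triggers collaboration after five consecutive probe failures. The entry name, operation type, destination address, and threshold are illustrative assumptions; the Track and application module associations are omitted.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] destination ip 10.2.2.2
# Trigger the associated modules when five consecutive probes fail.
[Sysname-nqa-admin-test-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 5 action-type trigger-only
[Sysname-nqa-admin-test-icmp-echo] quit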
Configuring threshold monitoring
This feature allows you to monitor the NQA operation running status.
Threshold types
An NQA operation supports the following threshold types:
· average—If the average value for the monitored performance metric either exceeds the upper threshold or goes below the lower threshold, a threshold violation occurs.
· accumulate—If the total number of times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
· consecutive—If the number of consecutive times that the monitored performance metric is out of the specified value range reaches or exceeds the specified threshold, a threshold violation occurs.
Threshold violations for the average or accumulate threshold type are determined on a per NQA operation basis. The threshold violations for the consecutive type are determined from the time the NQA operation starts.
Triggered actions
The following actions might be triggered:
· none—NQA displays results only on the terminal screen. It does not send traps to the NMS.
· trap-only—NQA displays results on the terminal screen, and meanwhile it sends traps to the NMS.
· trigger-only—NQA displays results on the terminal screen, and meanwhile triggers other modules for collaboration.
The DNS operation does not support the action of sending trap messages.
Reaction entry
In a reaction entry, configure a monitored element, a threshold type, and an action to be triggered to implement threshold monitoring.
The state of a reaction entry can be invalid, over-threshold, or below-threshold.
· Before an NQA operation starts, the reaction entry is in invalid state.
· If the threshold is violated, the state of the entry is set to over-threshold. Otherwise, the state of the entry is set to below-threshold.
If the action is trap-only for a reaction entry, a trap message is sent to the NMS when the state of the entry changes.
Configuration procedure
Before you configure threshold monitoring, configure the destination address of the trap messages by using the snmp-agent target-host command. For more information about the command, see Network Management and Monitoring Command Reference.
To configure threshold monitoring:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify an NQA operation type and enter its view. |
type { dhcp | dlsw | dns | ftp | http | icmp-echo | icmp-jitter | snmp | tcp | udp-echo | udp-jitter | udp-tracert | voice } |
The threshold monitoring feature is not available for path jitter operations. |
4. Enable sending traps to the NMS when specific conditions are met. |
reaction trap { path-change | probe-failure consecutive-probe-failures | test-complete | test-failure [ accumulate-probe-failures ] } |
By default, no traps are sent to the NMS. The ICMP jitter, UDP jitter, and voice operations support only the test-complete keyword. The following parameters are not available for the UDP tracert operation: · The probe-failure consecutive-probe-failures option. · The accumulate-probe-failures argument. |
5. Configure threshold monitoring. |
Use one of the reaction item-number checked-element commands to monitor the following elements:
· The operation duration (not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice operations).
· Failure times (not supported in the ICMP jitter, UDP jitter, UDP tracert, or voice operations).
· The round-trip time (only for the ICMP jitter, UDP jitter, and voice operations).
· Packet loss (only for the ICMP jitter, UDP jitter, and voice operations).
· The one-way jitter (only for the ICMP jitter, UDP jitter, and voice operations).
· The one-way delay (only for the ICMP jitter, UDP jitter, and voice operations).
· The ICPIF value (only for the voice operation).
· The MOS value (only for the voice operation). |
N/A |
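For example, the following sketch sends a trap to the NMS when three consecutive probes fail. The entry name, destination address, and failure count are illustrative assumptions; the trap destination must already be configured with the snmp-agent target-host command.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] destination ip 10.2.2.2
# Send a trap to the NMS after three consecutive probe failures.
[Sysname-nqa-admin-test-icmp-echo] reaction trap probe-failure 3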
Configuring the NQA statistics collection feature
NQA groups the statistics collected within the same collection interval into a statistics group. To display information about the statistics groups, use the display nqa statistics command.
If you use the frequency command to set the interval to 0 milliseconds for an NQA operation, NQA does not generate any statistics group for the operation.
To configure the NQA statistics collection feature:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Specify an NQA operation type and enter its view. |
type { dhcp | dlsw | dns | ftp | http | icmp-echo | icmp-jitter | path-jitter | snmp | tcp | udp-echo | udp-jitter | voice } |
The NQA statistics collection feature is not available for UDP tracert operations. |
4. (Optional.) Set the interval for collecting the statistics. |
statistics interval interval |
The default setting is 60 minutes. |
5. (Optional.) Set the maximum number of statistics groups that can be saved. |
statistics max-group number |
The default setting is two groups. To disable the NQA statistics collection feature, set the maximum number to 0. When the maximum number of statistics groups is reached, the oldest statistics group is deleted to make room for a new one. |
6. (Optional.) Set the hold time of statistics groups. |
statistics hold-time hold-time |
The default setting is 120 minutes. A statistics group is deleted when its hold time expires. |
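For example, the following sketch collects statistics every 30 minutes, keeps a maximum of five statistics groups, and holds each group for 60 minutes. The entry name and operation type are illustrative assumptions.
<Sysname> system-view
[Sysname] nqa entry admin test
[Sysname-nqa-admin-test] type icmp-echo
[Sysname-nqa-admin-test-icmp-echo] statistics interval 30
[Sysname-nqa-admin-test-icmp-echo] statistics max-group 5
[Sysname-nqa-admin-test-icmp-echo] statistics hold-time 60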
Configuring the saving of NQA history records
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA operation and enter NQA operation view. |
nqa entry admin-name operation-tag |
By default, no NQA operations exist. |
3. Enter NQA operation type view. |
type { dhcp | dlsw | dns | ftp | http | icmp-echo | snmp | tcp | udp-echo | udp-tracert } |
The history record saving feature is not available for the ICMP jitter, UDP jitter, path jitter, or voice operations. |
4. Enable the saving of history records for the NQA operation. |
history-record enable |
By default, this feature is enabled only for the UDP tracert operation. |
5. (Optional.) Set the lifetime of history records. |
history-record keep-time keep-time |
The default setting is 120 minutes. A record is deleted when its lifetime is reached. |
6. (Optional.) Set the maximum number of history records that can be saved. |
history-record number number |
The default setting is 50. If the maximum number of history records for an NQA operation is reached, the earliest history records are deleted. |
7. (Optional.) Display NQA history records. |
display nqa history |
N/A |
Scheduling the NQA operation on the NQA client
The NQA operation runs between the specified start time and the end time (the start time plus the operation duration). If the specified start time is earlier than the system time, the operation starts immediately. If both the specified start time and end time are earlier than the system time, the operation does not start. To display the current system time, use the display clock command.
When you schedule an NQA operation, follow these restrictions and guidelines:
· You cannot enter the operation type view or the operation view of a scheduled NQA operation.
· A system time adjustment does not affect started or completed NQA operations. It affects only the NQA operations that have not started.
To schedule the NQA operation on the NQA client:
Step |
Command |
1. Enter system view. |
system-view |
2. Specify the scheduling parameters for an NQA operation. |
nqa schedule admin-name operation-tag start-time { hh:mm:ss [ yyyy/mm/dd | mm/dd/yyyy ] | now } lifetime { lifetime | forever } [ recurring ] |
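For example, the following sketch schedules the operation admin test in two different ways. The entry name, date, and lifetime value are illustrative assumptions.
# Start the operation immediately and run it until it is manually stopped.
[Sysname] nqa schedule admin test start-time now lifetime forever
# Start the operation at 08:00:00 on the specified date and run it for the specified lifetime.
[Sysname] nqa schedule admin test start-time 08:00:00 2015/06/01 lifetime 3600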
Configuring NQA templates on the NQA client
An NQA template is a set of operation parameters, such as the destination address, the destination port number, and the destination server URL. You can use an NQA template in load balancing, health monitoring, and other feature modules to provide statistics. You can create multiple templates on a device, and each template must be uniquely named.
NQA templates support the ICMP, DNS, TCP, TCP half open, UDP, HTTP, FTP, and RADIUS operation types.
Some operation parameters of an NQA template can be specified either in the template configuration or by the feature that uses the template. When both are specified, the parameters in the template configuration take effect. For example, suppose server load balancing uses an NQA ICMP template for health monitoring. If the destination IP address in the template is different from the real server IP address, the destination IP address in the template takes effect.
NQA template configuration task list
Tasks at a glance |
(Required.) Perform at least one of the following tasks: · Configuring the ICMP template · Configuring the DNS template · Configuring the TCP template · Configuring the TCP half open template · Configuring the UDP template · Configuring the HTTP template · Configuring the FTP template · Configuring the RADIUS template |
(Optional.) Configuring optional parameters for the NQA template |
Configuring the ICMP template
A feature that uses the ICMP template performs the ICMP operation to measure the reachability of a destination device. The ICMP template is supported in both IPv4 and IPv6 networks.
To configure the ICMP template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an ICMP template and enter its view. |
nqa template icmp name |
By default, no ICMP templates exist. |
3. (Optional.) Specify the destination IP address of the operation. |
· IPv4 address: destination ip ip-address
· IPv6 address: destination ipv6 ipv6-address |
By default, no destination IP address is configured. |
4. (Optional.) Set the payload size for each ICMP request. |
data-size size |
The default setting is 100 bytes. |
5. (Optional.) Specify the payload fill string for requests. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
6. (Optional.) Specify the source IP address for ICMP echo requests. |
· Use the IPv4 address of the specified interface as the source IP address: source interface interface-type interface-number
· Specify the source IPv4 address: source ip ip-address
· Specify the source IPv6 address: source ipv6 ipv6-address |
By default, the requests take the primary IP address of the output interface as their source IP address. If you execute the source interface, source ip, and source ipv6 commands multiple times, the most recent configuration takes effect. The specified source interface must be up. The specified source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
7. (Optional.) Specify the next hop IP address for ICMP echo requests. |
· IPv4 address: next-hop ip ip-address
· IPv6 address: next-hop ipv6 ipv6-address |
By default, no IP address of the next hop is configured. |
8. (Optional.) Configure the probe result sending on a per-probe basis. |
reaction trigger per-probe |
By default, the probe result is sent to the feature that uses the template after three consecutive failed or successful probes. If you execute the reaction trigger per-probe and reaction trigger probe-pass commands multiple times, the most recent configuration takes effect. If you execute the reaction trigger per-probe and reaction trigger probe-fail commands multiple times, the most recent configuration takes effect. |
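For example, the following sketch configures an ICMP template for health monitoring. The template name, destination address, and payload size are illustrative assumptions, and the view prompts might differ slightly across software versions.
<Sysname> system-view
[Sysname] nqa template icmp ping-gw
[Sysname-nqatplt-icmp-ping-gw] destination ip 10.1.1.1
[Sysname-nqatplt-icmp-ping-gw] data-size 64
# Report the result of each probe to the feature that uses the template.
[Sysname-nqatplt-icmp-ping-gw] reaction trigger per-probe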
Configuring the DNS template
A feature that uses the DNS template performs the DNS operation to determine the status of the server. It is supported in both IPv4 and IPv6 networks.
In DNS template view, you can specify the address expected to be returned. If the returned IP addresses include the expected address, the DNS server is valid and the operation succeeds. Otherwise, the operation fails.
Create a mapping between the domain name and an address before you perform the DNS operation. For information about configuring the DNS server, see Layer 3—IP Services Configuration Guide.
To configure the DNS template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a DNS template and enter DNS template view. |
nqa template dns name |
By default, no DNS templates exist. |
3. (Optional.) Specify the destination IP address of DNS packets. |
· IPv4 address: destination ip ip-address
· IPv6 address: destination ipv6 ipv6-address |
By default, no destination address is specified. |
4. (Optional.) Specify the destination port number for the operation. |
destination port port-number |
By default, the destination port number is 53. |
5. Specify the domain name to be translated. |
resolve-target domain-name |
By default, no domain name is specified. |
6. Specify the domain name resolution type. |
resolve-type { A | AAAA } |
By default, the type is type A. A type A query resolves a domain name to a mapped IPv4 address, and a type AAAA query to a mapped IPv6 address. |
7. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
8. (Optional.) Specify the source port for probe packets. |
source port port-number |
By default, no source port number is specified. |
9. (Optional.) Specify the IP address that is expected to be returned. |
· IPv4 address: expect ip ip-address
· IPv6 address: expect ipv6 ipv6-address |
By default, no expected IP address is specified. |
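For example, the following sketch configures a DNS template that resolves a domain name through the DNS server at 10.1.1.53. The template name, server address, and domain name are illustrative assumptions.
<Sysname> system-view
[Sysname] nqa template dns dns-check
[Sysname-nqatplt-dns-dns-check] destination ip 10.1.1.53
[Sysname-nqatplt-dns-dns-check] resolve-target www.example.com
[Sysname-nqatplt-dns-dns-check] resolve-type A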
Configuring the TCP template
A feature that uses the TCP template performs the TCP operation to test whether the NQA client can establish a TCP connection to a specific port on the server.
In TCP template view, you can specify the expected data to be returned. If you do not specify the expected data, the TCP operation tests only whether the client can establish a TCP connection to the server.
The TCP operation requires both the NQA server and the NQA client. Before you perform a TCP operation, configure a TCP listening service on the NQA server. For more information about the TCP listening service configuration, see "Configuring the NQA server."
To configure the TCP template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a TCP template and enter its view. |
nqa template tcp name |
By default, no TCP templates exist. |
3. (Optional.) Specify the destination IP address of the operation. |
· IPv4 address: destination ip ip-address
· IPv6 address: destination ipv6 ipv6-address |
By default, no destination address is specified. The destination address must be the same as the IP address of the listening service configured on the NQA server. |
4. (Optional.) Specify the destination port number for the operation. |
destination port port-number |
By default, no destination port number is specified. The destination port number must be the same as the port number of the listening service on the NQA server. |
5. (Optional.) Specify the payload fill string for requests. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
6. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
7. (Optional.) Configure the expected data. |
expect data expression [ offset number ] |
By default, no expected data is configured. The NQA client performs expect data check only when you configure both the data-fill and expect-data commands. |
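For example, the following sketch configures a TCP template that verifies both the TCP connection and the returned data. The template name, destination address, port number, and fill string are illustrative assumptions, and the listening service must already be configured on the NQA server.
<Sysname> system-view
[Sysname] nqa template tcp tcp-check
[Sysname-nqatplt-tcp-tcp-check] destination ip 10.1.1.2
[Sysname-nqatplt-tcp-tcp-check] destination port 9000
# Configure both data-fill and expect data so that the expect data check takes effect.
[Sysname-nqatplt-tcp-tcp-check] data-fill hello
[Sysname-nqatplt-tcp-tcp-check] expect data hello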
Configuring the TCP half open template
A feature that uses the TCP half open template performs the TCP half open operation to test whether the TCP service is available on the server. The TCP half open operation is used when the feature cannot get a response from the TCP server through an existing TCP connection.
In the TCP half open operation, the NQA client sends a TCP ACK packet to the server. If the client receives an RST packet, it considers that the TCP service is available on the server.
To configure the TCP half open template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a TCP half open template and enter its view. |
nqa template tcphalfopen name |
By default, no TCP half open templates exist. |
3. (Optional.) Specify the destination IP address of the operation. |
· IPv4 address: destination ip ip-address
· IPv6 address: destination ipv6 ipv6-address |
By default, no destination address is specified. |
4. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
5. (Optional.) Specify the next hop IP address for the probe packets. |
· IPv4 address: next-hop ip ip-address
· IPv6 address: next-hop ipv6 ipv6-address |
By default, no next hop IP address is configured. |
6. (Optional.) Configure the probe result sending on a per-probe basis. |
reaction trigger per-probe |
By default, the probe result is sent to the feature that uses the template after three consecutive failed or successful probes. If you execute the reaction trigger per-probe and reaction trigger probe-pass commands multiple times, the most recent configuration takes effect. If you execute the reaction trigger per-probe and reaction trigger probe-fail commands multiple times, the most recent configuration takes effect. |
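For example, the following sketch configures a TCP half open template. The template name and destination address are illustrative assumptions.
<Sysname> system-view
[Sysname] nqa template tcphalfopen tcp-probe
[Sysname-nqatplt-tcphalfopen-tcp-probe] destination ip 10.1.1.2
# Report the result of each probe to the feature that uses the template.
[Sysname-nqatplt-tcphalfopen-tcp-probe] reaction trigger per-probe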
Configuring the UDP template
A feature that uses the UDP template performs the UDP operation to test the following items:
· Reachability of a specific port on the NQA server.
· Availability of the requested service on the NQA server.
In UDP template view, you can specify the expected data to be returned. If you do not specify the expected data, the UDP operation tests only whether the client can receive the response packet from the server.
The UDP operation requires both the NQA server and the NQA client. Before you perform a UDP operation, configure a UDP listening service on the NQA server. For more information about the UDP listening service configuration, see "Configuring the NQA server."
To configure the UDP template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a UDP template and enter its view. |
nqa template udp name |
By default, no UDP templates exist. |
3. (Optional.) Specify the destination IP address of the operation. |
· IPv4 address: destination ip ip-address
· IPv6 address: destination ipv6 ipv6-address |
By default, no destination address is specified. The destination address must be the same as the IP address of the listening service configured on the NQA server. |
4. (Optional.) Specify the destination port number for the operation. |
destination port port-number |
By default, no destination port number is specified. The destination port number must be the same as the port number of the listening service on the NQA server. |
5. (Optional.) Specify the payload fill string for the probe packets. |
data-fill string |
The default payload fill string is hexadecimal string 00010203040506070809. |
6. (Optional.) Set the payload size for the probe packets. |
data-size size |
The default setting is 100 bytes. |
7. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
8. (Optional.) Configure the expected data. |
expect data expression [ offset number ] |
By default, no expected data is configured. If you want to configure this command, make sure the data-fill command is already executed. |
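For example, the following sketch configures a UDP template. The template name, destination address, port number, and fill string are illustrative assumptions, and the listening service must already be configured on the NQA server.
<Sysname> system-view
[Sysname] nqa template udp udp-check
[Sysname-nqatplt-udp-udp-check] destination ip 10.1.1.2
[Sysname-nqatplt-udp-udp-check] destination port 9000
# Execute data-fill before expect data so that the expected data check takes effect.
[Sysname-nqatplt-udp-udp-check] data-fill abcd
[Sysname-nqatplt-udp-udp-check] expect data abcd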
Configuring the HTTP template
A feature that uses the HTTP template performs the HTTP operation to measure the time it takes the NQA client to obtain data from an HTTP server.
The expected data is checked only when the data is configured and the HTTP response contains the Content-Length field in the HTTP header.
The status code in an HTTP response is a three-digit decimal number that carries the status information of the HTTP server. The first digit defines the class of the response.
Configure the HTTP server before you perform the HTTP operation.
To configure the HTTP template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an HTTP template and enter its view. |
nqa template http name |
By default, no HTTP templates exist. |
3. Specify the URL of the destination HTTP server. |
url url |
By default, no URL is specified for the destination HTTP server. Enter the URL in one of the following formats: · http://host/resource. · http://host:port/resource. |
4. Specify an HTTP login username. |
username username |
By default, no HTTP login username is specified. |
5. Specify an HTTP login password. |
password { cipher | simple } string |
By default, no HTTP login password is specified. |
6. (Optional.) Specify the HTTP version. |
version { v1.0 | v1.1 } |
By default, HTTP 1.0 is used. |
7. (Optional.) Specify the HTTP operation type. |
operation { get | post | raw } |
By default, the HTTP operation type is get, which means obtaining data from the HTTP server. If you set the HTTP operation type to raw, use the raw-request command to specify the content of the HTTP request to be sent to the HTTP server. |
8. (Optional.) Enter raw request view. |
raw-request |
This step is required for the raw operation. Every time you enter the raw request view, the existing request content configuration is removed. |
9. (Optional.) Enter or paste the content of the HTTP request for the HTTP operation. |
N/A |
This step is required for the raw operation. By default, the HTTP request content is not specified. |
10. (Optional.) Return to HTTP template view. |
quit |
The system automatically saves the configuration in raw request view before it returns to HTTP template view. |
11. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
12. (Optional.) Configure the expected status codes. |
expect status status-list |
By default, no expected status code is configured. |
13. (Optional.) Configure the expected data. |
expect data expression [ offset number ] |
By default, no expected data is configured. |
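For example, the following sketch configures an HTTP template that performs a get operation and expects status code 200. The template name, URL, and credentials are illustrative assumptions.
<Sysname> system-view
[Sysname] nqa template http http-check
[Sysname-nqatplt-http-http-check] url http://10.1.1.2/index.html
[Sysname-nqatplt-http-http-check] username admin
[Sysname-nqatplt-http-http-check] password simple hello123
[Sysname-nqatplt-http-http-check] expect status 200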
Configuring the FTP template
A feature that uses the FTP template performs the FTP operation. The operation measures the time it takes the NQA client to transfer a file to or download a file from an FTP server.
Configure the username and password for the FTP client to log in to the FTP server before you perform an FTP operation. For information about configuring the FTP server, see Fundamentals Configuration Guide.
To configure the FTP template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an FTP template and enter its view. |
nqa template ftp name |
By default, no FTP templates exist. |
3. Specify the URL of the destination FTP server. |
url url |
By default, no URL is specified for the destination FTP server. Enter the URL in one of the following formats: · ftp://host/filename. · ftp://host:port/filename. When you perform the get operation, the file name is required. When you perform the put operation, the filename argument does not take effect, even if it is specified. The file name for the put operation is determined by the filename command. |
4. (Optional.) Specify the FTP operation type. |
operation { get | put } |
By default, the FTP operation type is get, which means obtaining files from the FTP server. |
5. Specify an FTP login username. |
username username |
By default, no FTP login username is specified. |
6. Specify an FTP login password. |
password { cipher | simple } string |
By default, no FTP login password is specified. |
7. (Optional.) Specify the name of a file to be transferred. |
filename filename |
This step is required if you perform the put operation. This configuration does not take effect for the get operation. By default, no file is specified. |
8. Set the data transmission mode. |
mode { active | passive } |
The default mode is active. |
9. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
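For example, the following sketch configures an FTP template that downloads a file in passive mode. The template name, URL, and credentials are illustrative assumptions.
<Sysname> system-view
[Sysname] nqa template ftp ftp-check
[Sysname-nqatplt-ftp-ftp-check] url ftp://10.1.1.2/test.txt
[Sysname-nqatplt-ftp-ftp-check] username nqa
[Sysname-nqatplt-ftp-ftp-check] password simple ftppwd
[Sysname-nqatplt-ftp-ftp-check] mode passive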
Configuring the RADIUS template
A feature that uses the RADIUS template performs the RADIUS operation to check the availability of the authentication service on the RADIUS server.
The RADIUS operation authentication workflow is as follows:
1. The NQA client sends an authentication request (Access-Request) to the RADIUS server. The request carries the username and the password. The password is encrypted with the shared key by using the MD5 algorithm.
2. The RADIUS server authenticates the username and password.
· If the authentication succeeds, the server sends an Access-Accept packet to the NQA client.
· If the authentication fails, the server sends an Access-Reject packet to the NQA client.
If the NQA client can receive the Access-Accept packet from the RADIUS server, the authentication service is available on the RADIUS server. Otherwise, the authentication service is not available on the RADIUS server.
Before you configure the RADIUS template, specify a username, password, and shared key on the RADIUS server. For more information about configuring the RADIUS server, see Security Configuration Guide.
To configure the RADIUS template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a RADIUS template and enter its view. |
nqa template radius name |
By default, no RADIUS templates exist. |
3. (Optional.) Specify the destination IP address of the operation. |
· IPv4 address: destination ip ip-address
· IPv6 address: destination ipv6 ipv6-address |
By default, no destination IP address is configured. |
4. (Optional.) Specify the destination port number for the operation. |
destination port port-number |
By default, the destination port number is 1812. |
5. Specify a username. |
username username |
By default, no username is specified. |
6. Specify a password. |
password { cipher | simple } string |
By default, no password is specified. |
7. Specify a shared key for secure RADIUS authentication. |
key { cipher | simple } string |
By default, no shared key is specified for RADIUS authentication. |
8. (Optional.) Specify the source IP address for the probe packets. |
· IPv4 address: source ip ip-address
· IPv6 address: source ipv6 ipv6-address |
By default, the packets take the primary IP address of the output interface as their source IP address. The source IP address must be the IP address of a local interface, and the interface must be up. Otherwise, no probe packets can be sent out. |
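For example, the following sketch configures a RADIUS template. The template name, server address, credentials, and shared key are illustrative assumptions and must match the values configured on the RADIUS server.
<Sysname> system-view
[Sysname] nqa template radius radius-check
[Sysname-nqatplt-radius-radius-check] destination ip 10.1.1.2
[Sysname-nqatplt-radius-radius-check] username user1
[Sysname-nqatplt-radius-radius-check] password simple userpwd
[Sysname-nqatplt-radius-radius-check] key simple radiuskey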
Configuring optional parameters for the NQA template
Unless otherwise specified, the following optional parameters apply to all types of NQA templates.
The parameter settings take effect only on the current NQA template.
To configure optional parameters for an NQA template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an NQA template and enter its view. |
nqa template { dns | ftp | http | icmp | radius | tcp | tcphalfopen | udp } name |
By default, no NQA templates exist. |
3. Configure a description. |
description text |
By default, no description is configured. |
4. Set the interval at which the NQA operation repeats. |
frequency interval |
The default setting is 5000 milliseconds. If the operation is not completed when the interval expires, the next operation does not start. |
5. Set the probe timeout time. |
probe timeout timeout |
The default setting is 3000 milliseconds. |
6. Set the TTL for probe packets. |
ttl value |
The default setting is 20. |
7. Set the ToS value in the IP header of probe packets. |
tos value |
The default setting is 0. |
8. Specify the VPN instance where the operation is performed. |
vpn-instance vpn-instance-name |
By default, the operation is performed on the public network. |
9. Set the number of consecutive successful probes to determine a successful operation event. |
reaction trigger probe-pass count |
The default setting is 3. If the number of consecutive successful probes for an NQA operation is reached, the NQA client notifies the feature that uses the template of the successful operation event. |
10. Set the number of consecutive probe failures to determine an operation failure. |
reaction trigger probe-fail count |
The default setting is 3. If the number of consecutive probe failures for an NQA operation is reached, the NQA client notifies the feature that uses the NQA template of the operation failure. |
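For example, the following sketch tunes the optional parameters of an ICMP template. The template name and parameter values are illustrative assumptions.
<Sysname> system-view
[Sysname] nqa template icmp ping-gw
# Repeat the operation every 10000 milliseconds and time out each probe after 2000 milliseconds.
[Sysname-nqatplt-icmp-ping-gw] frequency 10000
[Sysname-nqatplt-icmp-ping-gw] probe timeout 2000
# Notify the feature of an operation failure after two consecutive probe failures.
[Sysname-nqatplt-icmp-ping-gw] reaction trigger probe-fail 2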
Displaying and maintaining NQA
Execute display commands in any view.
Task |
Command |
Display history records of NQA operations. |
display nqa history [ admin-name operation-tag ] |
Display the current monitoring results of reaction entries. |
display nqa reaction counters [ admin-name operation-tag [ item-number ] ] |
Display the most recent result of the NQA operation. |
display nqa result [ admin-name operation-tag ] |
Display NQA statistics. |
display nqa statistics [ admin-name operation-tag ] |
Display NQA server status. |
display nqa server status |
NQA configuration examples
For configuration examples of using an NQA template for a feature, see High Availability Configuration Guide.
ICMP echo operation configuration example
Network requirements
As shown in Figure 7, configure an ICMP echo operation on the NQA client (Device A) to test the round-trip time to Device B. The next hop of Device A is Device C.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 7. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create an ICMP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-echo
# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.
[DeviceA-nqa-admin-test1-icmp-echo] destination ip 10.2.2.2
# Specify 10.1.1.2 as the next hop. The ICMP echo requests are sent through Device C to Device B.
[DeviceA-nqa-admin-test1-icmp-echo] next-hop ip 10.1.1.2
# Configure the ICMP echo operation to perform 10 probes.
[DeviceA-nqa-admin-test1-icmp-echo] probe count 10
# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.
[DeviceA-nqa-admin-test1-icmp-echo] probe timeout 500
# Configure the ICMP echo operation to repeat every 5000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-echo] frequency 5000
# Enable saving history records.
[DeviceA-nqa-admin-test1-icmp-echo] history-record enable
# Set the maximum number of history records to 10.
[DeviceA-nqa-admin-test1-icmp-echo] history-record number 10
[DeviceA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the ICMP echo operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the ICMP echo operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 2/5/3
Square-Sum of round trip time: 96
Last succeeded probe time: 2011-08-23 15:00:01.2
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the ICMP echo operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
370 3 Succeeded 2011-08-23 15:00:01.2
369 3 Succeeded 2011-08-23 15:00:01.2
368 3 Succeeded 2011-08-23 15:00:01.2
367 5 Succeeded 2011-08-23 15:00:01.2
366 3 Succeeded 2011-08-23 15:00:01.2
365 3 Succeeded 2011-08-23 15:00:01.2
364 3 Succeeded 2011-08-23 15:00:01.1
363 2 Succeeded 2011-08-23 15:00:01.1
362 3 Succeeded 2011-08-23 15:00:01.1
361 2 Succeeded 2011-08-23 15:00:01.1
The output shows that the packets sent by Device A can reach Device B through Device C. No packet loss occurs during the operation. The minimum, maximum, and average round-trip times are 2, 5, and 3 milliseconds, respectively.
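The summary fields in the display nqa result output are simple aggregates of the per-probe round-trip times. The following Python sketch uses a hypothetical set of ten RTT samples (chosen only to be consistent with the output above; the device does not display the raw samples) to show how the min/max/average and square-sum values are derived:

```python
# Hypothetical per-probe RTTs in milliseconds, consistent with the example
# output above: 10 probes, min 2, max 5, average 3, square-sum 96.
rtts = [2, 2, 3, 3, 3, 3, 3, 3, 3, 5]

min_rtt = min(rtts)
max_rtt = max(rtts)
avg_rtt = sum(rtts) // len(rtts)           # the device reports integer ms
square_sum = sum(r * r for r in rtts)      # basis for RTT variance

print(min_rtt, max_rtt, avg_rtt, square_sum)  # 2 5 3 96
```

The square-sum lets the device (or an NMS polling it) compute the RTT variance without storing every sample.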
ICMP jitter operation configuration example
Network requirements
As shown in Figure 8, configure an ICMP jitter operation to test the jitter between Device A and Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 8. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device A:
# Create an ICMP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type icmp-jitter
# Specify 10.2.2.2 as the destination address for the operation.
[DeviceA-nqa-admin-test1-icmp-jitter] destination ip 10.2.2.2
# Configure the operation to repeat every 1000 milliseconds.
[DeviceA-nqa-admin-test1-icmp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-icmp-jitter] quit
# Start the ICMP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the ICMP jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the ICMP jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 1/2/1
Square-Sum of round trip time: 13
Last packet received time: 2015-03-09 17:40:29.8
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
ICMP-jitter results:
RTT number: 10
Min positive SD: 0 Min positive DS: 0
Max positive SD: 0 Max positive DS: 0
Positive SD number: 0 Positive DS number: 0
Positive SD sum: 0 Positive DS sum: 0
Positive SD average: 0 Positive DS average: 0
Positive SD square-sum: 0 Positive DS square-sum: 0
Min negative SD: 1 Min negative DS: 2
Max negative SD: 1 Max negative DS: 2
Negative SD number: 1 Negative DS number: 1
Negative SD sum: 1 Negative DS sum: 2
Negative SD average: 1 Negative DS average: 2
Negative SD square-sum: 1 Negative DS square-sum: 4
One way results:
Max SD delay: 1 Max DS delay: 2
Min SD delay: 1 Min DS delay: 2
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 1 Sum of DS delay: 2
Square-Sum of SD delay: 1 Square-Sum of DS delay: 4
Lost packets for unknown reason: 0
# Display the statistics of the ICMP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2015-03-09 17:42:10.7
Life time: 156 seconds
Send operation times: 1560 Receive response times: 1560
Min/Max/Average round trip time: 1/2/1
Square-Sum of round trip time: 1563
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
ICMP-jitter results:
RTT number: 1560
Min positive SD: 1 Min positive DS: 1
Max positive SD: 1 Max positive DS: 2
Positive SD number: 18 Positive DS number: 46
Positive SD sum: 18 Positive DS sum: 49
Positive SD average: 1 Positive DS average: 1
Positive SD square-sum: 18 Positive DS square-sum: 55
Min negative SD: 1 Min negative DS: 1
Max negative SD: 1 Max negative DS: 2
Negative SD number: 24 Negative DS number: 57
Negative SD sum: 24 Negative DS sum: 58
Negative SD average: 1 Negative DS average: 1
Negative SD square-sum: 24 Negative DS square-sum: 60
One way results:
Max SD delay: 1 Max DS delay: 2
Min SD delay: 1 Min DS delay: 1
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 4 Sum of DS delay: 5
Square-Sum of SD delay: 4 Square-Sum of DS delay: 7
Lost packets for unknown reason: 0
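In the jitter statistics above, SD (source-to-destination) and DS (destination-to-source) jitter values are the delay differences between consecutive packets in each direction: a positive value means a packet was delayed longer than its predecessor, a negative value means it arrived relatively earlier. A minimal Python sketch, using hypothetical one-way SD delays (the delay values below are illustrative, not from the example), shows how the positive/negative counters are accumulated:

```python
# Hypothetical source-to-destination one-way delays in milliseconds.
sd_delays = [10, 11, 9, 9, 12, 12, 10]

# Jitter: delay difference between each packet and the previous one.
diffs = [b - a for a, b in zip(sd_delays, sd_delays[1:])]

positives = [d for d in diffs if d > 0]
negatives = [-d for d in diffs if d < 0]   # reported as absolute values

stats = {
    "Positive SD number": len(positives),
    "Positive SD sum": sum(positives),
    "Positive SD square-sum": sum(d * d for d in positives),
    "Negative SD number": len(negatives),
    "Negative SD sum": sum(negatives),
    "Negative SD square-sum": sum(d * d for d in negatives),
}
print(stats)
```

Zero differences count toward neither the positive nor the negative jitter totals, which is why the positive and negative packet counts need not add up to the RTT number.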
DHCP operation configuration example
Network requirements
As shown in Figure 9, configure a DHCP operation to test the time required for Switch A to obtain an IP address from the DHCP server (Switch B).
Configuration procedure
# Create a DHCP operation.
<SwitchA> system-view
[SwitchA] nqa entry admin test1
[SwitchA-nqa-admin-test1] type dhcp
# Specify the DHCP server address 10.1.1.2 as the destination address.
[SwitchA-nqa-admin-test1-dhcp] destination ip 10.1.1.2
# Enable the saving of history records.
[SwitchA-nqa-admin-test1-dhcp] history-record enable
[SwitchA-nqa-admin-test1-dhcp] quit
# Start the DHCP operation.
[SwitchA] nqa schedule admin test1 start-time now lifetime forever
# After the DHCP operation runs for a period of time, stop the operation.
[SwitchA] undo nqa schedule admin test1
# Display the most recent result of the DHCP operation.
[SwitchA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 512/512/512
Square-Sum of round trip time: 262144
Last succeeded probe time: 2011-11-22 09:56:03.2
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the DHCP operation.
[SwitchA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 512 Succeeded 2011-11-22 09:56:03.2
The output shows that it took Switch A 512 milliseconds to obtain an IP address from the DHCP server.
DNS operation configuration example
Network requirements
As shown in Figure 10, configure a DNS operation to test whether Device A can resolve a domain name through the DNS server and to measure the resolution time.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 10. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create a DNS operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dns
# Specify the IP address of the DNS server 10.2.2.2 as the destination address.
[DeviceA-nqa-admin-test1-dns] destination ip 10.2.2.2
# Specify host.com as the domain name to be translated.
[DeviceA-nqa-admin-test1-dns] resolve-target host.com
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-dns] history-record enable
[DeviceA-nqa-admin-test1-dns] quit
# Start the DNS operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the DNS operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the DNS operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 62/62/62
Square-Sum of round trip time: 3844
Last succeeded probe time: 2011-11-10 10:49:37.3
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the DNS operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 62 Succeeded 2011-11-10 10:49:37.3
The output shows that it took Device A 62 milliseconds to translate domain name host.com into an IP address.
FTP operation configuration example
Network requirements
As shown in Figure 11, configure an FTP operation to test the time required for Device A to upload a file to the FTP server. The login username and password are admin and systemtest, respectively. The file to be transferred to the FTP server is config.txt.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 11. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create an FTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type ftp
# Specify the URL of the FTP server.
[DeviceA-nqa-admin-test1-ftp] url ftp://10.2.2.2
# Specify 10.1.1.1 as the source IP address.
[DeviceA-nqa-admin-test1-ftp] source ip 10.1.1.1
# Configure the device to upload file config.txt to the FTP server.
[DeviceA-nqa-admin-test1-ftp] operation put
[DeviceA-nqa-admin-test1-ftp] filename config.txt
# Set the username to admin for the FTP operation.
[DeviceA-nqa-admin-test1-ftp] username admin
# Set the password to systemtest for the FTP operation.
[DeviceA-nqa-admin-test1-ftp] password simple systemtest
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-ftp] history-record enable
[DeviceA-nqa-admin-test1-ftp] quit
# Start the FTP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the FTP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the FTP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 173/173/173
Square-Sum of round trip time: 29929
Last succeeded probe time: 2011-11-22 10:07:28.6
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the FTP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 173 Succeeded 2011-11-22 10:07:28.6
The output shows that it took Device A 173 milliseconds to upload a file to the FTP server.
HTTP operation configuration example
Network requirements
As shown in Figure 12, configure an HTTP operation on the NQA client to test the time required to obtain data from the HTTP server.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 12. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create an HTTP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type http
# Specify the URL of the HTTP server.
[DeviceA-nqa-admin-test1-http] url http://10.2.2.2/index.htm
# Configure the HTTP operation to get data from the HTTP server.
[DeviceA-nqa-admin-test1-http] operation get
# Configure the operation to use HTTP version 1.0.
[DeviceA-nqa-admin-test1-http] version v1.0
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-http] history-record enable
[DeviceA-nqa-admin-test1-http] quit
# Start the HTTP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the HTTP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the HTTP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 64/64/64
Square-Sum of round trip time: 4096
Last succeeded probe time: 2011-11-22 10:12:47.9
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the HTTP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 64 Succeeded 2011-11-22 10:12:47.9
The output shows that it took Device A 64 milliseconds to obtain data from the HTTP server.
UDP jitter operation configuration example
Network requirements
As shown in Figure 13, configure a UDP jitter operation to test the jitter, delay, and round-trip time between Device A and Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 13. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on the IP address 10.2.2.2 and UDP port 9000.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a UDP jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-jitter
# Specify 10.2.2.2 as the destination address of the operation.
[DeviceA-nqa-admin-test1-udp-jitter] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-udp-jitter] destination port 9000
# Configure the operation to repeat every 1000 milliseconds.
[DeviceA-nqa-admin-test1-udp-jitter] frequency 1000
[DeviceA-nqa-admin-test1-udp-jitter] quit
# Start the UDP jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the UDP jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 15/32/17
Square-Sum of round trip time: 3235
Last packet received time: 2011-05-29 13:56:17.6
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
UDP-jitter results:
RTT number: 10
Min positive SD: 4 Min positive DS: 1
Max positive SD: 21 Max positive DS: 28
Positive SD number: 5 Positive DS number: 4
Positive SD sum: 52 Positive DS sum: 38
Positive SD average: 10 Positive DS average: 10
Positive SD square-sum: 754 Positive DS square-sum: 460
Min negative SD: 1 Min negative DS: 6
Max negative SD: 13 Max negative DS: 22
Negative SD number: 4 Negative DS number: 5
Negative SD sum: 38 Negative DS sum: 52
Negative SD average: 10 Negative DS average: 10
Negative SD square-sum: 460 Negative DS square-sum: 754
One way results:
Max SD delay: 15 Max DS delay: 16
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 10 Number of DS delay: 10
Sum of SD delay: 78 Sum of DS delay: 85
Square-Sum of SD delay: 666 Square-Sum of DS delay: 787
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
# Display the statistics of the UDP jitter operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2011-05-29 13:56:14.0
Life time: 47 seconds
Send operation times: 410 Receive response times: 410
Min/Max/Average round trip time: 1/93/19
Square-Sum of round trip time: 206176
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
UDP-jitter results:
RTT number: 410
Min positive SD: 3 Min positive DS: 1
Max positive SD: 30 Max positive DS: 79
Positive SD number: 186 Positive DS number: 158
Positive SD sum: 2602 Positive DS sum: 1928
Positive SD average: 13 Positive DS average: 12
Positive SD square-sum: 45304 Positive DS square-sum: 31682
Min negative SD: 1 Min negative DS: 1
Max negative SD: 30 Max negative DS: 78
Negative SD number: 181 Negative DS number: 209
Negative SD sum: 181 Negative DS sum: 209
Negative SD average: 13 Negative DS average: 14
Negative SD square-sum: 46994 Negative DS square-sum: 3030
One way results:
Max SD delay: 46 Max DS delay: 46
Min SD delay: 7 Min DS delay: 7
Number of SD delay: 410 Number of DS delay: 410
Sum of SD delay: 3705 Sum of DS delay: 3891
Square-Sum of SD delay: 45987 Square-Sum of DS delay: 49393
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
SNMP operation configuration example
Network requirements
As shown in Figure 14, configure an SNMP operation to test the time the NQA client takes to get a response from the SNMP agent.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 14. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure the SNMP agent (Device B):
# Set the SNMP version to all.
<DeviceB> system-view
[DeviceB] snmp-agent sys-info version all
# Set the read community to public.
[DeviceB] snmp-agent community read public
# Set the write community to private.
[DeviceB] snmp-agent community write private
4. Configure Device A:
# Create an SNMP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type snmp
# Specify 10.2.2.2 as the destination IP address of the SNMP operation.
[DeviceA-nqa-admin-test1-snmp] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-snmp] history-record enable
[DeviceA-nqa-admin-test1-snmp] quit
# Start the SNMP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the SNMP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the SNMP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 50/50/50
Square-Sum of round trip time: 2500
Last succeeded probe time: 2011-11-22 10:24:41.1
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the SNMP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 50 Succeeded 2011-11-22 10:24:41.1
The output shows that it took Device A 50 milliseconds to receive a response from the SNMP agent.
TCP operation configuration example
Network requirements
As shown in Figure 15, configure a TCP operation to test the time required for Device A to establish a TCP connection with Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 15. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on the IP address 10.2.2.2 and TCP port 9000.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create a TCP operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type tcp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-tcp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-tcp] destination port 9000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-tcp] history-record enable
[DeviceA-nqa-admin-test1-tcp] quit
# Start the TCP operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the TCP operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the TCP operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 13/13/13
Square-Sum of round trip time: 169
Last succeeded probe time: 2011-11-22 10:27:25.1
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the TCP operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 13 Succeeded 2011-11-22 10:27:25.1
The output shows that it took Device A 13 milliseconds to establish a TCP connection to port 9000 on the NQA server.
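What the TCP operation measures is the time to complete the TCP three-way handshake with the listening service. A self-contained Python sketch of that measurement follows; it uses a local listener on the loopback address as a stand-in for the NQA server's tcp-connect service, so the address and port are illustrative only:

```python
import socket
import threading
import time

# Local listener standing in for "nqa server tcp-connect 10.2.2.2 9000".
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 9000
server.listen(1)
host, port = server.getsockname()
threading.Thread(target=server.accept, daemon=True).start()

# The probe: time how long connect() takes, i.e. the three-way handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
start = time.monotonic()
client.connect((host, port))
connect_ms = (time.monotonic() - start) * 1000
client.close()
server.close()

print(f"TCP connect time: {connect_ms:.1f} ms")
```

On a loopback interface this completes in well under a millisecond; across a network the time reflects one round trip plus handshake processing on the server.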
UDP echo operation configuration example
Network requirements
As shown in Figure 16, configure a UDP echo operation on the NQA client to test the round-trip time to Device B. The destination port number is 8000.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 16. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on the IP address 10.2.2.2 and UDP port 8000.
[DeviceB] nqa server udp-echo 10.2.2.2 8000
4. Configure Device A:
# Create a UDP echo operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-echo
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-udp-echo] destination ip 10.2.2.2
# Set the destination port number to 8000.
[DeviceA-nqa-admin-test1-udp-echo] destination port 8000
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-udp-echo] history-record enable
[DeviceA-nqa-admin-test1-udp-echo] quit
# Start the UDP echo operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP echo operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the UDP echo operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 25/25/25
Square-Sum of round trip time: 625
Last succeeded probe time: 2011-11-22 10:36:17.9
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the UDP echo operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 25 Succeeded 2011-11-22 10:36:17.9
The output shows that the round-trip time between Device A and port 8000 on Device B is 25 milliseconds.
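The udp-echo listening service simply returns each received datagram to its sender, and the client derives the round-trip time from the send and receive timestamps. A self-contained Python sketch of that exchange follows; the loopback echo service here is a stand-in for "nqa server udp-echo", so the address and port are illustrative only:

```python
import socket
import threading
import time

# Local echo service standing in for "nqa server udp-echo 10.2.2.2 8000".
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 8000
host, port = server.getsockname()

def echo_once():
    data, addr = server.recvfrom(1024)
    server.sendto(data, addr)          # echo the payload back unchanged

threading.Thread(target=echo_once, daemon=True).start()

# The probe: send a datagram and time the echoed reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
start = time.monotonic()
client.sendto(b"probe", (host, port))
reply, _ = client.recvfrom(1024)
rtt_ms = (time.monotonic() - start) * 1000
client.close()
server.close()

print(f"Round-trip time: {rtt_ms:.1f} ms")
```

If no reply arrives within the timeout, the probe counts as a failure due to timeout, matching the "Failures due to timeout" counter in the output above.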
UDP tracert operation configuration example
Network requirements
As shown in Figure 17, configure a UDP tracert operation to determine the routing path from Device A to Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 17. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Execute the ip ttl-expires enable command on the intermediate devices and execute the ip unreachables enable command on Device B.
4. Configure Device A:
# Create a UDP tracert operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type udp-tracert
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-udp-tracert] destination ip 10.2.2.2
# Set the destination port number to 33434.
[DeviceA-nqa-admin-test1-udp-tracert] destination port 33434
# Configure Device A to perform three probes to each hop.
[DeviceA-nqa-admin-test1-udp-tracert] probe count 3
# Set the probe timeout time to 500 milliseconds.
[DeviceA-nqa-admin-test1-udp-tracert] probe timeout 500
# Configure the UDP tracert operation to repeat every 5000 milliseconds.
[DeviceA-nqa-admin-test1-udp-tracert] frequency 5000
# Specify HundredGigE 1/0/1 as the output interface for UDP packets.
[DeviceA-nqa-admin-test1-udp-tracert] out interface hundredgige 1/0/1
# Enable the no-fragmentation feature.
[DeviceA-nqa-admin-test1-udp-tracert] no-fragment enable
# Set the maximum number of consecutive probe failures to 6.
[DeviceA-nqa-admin-test1-udp-tracert] max-failure 6
# Set the TTL value to 1 for UDP packets in the start round of the UDP tracert operation.
[DeviceA-nqa-admin-test1-udp-tracert] init-ttl 1
# Start the UDP tracert operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the UDP tracert operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the UDP tracert operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 6 Receive response times: 6
Min/Max/Average round trip time: 1/1/1
Square-Sum of round trip time: 1
Last succeeded probe time: 2013-09-09 14:46:06.2
Extended results:
Packet loss in test: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
UDP-tracert results:
TTL Hop IP Time
1 3.1.1.1 2013-09-09 14:46:03.2
2 10.2.2.2 2013-09-09 14:46:06.2
# Display the history records of the UDP tracert operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index TTL Response Hop IP Status Time
1 2 2 10.2.2.2 Succeeded 2013-09-09 14:46:06.2
1 2 1 10.2.2.2 Succeeded 2013-09-09 14:46:05.2
1 2 2 10.2.2.2 Succeeded 2013-09-09 14:46:04.2
1 1 1 3.1.1.1 Succeeded 2013-09-09 14:46:03.2
1 1 2 3.1.1.1 Succeeded 2013-09-09 14:46:02.2
1 1 1 3.1.1.1 Succeeded 2013-09-09 14:46:01.2
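The operation above discovers the path hop by hop: probes start at the configured initial TTL, each intermediate device returns an ICMP time-exceeded message when the TTL expires (which is why ip ttl-expires enable is required on intermediate devices), and the destination returns an ICMP port-unreachable message (which is why ip unreachables enable is required on Device B). The following Python sketch simulates that TTL-stepping loop; probe() is a hypothetical stand-in for sending one UDP probe, and the hop addresses are taken from this example's topology:

```python
# Simulated path from the example: Device C (3.1.1.1), then Device B.
path = ["3.1.1.1", "10.2.2.2"]
destination = "10.2.2.2"

def probe(ttl):
    """Stand-in for one UDP probe: return the address that answered.

    A real probe elicits ICMP time-exceeded from the hop where the TTL
    expires, or ICMP port-unreachable from the destination."""
    return path[min(ttl, len(path)) - 1]

hops = []
ttl = 1                                 # matches "init-ttl 1" above
while True:
    answered_by = probe(ttl)
    hops.append((ttl, answered_by))
    if answered_by == destination:      # port-unreachable: path complete
        break
    ttl += 1

print(hops)  # [(1, '3.1.1.1'), (2, '10.2.2.2')]
```

The real operation additionally repeats each TTL for the configured probe count (three probes per hop above) and stops early after max-failure consecutive probe failures.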
Voice operation configuration example
Network requirements
As shown in Figure 18, configure a voice operation to test the jitter, delay, MOS, and ICPIF values between Device A and Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 18. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen on IP address 10.2.2.2 and UDP port 9000.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create a voice operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type voice
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-voice] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqa-admin-test1-voice] destination port 9000
[DeviceA-nqa-admin-test1-voice] quit
# Start the voice operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the voice operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the voice operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1000 Receive response times: 1000
Min/Max/Average round trip time: 31/1328/33
Square-Sum of round trip time: 2844813
Last packet received time: 2011-06-13 09:49:31.1
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Voice results:
RTT number: 1000
Min positive SD: 1 Min positive DS: 1
Max positive SD: 204 Max positive DS: 1297
Positive SD number: 257 Positive DS number: 259
Positive SD sum: 759 Positive DS sum: 1797
Positive SD average: 2 Positive DS average: 6
Positive SD square-sum: 54127 Positive DS square-sum: 1691967
Min negative SD: 1 Min negative DS: 1
Max negative SD: 203 Max negative DS: 1297
Negative SD number: 255 Negative DS number: 259
Negative SD sum: 759 Negative DS sum: 1796
Negative SD average: 2 Negative DS average: 6
Negative SD square-sum: 53655 Negative DS square-sum: 1691776
One way results:
Max SD delay: 343 Max DS delay: 985
Min SD delay: 343 Min DS delay: 985
Number of SD delay: 1 Number of DS delay: 1
Sum of SD delay: 343 Sum of DS delay: 985
Square-Sum of SD delay: 117649 Square-Sum of DS delay: 970225
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Voice scores:
MOS value: 4.38 ICPIF value: 0
# Display the statistics of the voice operation.
[DeviceA] display nqa statistics admin test1
NQA entry (admin admin, tag test1) test statistics:
NO. : 1
Start time: 2011-06-13 09:45:37.8
Life time: 331 seconds
Send operation times: 4000 Receive response times: 4000
Min/Max/Average round trip time: 15/1328/32
Square-Sum of round trip time: 7160528
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Voice results:
RTT number: 4000
Min positive SD: 1 Min positive DS: 1
Max positive SD: 360 Max positive DS: 1297
Positive SD number: 1030 Positive DS number: 1024
Positive SD sum: 4363 Positive DS sum: 5423
Positive SD average: 4 Positive DS average: 5
Positive SD square-sum: 497725 Positive DS square-sum: 2254957
Min negative SD: 1 Min negative DS: 1
Max negative SD: 360 Max negative DS: 1297
Negative SD number: 1028 Negative DS number: 1022
Negative SD sum: 1028 Negative DS sum: 1022
Negative SD average: 4 Negative DS average: 5
Negative SD square-sum: 495901 Negative DS square-sum: 5419
One way results:
Max SD delay: 359 Max DS delay: 985
Min SD delay: 0 Min DS delay: 0
Number of SD delay: 4 Number of DS delay: 4
Sum of SD delay: 1390 Sum of DS delay: 1079
Square-Sum of SD delay: 483202 Square-Sum of DS delay: 973651
SD lost packets: 0 DS lost packets: 0
Lost packets for unknown reason: 0
Voice scores:
Max MOS value: 4.38 Min MOS value: 4.38
Max ICPIF value: 0 Min ICPIF value: 0
DLSw operation configuration example
Network requirements
As shown in Figure 19, configure a DLSw operation to test the response time of the DLSw device.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 19. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create a DLSw operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type dlsw
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqa-admin-test1-dlsw] destination ip 10.2.2.2
# Enable the saving of history records.
[DeviceA-nqa-admin-test1-dlsw] history-record enable
[DeviceA-nqa-admin-test1-dlsw] quit
# Start the DLSw operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the DLSw operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the DLSw operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Send operation times: 1 Receive response times: 1
Min/Max/Average round trip time: 19/19/19
Square-Sum of round trip time: 361
Last succeeded probe time: 2011-11-22 10:40:27.7
Extended results:
Packet loss ratio: 0%
Failures due to timeout: 0
Failures due to disconnect: 0
Failures due to no connection: 0
Failures due to internal error: 0
Failures due to other errors: 0
# Display the history records of the DLSw operation.
[DeviceA] display nqa history admin test1
NQA entry (admin admin, tag test1) history records:
Index Response Status Time
1 19 Succeeded 2011-11-22 10:40:27.7
The output shows that the response time of the DLSw device is 19 milliseconds.
Path jitter operation configuration example
Network requirements
As shown in Figure 20, configure a path jitter operation to test the round trip time and jitters from Device A to Device B and Device C.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 20. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Execute the ip ttl-expires enable command on Device B and execute the ip unreachables enable command on Device C.
# Create a path jitter operation.
<DeviceA> system-view
[DeviceA] nqa entry admin test1
[DeviceA-nqa-admin-test1] type path-jitter
# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.
[DeviceA-nqa-admin-test1-path-jitter] destination ip 10.2.2.2
# Configure the path jitter operation to repeat every 10000 milliseconds.
[DeviceA-nqa-admin-test1-path-jitter] frequency 10000
[DeviceA-nqa-admin-test1-path-jitter] quit
# Start the path jitter operation.
[DeviceA] nqa schedule admin test1 start-time now lifetime forever
# After the path jitter operation runs for a period of time, stop the operation.
[DeviceA] undo nqa schedule admin test1
# Display the most recent result of the path jitter operation.
[DeviceA] display nqa result admin test1
NQA entry (admin admin, tag test1) test results:
Hop IP 10.1.1.2
Basic Results
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 9/21/14
Square-Sum of round trip time: 2419
Extended Results
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Path-Jitter Results
Jitter number: 9
Min/Max/Average jitter: 1/10/4
Positive jitter number: 6
Min/Max/Average positive jitter: 1/9/4
Sum/Square-Sum positive jitter: 25/173
Negative jitter number: 3
Min/Max/Average negative jitter: 2/10/6
Sum/Square-Sum negative jitter: 19/153
Hop IP 10.2.2.2
Basic Results
Send operation times: 10 Receive response times: 10
Min/Max/Average round trip time: 15/40/28
Square-Sum of round trip time: 4493
Extended Results
Failures due to timeout: 0
Failures due to internal error: 0
Failures due to other errors: 0
Packets out of sequence: 0
Packets arrived late: 0
Path-Jitter Results
Jitter number: 9
Min/Max/Average jitter: 1/10/4
Positive jitter number: 6
Min/Max/Average positive jitter: 1/9/4
Sum/Square-Sum positive jitter: 25/173
Negative jitter number: 3
Min/Max/Average negative jitter: 2/10/6
Sum/Square-Sum negative jitter: 19/153
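For reference, the jitter counters in this output are derived from the differences between consecutive round-trip times. The following is a minimal sketch of that bookkeeping (an illustrative helper, not NQA internals; the treatment of zero differences is an assumption based on the counts shown above):

```python
def jitter_stats(rtts):
    """Compute path-jitter-style statistics from a list of RTT samples (ms)."""
    # Jitter is the difference between each RTT and the previous one.
    diffs = [b - a for a, b in zip(rtts, rtts[1:])]
    pos = [d for d in diffs if d > 0]    # positive jitter: RTT increased
    neg = [-d for d in diffs if d < 0]   # negative jitter: RTT decreased
    return {
        # Zero differences are not counted, matching positive + negative
        # adding up to the jitter number in the sample output.
        "jitter_number": len(pos) + len(neg),
        "positive": {"count": len(pos), "sum": sum(pos),
                     "square_sum": sum(d * d for d in pos)},
        "negative": {"count": len(neg), "sum": sum(neg),
                     "square_sum": sum(d * d for d in neg)},
    }

stats = jitter_stats([10, 12, 9, 9, 15])
```

With the sample list above, the RTT-to-RTT differences are +2, -3, 0, and +6, giving two positive jitters and one negative jitter.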
NQA collaboration configuration example
Network requirements
As shown in Figure 21, configure a static route to Switch C with Switch B as the next hop on Switch A. Associate the static route, a track entry, and an ICMP echo operation to monitor the state of the static route.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 21. (Details not shown.)
2. On Switch A, configure a static route, and associate the static route with track entry 1.
<SwitchA> system-view
[SwitchA] ip route-static 10.1.1.2 24 10.2.1.1 track 1
3. On Switch A, configure an ICMP echo operation:
# Create an NQA operation with the administrator name admin and operation tag test1.
[SwitchA] nqa entry admin test1
# Configure the NQA operation type as ICMP echo.
[SwitchA-nqa-admin-test1] type icmp-echo
# Specify 10.2.1.1 as the destination IP address.
[SwitchA-nqa-admin-test1-icmp-echo] destination ip 10.2.1.1
# Configure the operation to repeat every 100 milliseconds.
[SwitchA-nqa-admin-test1-icmp-echo] frequency 100
# Create reaction entry 1. If the number of consecutive probe failures reaches 5, collaboration is triggered.
[SwitchA-nqa-admin-test1-icmp-echo] reaction 1 checked-element probe-fail threshold-type consecutive 5 action-type trigger-only
[SwitchA-nqa-admin-test1-icmp-echo] quit
# Start the ICMP operation.
[SwitchA] nqa schedule admin test1 start-time now lifetime forever
4. On Switch A, create track entry 1, and associate it with reaction entry 1 of the NQA operation.
[SwitchA] track 1 nqa entry admin test1 reaction 1
Verifying the configuration
# Display information about all the track entries on Switch A.
[SwitchA] display track all
Track ID: 1
State: Positive
Duration: 0 days 0 hours 0 minutes 0 seconds
Notification delay: Positive 0, Negative 0 (in seconds)
Tracked object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Switch A.
[SwitchA] display ip routing-table
Destinations : 13 Routes : 13
Destination/Mask Proto Pre Cost NextHop Interface
0.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
10.1.1.0/24 Static 60 0 10.2.1.1 Vlan3
10.2.1.0/24 Direct 0 0 10.2.1.2 Vlan3
10.2.1.0/32 Direct 0 0 10.2.1.2 Vlan3
10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.255/32 Direct 0 0 10.2.1.2 Vlan3
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
127.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
224.0.0.0/4 Direct 0 0 0.0.0.0 NULL0
224.0.0.0/24 Direct 0 0 0.0.0.0 NULL0
255.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
The output shows that the static route with the next hop 10.2.1.1 is active, and the status of the track entry is positive.
# Remove the IP address of VLAN-interface 3 on Switch B.
<SwitchB> system-view
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] undo ip address
# Display information about all the track entries on Switch A.
[SwitchA] display track all
Track ID: 1
State: Negative
Duration: 0 days 0 hours 0 minutes 0 seconds
Notification delay: Positive 0, Negative 0 (in seconds)
Tracked object:
NQA entry: admin test1
Reaction: 1
# Display brief information about active routes in the routing table on Switch A.
[SwitchA] display ip routing-table
Destinations : 12 Routes : 12
Destination/Mask Proto Pre Cost NextHop Interface
0.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.0/24 Direct 0 0 10.2.1.2 Vlan3
10.2.1.0/32 Direct 0 0 10.2.1.2 Vlan3
10.2.1.2/32 Direct 0 0 127.0.0.1 InLoop0
10.2.1.255/32 Direct 0 0 10.2.1.2 Vlan3
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.0/32 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
127.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
224.0.0.0/4 Direct 0 0 0.0.0.0 NULL0
224.0.0.0/24 Direct 0 0 0.0.0.0 NULL0
255.255.255.255/32 Direct 0 0 127.0.0.1 InLoop0
The output shows that the static route does not exist, and the status of the track entry is negative.
ICMP template configuration example
Network requirements
As shown in Figure 22, configure an ICMP template for a feature to perform the ICMP echo operation from Device A to Device B.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 22. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create ICMP template icmp.
<DeviceA> system-view
[DeviceA] nqa template icmp icmp
# Specify 10.2.2.2 as the destination IP address of ICMP echo requests.
[DeviceA-nqatplt-icmp-icmp] destination ip 10.2.2.2
# Set the probe timeout time to 500 milliseconds for the ICMP echo operation.
[DeviceA-nqatplt-icmp-icmp] probe timeout 500
# Configure the ICMP echo operation to repeat every 3000 milliseconds.
[DeviceA-nqatplt-icmp-icmp] frequency 3000
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-icmp-icmp] reaction trigger probe-fail 2
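The probe-pass and probe-fail triggers above fire on consecutive results. The counting logic can be sketched as follows (illustrative only, not the NQA implementation; whether the device fires again on longer streaks may differ):

```python
def react(probe_results, threshold=2):
    """Return 'pass'/'fail' events fired when a streak reaches the threshold."""
    streak_pass = streak_fail = 0
    events = []
    for ok in probe_results:
        if ok:
            streak_pass += 1
            streak_fail = 0          # a success breaks a failure streak
            if streak_pass == threshold:
                events.append("pass")
        else:
            streak_fail += 1
            streak_pass = 0          # a failure breaks a success streak
            if streak_fail == threshold:
                events.append("fail")
    return events
```

For example, two passes, two failures, then two passes would notify the feature three times: success, failure, success.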
DNS template configuration example
Network requirements
As shown in Figure 23, configure a DNS template for a feature to perform the DNS operation. The operation tests whether Device A can perform the address resolution through the DNS server.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 23. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create DNS template dns.
<DeviceA> system-view
[DeviceA] nqa template dns dns
# Specify the IP address of the DNS server 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-dns-dns] destination ip 10.2.2.2
# Specify host.com as the domain name to be translated.
[DeviceA-nqatplt-dns-dns] resolve-target host.com
# Set the domain name resolution type to type A.
[DeviceA-nqatplt-dns-dns] resolve-type A
# Specify 3.3.3.3 as the expected IP address.
[DeviceA-nqatplt-dns-dns] expect ip 3.3.3.3
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-dns-dns] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-dns-dns] reaction trigger probe-fail 2
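Outside the device, the same check can be approximated with a resolver: resolve the name to A records and compare against the expected address. This sketch uses the system's default resolver rather than a specific DNS server, so it is an analogue of the operation, not a reproduction of it:

```python
import socket

def dns_probe(name, expected_ip):
    """Return True if the name resolves to an A record matching expected_ip."""
    try:
        infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    except socket.gaierror:
        return False                      # resolution failed: probe fails
    addrs = {info[4][0] for info in infos}
    return expected_ip in addrs
```

With the example configuration above, the analogue would be `dns_probe("host.com", "3.3.3.3")` issued against the DNS server at 10.2.2.2.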
TCP template configuration example
Network requirements
As shown in Figure 24, configure a TCP template for a feature to perform the TCP operation. The operation tests whether Device A can establish a TCP connection to Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 24. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to the IP address 10.2.2.2 and TCP port 9000.
[DeviceB] nqa server tcp-connect 10.2.2.2 9000
4. Configure Device A:
# Create TCP template tcp.
<DeviceA> system-view
[DeviceA] nqa template tcp tcp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-tcp-tcp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqatplt-tcp-tcp] destination port 9000
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-tcp-tcp] reaction trigger probe-fail 2
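What the TCP template ultimately verifies is whether a TCP three-way handshake completes against the server's listening port. A rough stand-alone analogue (the local listener and all names are illustrative, not NQA internals):

```python
import socket

def tcp_probe(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True       # handshake completed: probe succeeds
    except OSError:
        return False          # refused or timed out: probe fails

# Demonstration against a local listener on an ephemeral port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

ok = tcp_probe("127.0.0.1", port)    # listener up: expect success
listener.close()
bad = tcp_probe("127.0.0.1", port)   # port closed: expect failure
```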
TCP half open template configuration example
Network requirements
As shown in Figure 25, configure a TCP half open template for a feature to test whether Device B can provide the TCP service for Device A.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 25. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device A:
# Create TCP half open template test.
<DeviceA> system-view
[DeviceA] nqa template tcphalfopen test
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-tcphalfopen-test] destination ip 10.2.2.2
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-tcphalfopen-test] reaction trigger probe-fail 2
UDP template configuration example
Network requirements
As shown in Figure 26, configure a UDP template for a feature to perform the UDP operation. The operation tests whether Device A can receive a response from Device B.
Configuration procedure
1. Assign IP addresses to interfaces, as shown in Figure 26. (Details not shown.)
2. Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
3. Configure Device B:
# Enable the NQA server.
<DeviceB> system-view
[DeviceB] nqa server enable
# Configure a listening service to listen to the IP address 10.2.2.2 and UDP port 9000.
[DeviceB] nqa server udp-echo 10.2.2.2 9000
4. Configure Device A:
# Create UDP template udp.
<DeviceA> system-view
[DeviceA] nqa template udp udp
# Specify 10.2.2.2 as the destination IP address.
[DeviceA-nqatplt-udp-udp] destination ip 10.2.2.2
# Set the destination port number to 9000.
[DeviceA-nqatplt-udp-udp] destination port 9000
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-udp-udp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-udp-udp] reaction trigger probe-fail 2
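The UDP template's pass/fail criterion, a response datagram arriving before the timeout, can be approximated as follows. The local echo server stands in for the NQA server side; all names are illustrative:

```python
import socket
import threading

def udp_probe(host, port, payload=b"probe", timeout=1.0):
    """Send a datagram and report whether an echo comes back in time."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, (host, port))
        try:
            data, _ = s.recvfrom(1024)
            return data == payload   # response received: probe succeeds
        except socket.timeout:
            return False             # no response: probe fails

# Local stand-in echo server on an ephemeral UDP port.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

def echo_once():
    data, addr = srv.recvfrom(1024)
    srv.sendto(data, addr)           # echo the datagram back to the sender

t = threading.Thread(target=echo_once, daemon=True)
t.start()
ok = udp_probe("127.0.0.1", port)
t.join()
srv.close()
```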
HTTP template configuration example
Network requirements
As shown in Figure 27, configure an HTTP template for a feature to perform the HTTP operation. The operation tests whether the NQA client can get data from the HTTP server.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 27. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create the HTTP template http.
<DeviceA> system-view
[DeviceA] nqa template http http
# Specify http://10.2.2.2/index.htm as the URL of the HTTP server.
[DeviceA-nqatplt-http-http] url http://10.2.2.2/index.htm
# Set the HTTP operation type to get.
[DeviceA-nqatplt-http-http] operation get
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-http-http] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-http-http] reaction trigger probe-fail 2
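The HTTP template's get check amounts to issuing a GET for the URL and treating a 2xx response as success. A rough stand-alone analogue using the standard library (the local server below is only a stand-in for the HTTP server in the example):

```python
import http.client
import http.server
import threading

def http_probe(host, port, path="/", timeout=2.0):
    """Return True if GET path returns a 2xx status."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 300
    except OSError:
        return False                 # unreachable or refused: probe fails

# Local stand-in server on an ephemeral port.
srv = http.server.HTTPServer(("127.0.0.1", 0),
                             http.server.SimpleHTTPRequestHandler)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

ok = http_probe("127.0.0.1", port)    # server up: expect success
srv.shutdown()
srv.server_close()
bad = http_probe("127.0.0.1", port)   # server gone: expect failure
```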
FTP template configuration example
Network requirements
As shown in Figure 28, configure an FTP template for a feature to perform the FTP operation. The operation tests whether Device A can upload a file to the FTP server. The login username and password are admin and systemtest, respectively. The file to be transferred to the FTP server is config.txt.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 28. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Create FTP template ftp.
<DeviceA> system-view
[DeviceA] nqa template ftp ftp
# Specify the URL of the FTP server.
[DeviceA-nqatplt-ftp-ftp] url ftp://10.2.2.2
# Specify 10.1.1.1 as the source IP address.
[DeviceA-nqatplt-ftp-ftp] source ip 10.1.1.1
# Configure the device to upload file config.txt to the FTP server.
[DeviceA-nqatplt-ftp-ftp] operation put
[DeviceA-nqatplt-ftp-ftp] filename config.txt
# Set the username to admin for the FTP server login.
[DeviceA-nqatplt-ftp-ftp] username admin
# Set the password to systemtest for the FTP server login.
[DeviceA-nqatplt-ftp-ftp] password simple systemtest
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-ftp-ftp] reaction trigger probe-fail 2
RADIUS template configuration example
Network requirements
As shown in Figure 29, configure a RADIUS template for a feature to test whether the RADIUS server (Device B) can provide authentication service for Device A. The username and password are admin and systemtest, respectively. The shared key is 123456 for secure RADIUS authentication.
Configuration procedure
# Assign IP addresses to interfaces, as shown in Figure 29. (Details not shown.)
# Configure static routes or a routing protocol to make sure the devices can reach each other. (Details not shown.)
# Configure the RADIUS server. (Details not shown.)
# Create RADIUS template radius.
<DeviceA> system-view
[DeviceA] nqa template radius radius
# Specify 10.2.2.2 as the destination IP address of the operation.
[DeviceA-nqatplt-radius-radius] destination ip 10.2.2.2
# Set the username to admin.
[DeviceA-nqatplt-radius-radius] username admin
# Set the password to systemtest.
[DeviceA-nqatplt-radius-radius] password simple systemtest
# Set the shared key to 123456 in plain text for secure RADIUS authentication.
[DeviceA-nqatplt-radius-radius] key simple 123456
# Configure the NQA client to notify the feature of the successful operation event if the number of consecutive successful probes reaches 2.
[DeviceA-nqatplt-radius-radius] reaction trigger probe-pass 2
# Configure the NQA client to notify the feature of the operation failure if the number of consecutive failed probes reaches 2.
[DeviceA-nqatplt-radius-radius] reaction trigger probe-fail 2
Configuring NTP
Before you run the device on a live network, synchronize it with a trusted time source by using the Network Time Protocol (NTP) or by setting the system time manually. Various tasks, including network management, charging, auditing, and distributed computing, depend on an accurate system time setting, because the timestamps of system messages and logs use the system time.
Overview
NTP is typically used in large networks to dynamically synchronize time among network devices. It guarantees higher clock accuracy than manual system clock setting. In a small network that does not require high clock accuracy, you can keep time synchronized among devices by changing their system clocks one by one.
NTP runs over UDP and uses UDP port 123.
NOTE:
NTP is supported only on the following Layer 3 interfaces:
· Layer 3 Ethernet interfaces.
· Layer 3 Ethernet subinterfaces.
· Layer 3 aggregate interfaces.
· Layer 3 aggregate subinterfaces.
· VLAN interfaces.
· Tunnel interfaces.
How NTP works
Figure 30 shows how NTP synchronizes the system time between two devices (Device A and Device B, in this example). Assume that:
· Prior to the time synchronization, the time is set to 10:00:00 am for Device A and 11:00:00 am for Device B.
· Device B is used as the NTP server. Device A is to be synchronized to Device B.
· It takes 1 second for an NTP message to travel from Device A to Device B, and from Device B to Device A.
· It takes 1 second for Device B to process the NTP message.
The synchronization process is as follows:
1. Device A sends Device B an NTP message, which is timestamped when it leaves Device A. The time stamp is 10:00:00 am (T1).
2. When this NTP message arrives at Device B, Device B adds a timestamp showing the time when the message arrived at Device B. The timestamp is 11:00:01 am (T2).
3. When the NTP message leaves Device B, Device B adds a timestamp showing the time when the message left Device B. The timestamp is 11:00:02 am (T3).
4. When Device A receives the NTP message, the local time of Device A is 10:00:03 am (T4).
Up to now, Device A can calculate the following parameters based on the timestamps:
· The roundtrip delay of the NTP message: Delay = (T4 – T1) – (T3 – T2) = 2 seconds.
· Time difference between Device A and Device B: Offset = ((T2 – T1) + (T3 – T4)) /2 = 1 hour.
Based on these parameters, Device A can be synchronized to Device B.
This is only a rough description of how NTP works. For more information, see the related protocols and standards.
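The delay and offset arithmetic above can be checked directly with the example timestamps:

```python
# Reproducing the delay/offset arithmetic from the walkthrough above.
# Timestamps are expressed as seconds since midnight.

T1 = 10 * 3600          # 10:00:00, request leaves Device A (A's clock)
T2 = 11 * 3600 + 1      # 11:00:01, request arrives at Device B (B's clock)
T3 = 11 * 3600 + 2      # 11:00:02, reply leaves Device B (B's clock)
T4 = 10 * 3600 + 3      # 10:00:03, reply arrives at Device A (A's clock)

delay = (T4 - T1) - (T3 - T2)          # time on the wire, both directions
offset = ((T2 - T1) + (T3 - T4)) / 2   # B's clock minus A's clock

# delay -> 2 seconds, offset -> 3600 seconds (1 hour), as stated above.
```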
NTP architecture
NTP uses stratums 1 to 16 to define clock accuracy, as shown in Figure 31. A lower stratum value represents higher accuracy. Clocks at stratums 1 through 15 are in synchronized state, and clocks at stratum 16 are not synchronized.
A stratum 1 NTP server gets its time from an authoritative time source, such as an atomic clock. It provides time for other devices as the primary NTP server. A stratum 2 time server receives its time from a stratum 1 time server, and so on.
To ensure time accuracy and availability, you can specify multiple NTP servers for a device. The device selects an optimal NTP server as the clock source based on parameters such as stratum. The clock that the device selects is called the reference source. For more information about clock selection, see the related protocols and standards.
If the devices in a network cannot synchronize to an authoritative time source, you can perform the following tasks:
· Select a device that has a relatively accurate clock from the network.
· Use the local clock of the device as the reference clock to synchronize other devices in the network.
Association modes
NTP supports the following association modes:
· Client/server mode
· Symmetric active/passive mode
· Broadcast mode
· Multicast mode
Table 2 NTP association modes

Client/server mode:
· Working process: On the client, specify the IP address of the NTP server. The client sends a clock synchronization message to the NTP servers. Upon receiving the message, the servers automatically operate in server mode and send a reply. If the client can be synchronized to multiple time servers, it selects an optimal clock and synchronizes its local clock to the optimal reference source after receiving the replies from the servers.
· Principle: A client can synchronize to a server, but a server cannot synchronize to a client.
· Application scenario: As Figure 31 shows, this mode is intended for configurations where devices of a higher stratum synchronize to devices with a lower stratum.

Symmetric active/passive mode:
· Working process: On the symmetric active peer, specify the IP address of the symmetric passive peer. A symmetric active peer periodically sends clock synchronization messages to a symmetric passive peer. The symmetric passive peer automatically operates in symmetric passive mode and sends a reply. If the symmetric active peer can be synchronized to multiple time servers, it selects an optimal clock and synchronizes its local clock to the optimal reference source after receiving the replies from the servers.
· Principle: A symmetric active peer and a symmetric passive peer can be synchronized to each other. If both of them are synchronized, the peer with a higher stratum is synchronized to the peer with a lower stratum.
· Application scenario: As Figure 31 shows, this mode is most often used between servers with the same stratum to operate as a backup for one another. If a server fails to communicate with all the servers of a lower stratum, the server can still synchronize to the servers of the same stratum.

Broadcast mode:
· Working process: A server periodically sends clock synchronization messages to the broadcast address 255.255.255.255. Clients listen to the broadcast messages from the servers and synchronize to the server according to the broadcast messages. When a client receives the first broadcast message, the client and the server start to exchange messages to calculate the network delay between them. Then, only the broadcast server sends clock synchronization messages.
· Principle: A broadcast client can synchronize to a broadcast server, but a broadcast server cannot synchronize to a broadcast client.
· Application scenario: A broadcast server sends clock synchronization messages to synchronize clients in the same subnet. As Figure 31 shows, broadcast mode is intended for configurations involving one or a few servers and a potentially large client population. Broadcast mode has lower time accuracy than the client/server and symmetric active/passive modes because only the broadcast servers send clock synchronization messages.

Multicast mode:
· Working process: A multicast server periodically sends clock synchronization messages to the user-configured multicast address. Clients listen to the multicast messages from servers and synchronize to the server according to the received messages.
· Principle: A multicast client can synchronize to a multicast server, but a multicast server cannot synchronize to a multicast client.
· Application scenario: A multicast server can provide time synchronization for clients in the same subnet or in different subnets. Multicast mode has lower time accuracy than the client/server and symmetric active/passive modes.
In this document, an "NTP server" or a "server" refers to a device that operates as an NTP server in client/server mode. Time servers refer to all the devices that can provide time synchronization, including NTP servers, NTP symmetric peers, broadcast servers, and multicast servers.
NTP security
To improve time synchronization security, NTP provides the access control and authentication functions.
NTP access control
You can control NTP access by using an ACL. The access rights are in the following order, from the least restrictive to the most restrictive:
· Peer—Allows time requests and NTP control queries (such as alarms, authentication status, and time server information) and allows the local device to synchronize itself to a peer device.
· Server—Allows time requests and NTP control queries, but does not allow the local device to synchronize itself to a peer device.
· Synchronization—Allows only time requests from a system whose address passes the access list criteria.
· Query—Allows only NTP control queries from a peer device to the local device.
When the device receives an NTP request, it matches the request with the access rights in the order from the least restrictive to the most restrictive: peer, server, synchronization, and query.
· If no NTP access control is configured, the peer access right applies.
· If the IP address of the peer device matches a permit statement in an ACL, the access right is granted to the peer device. If a deny statement or no ACL is matched, no access right is granted.
· If no ACL is specified for an access right or the ACL specified for the access right is not created, the access right is not granted.
· If none of the ACLs specified for the access rights is created, the peer access right applies.
· If none of the ACLs specified for the access rights contains rules, no access right is granted.
This feature provides minimal security for a system running NTP. A more secure method is NTP authentication.
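The matching order described above can be sketched as a small lookup. ACLs are modeled here as plain address sets and the function name is illustrative; real ACL semantics (wildcards, deny statements, rule order) are not reproduced:

```python
# Rights tried from least to most restrictive, as described above.
RIGHTS = ["peer", "server", "synchronization", "query"]

def grant(peer_ip, acls):
    """acls maps a right name to a set of permitted addresses, or None if
    no ACL is specified for that right. Returns the granted right or None."""
    if all(acl is None for acl in acls.values()):
        return "peer"            # no NTP access control configured
    for right in RIGHTS:
        acl = acls.get(right)
        if acl is not None and peer_ip in acl:
            return right         # first matching right wins
    return None                  # no access right granted
```

For example, if only the server right has an ACL permitting 10.1.1.1, that peer is granted the server right and all other peers get none.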
NTP authentication
Use this feature to authenticate the NTP messages for security purposes. If an NTP message passes authentication, the device can receive it and get time synchronization information. If not, the device discards the message. This function makes sure the device does not synchronize to an unauthorized time server.
Figure 32 NTP authentication
As shown in Figure 32, NTP authentication works as follows:
1. The sender uses the MD5 algorithm to calculate a digest for the NTP message according to the key identified by a key ID. Then it sends the calculated digest together with the NTP message and key ID to the receiver.
2. Upon receiving the message, the receiver performs the following actions:
a. Finds the key according to the key ID in the message.
b. Uses the MD5 algorithm to calculate the digest.
c. Compares the digest with the digest contained in the NTP message.
- If they are different, the receiver discards the message.
- If they are the same and an NTP session is not required to be created, the receiver responds to the message. For information about NTP sessions, see "Configuring the maximum number of dynamic associations."
- If they are the same and an NTP session is to be created or has been created, the local device determines whether the sender is allowed to use the authentication ID after the NTP session is established. If the sender is allowed to use the authentication ID, the receiver accepts the message. If the sender is not allowed to use the authentication ID, the receiver discards the message.
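The digest calculation and comparison in the steps above can be sketched as follows. This is an illustrative simplification: the real packet layout, key management, and the exact key/message concatenation are defined by RFC 1305/RFC 5905, which this sketch does not reproduce exactly.

```python
import hashlib

def ntp_digest(key, message):
    """Simplified MD5 digest over key material plus the NTP message bytes."""
    return hashlib.md5(key + message).digest()

def receiver_accepts(message, key_id, digest, keychain):
    """Steps 2a-2c above: find the key by ID, recompute, compare."""
    key = keychain.get(key_id)
    if key is None:
        return False             # unknown key ID: discard the message
    return ntp_digest(key, message) == digest

keychain = {42: b"shared-secret"}      # hypothetical key ID -> key mapping
msg = b"example-ntp-packet-bytes"     # stand-in for a real NTP packet
sent_digest = ntp_digest(keychain[42], msg)
```

A receiver configured with the same key ID and key accepts the message; a tampered message or unknown key ID is discarded.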
NTP for MPLS L3VPN instances
On an MPLS L3VPN network, a PE that acts as an NTP client or active peer can synchronize with the NTP server or passive peer in an MPLS L3VPN instance.
As shown in Figure 33, users in VPN 1 and VPN 2 are connected to the MPLS backbone network through provider edge (PE) devices. VPN instances vpn1 and vpn2 have been created for VPN 1 and VPN 2, respectively on the PEs. Services of the two VPN instances are isolated. Time synchronization between PEs and devices in the two VPN instances can be realized if you perform the following tasks:
· Configure the PEs to operate in NTP client or symmetric active mode.
· Specify the VPN instance to which the NTP server or NTP symmetric passive peer belongs.
For more information about MPLS L3VPN, VPN instance, and PE, see MPLS Configuration Guide.
Protocols and standards
· RFC 1305, Network Time Protocol (Version 3) Specification, Implementation and Analysis
· RFC 5905, Network Time Protocol Version 4: Protocol and Algorithms Specification
Configuration restrictions and guidelines
When you configure NTP, follow these restrictions and guidelines:
· The NTP service and SNTP service are mutually exclusive. You can enable only one of them on the device at a time.
· Do not configure NTP on an aggregate member port.
· To avoid frequent time changes or even synchronization failures, do not specify more than one reference source on a network.
· Make sure you use the clock protocol command to specify the time protocol as NTP. For more information about the clock protocol command, see Fundamentals Command Reference.
Configuration task list
Tasks at a glance:
· (Required.) Enabling the NTP service
· (Required.) Perform one or both of the following tasks:
· (Optional.) Configuring access control rights
· (Optional.) Configuring NTP authentication
· (Optional.) Configuring NTP optional parameters
Enabling the NTP service
1. Enter system view.
   system-view
2. Enable the NTP service.
   ntp-service enable
   By default, the NTP service is disabled.
Configuring NTP association mode
This section describes how to configure NTP association mode.
Configuring NTP in client/server mode
Follow these guidelines when you configure an NTP client:
· For the client to synchronize to an NTP server, make sure the server is synchronized by other devices or uses its local clock as a reference source.
· If the stratum level of a server is higher than or equal to that of the client, the client will not synchronize to that server.
· You can configure multiple servers by executing the ntp-service unicast-server or ntp-service ipv6 unicast-server command multiple times.
· When the device operates in client/server mode, specify the IP address for the server on the client.
To configure an NTP client:
1. Enter system view.
   system-view
2. Specify an NTP server for the device. Use one or both of the following commands:
   · Specify an NTP server for the device: ntp-service unicast-server …
   · Specify an IPv6 NTP server for the device: ntp-service ipv6 unicast-server …
   By default, no NTP server is specified.
Configuring NTP in symmetric active/passive mode
Follow these guidelines when you configure a symmetric-active peer:
· For a symmetric-passive peer to process NTP messages from a symmetric-active peer, execute the ntp-service enable command on the symmetric passive peer to enable NTP.
· For time synchronization between the symmetric-active peer and the symmetric-passive peer, make sure either or both of them are in synchronized state.
· You can configure multiple symmetric-passive peers by executing the ntp-service unicast-peer or ntp-service ipv6 unicast-peer command multiple times.
· When the device operates in symmetric active/passive mode, specify on a symmetric-active peer the IP address for a symmetric-passive peer.
To configure a symmetric-active peer:
1. Enter system view.
   system-view
2. Specify a symmetric-passive peer for the device. Use one or both of the following commands:
   · Specify a symmetric-passive peer: ntp-service unicast-peer …
   · Specify an IPv6 symmetric-passive peer: ntp-service ipv6 unicast-peer …
   By default, no symmetric-passive peer is specified.
Configuring NTP in broadcast mode
For a broadcast client to synchronize to a broadcast server, make sure the broadcast server is synchronized by other devices or uses its local clock as a reference source.
Configure NTP in broadcast mode on both the broadcast server and client.
Configuring a broadcast client
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
Enter the interface for receiving NTP broadcast messages. |
3. Configure the device to operate in broadcast client mode. |
ntp-service broadcast-client |
By default, the device does not operate in any NTP association mode. After you execute the command, the device receives NTP broadcast messages from the specified interface. |
Configuring the broadcast server
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
Enter the interface for sending NTP broadcast messages. |
3. Configure the device to operate in NTP broadcast server mode. |
ntp-service broadcast-server [ authentication-keyid keyid | version number ] * |
By default, the device does not operate in any NTP association mode. After you execute the command, the device sends NTP broadcast messages from the specified interface. |
Configuring NTP in multicast mode
For a multicast client to synchronize to a multicast server, make sure the multicast server is synchronized by other devices or uses its local clock as a reference source.
Configure NTP in multicast mode on both the multicast server and client.
Configuring a multicast client
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
Enter the interface for receiving NTP multicast messages. |
3. Configure the device to operate in multicast client mode. |
· Configure the device to operate in multicast client mode: ntp-service multicast-client
· Configure the device to operate in IPv6 multicast client mode: ntp-service ipv6 multicast-client |
By default, the device does not operate in any NTP association mode. After you execute the command, the device receives NTP multicast messages from the specified interface. |
Configuring the multicast server
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
Enter the interface for sending NTP multicast messages. |
3. Configure the device to operate in multicast server mode. |
· Configure the device to operate in multicast server mode: ntp-service multicast-server
· Configure the device to operate in IPv6 multicast server mode: ntp-service ipv6 multicast-server |
By default, the device does not operate in any NTP association mode. After you execute the command, the device sends NTP multicast messages from the specified interface. |
Configuring access control rights
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the NTP service access control right for a peer device to access the local device. |
· Configure the NTP service access control right for a peer device to access the local device: ntp-service access
· Configure the IPv6 NTP service access control right for a peer device to access the local device: ntp-service ipv6 access |
By default, the NTP service access control right for a peer device to access the local device is peer. |
Before you configure the NTP service access control right to the local device, create and configure an ACL associated with the access control right. For more information about ACL, see ACL and QoS Configuration Guide.
Configuring NTP authentication
This section provides instructions for configuring NTP authentication.
Configuring NTP authentication in client/server mode
To ensure a successful NTP authentication, configure the same key ID, authentication algorithm, and key on the server and client. Make sure the peer device is allowed to use the authentication ID.
To configure NTP authentication for a client:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
5. Associate the specified key with an NTP server. |
· Associate the specified key with an NTP server: ntp-service unicast-server ip-address authentication-keyid keyid
· Associate the specified key with an IPv6 NTP server: ntp-service ipv6 unicast-server ipv6-address authentication-keyid keyid |
N/A |
To configure NTP authentication for a server:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
NTP authentication results differ when different configurations are performed on client and server. For more information, see Table 3. (N/A in the table means that whether the configuration is performed does not make any difference.)
Table 3 NTP authentication results
Client: enable NTP authentication | Client: configure a key and configure it as a trusted key | Client: associate the key with an NTP server | Server: enable NTP authentication | Server: configure a key and configure it as a trusted key | Authentication result
Yes | Yes | Yes | Yes | Yes | Succeeded
Yes | Yes | Yes | Yes | No | Failed
Yes | Yes | Yes | No | N/A | Failed
Yes | No | Yes | N/A | N/A | Failed
Yes | N/A | No | N/A | N/A | No authentication
No | N/A | N/A | N/A | N/A | No authentication
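Authentication works by appending a key ID and a message digest, computed over the NTP header with the shared key, to each packet; the receiver recomputes the digest with the trusted key it has configured for that key ID and discards the packet on a mismatch. A hedged Python sketch of the digest computation (the exact wire layout is simplified here; only the key-ID-plus-digest idea is shown):

```python
import hashlib
import hmac

def ntp_auth_fields(key_id, key, ntp_header, algorithm="sha256"):
    """Return (key ID field, digest) for an authenticated NTP packet.

    The sender appends the 4-byte key ID and the digest of the
    header; the receiver recomputes the digest with the trusted
    key configured for that key ID and compares the two.
    """
    key_id_field = key_id.to_bytes(4, "big")
    digest = hmac.new(key, ntp_header, getattr(hashlib, algorithm)).digest()
    return key_id_field, digest

header = bytes(48)                      # placeholder 48-byte NTP header
kid, mac = ntp_auth_fields(42, b"secret", header, "sha256")
```

This is why the table above requires the same key ID, algorithm, and key on both sides: any difference changes the digest and the packet fails authentication.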
Configuring NTP authentication in symmetric active/passive mode
To ensure a successful NTP authentication, configure the same key ID, authentication algorithm, and key on the active peer and passive peer. Make sure the peer device is allowed to use the authentication ID.
To configure NTP authentication for an active peer:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
5. Associate the specified key with a passive peer. |
· Associate the specified key with a passive peer: ntp-service unicast-peer ip-address authentication-keyid keyid
· Associate the specified key with an IPv6 passive peer: ntp-service ipv6 unicast-peer ipv6-address authentication-keyid keyid |
N/A |
To configure NTP authentication for a passive peer:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
NTP authentication results differ when different configurations are performed on active peer and passive peer. For more information, see Table 4. (N/A in the table means that whether the configuration is performed does not make any difference.)
Table 4 NTP authentication results
Active peer: enable NTP authentication | Active peer: configure a key and configure it as a trusted key | Active peer: associate the key with a passive peer | Passive peer: enable NTP authentication | Passive peer: configure a key and configure it as a trusted key | Authentication result

Stratum level of the active and passive peers is not considered:
Yes | Yes | Yes | Yes | Yes | Succeeded
Yes | Yes | Yes | Yes | No | Failed
Yes | Yes | Yes | No | N/A | Failed
Yes | N/A | No | Yes | N/A | Failed
Yes | N/A | No | No | N/A | No authentication
No | N/A | N/A | Yes | N/A | Failed
No | N/A | N/A | No | N/A | No authentication

The active peer has a higher stratum than the passive peer:
Yes | No | Yes | N/A | N/A | Failed

The passive peer has a higher stratum than the active peer:
Yes | No | Yes | Yes | N/A | Failed
Yes | No | Yes | No | N/A | No authentication
Configuring NTP authentication in broadcast mode
To ensure a successful NTP authentication, configure the same key ID, authentication algorithm, and key on the broadcast server and client. Make sure the peer device is allowed to use the authentication ID.
To configure NTP authentication for a broadcast client:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
To configure NTP authentication for a broadcast server:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
5. Enter interface view. |
interface interface-type interface-number |
N/A |
6. Associate the specified key with the broadcast server. |
ntp-service broadcast-server authentication-keyid keyid |
By default, the broadcast server is not associated with any key. |
NTP authentication results differ when different configurations are performed on broadcast client and server. For more information, see Table 5. (N/A in the table means that whether the configuration is performed does not make any difference.)
Table 5 NTP authentication results
Broadcast server: enable NTP authentication | Broadcast server: configure a key and configure it as a trusted key | Broadcast server: associate the key with the broadcast server | Broadcast client: enable NTP authentication | Broadcast client: configure a key and configure it as a trusted key | Authentication result
Yes | Yes | Yes | Yes | Yes | Succeeded
Yes | Yes | Yes | Yes | No | Failed
Yes | Yes | Yes | No | N/A | Failed
Yes | No | Yes | Yes | N/A | Failed
Yes | No | Yes | No | N/A | No authentication
Yes | N/A | No | Yes | N/A | Failed
Yes | N/A | No | No | N/A | No authentication
No | N/A | N/A | Yes | N/A | Failed
No | N/A | N/A | No | N/A | No authentication
Configuring NTP authentication in multicast mode
To ensure a successful NTP authentication, configure the same key ID, authentication algorithm, and key on the multicast server and client. Make sure the peer device is allowed to use the authentication ID.
To configure NTP authentication for a multicast client:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
To configure NTP authentication for a multicast server:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NTP authentication. |
ntp-service authentication enable |
By default, NTP authentication is disabled. |
3. Configure an NTP authentication key. |
ntp-service authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no NTP authentication key exists. |
4. Configure the key as a trusted key. |
ntp-service reliable authentication-keyid keyid |
By default, no authentication key is configured as a trusted key. |
5. Enter interface view. |
interface interface-type interface-number |
N/A |
6. Associate the specified key with the multicast server. |
· Associate the specified key with a multicast server: ntp-service multicast-server authentication-keyid keyid
· Associate the specified key with an IPv6 multicast server: ntp-service ipv6 multicast-server ipv6-address authentication-keyid keyid |
By default, no multicast server is associated with the specified key. |
NTP authentication results differ when different configurations are performed on the multicast client and server. For more information, see Table 6. (N/A in the table means that whether the configuration is performed does not make any difference.)
Table 6 NTP authentication results
Multicast server: enable NTP authentication | Multicast server: configure a key and configure it as a trusted key | Multicast server: associate the key with the multicast server | Multicast client: enable NTP authentication | Multicast client: configure a key and configure it as a trusted key | Authentication result
Yes | Yes | Yes | Yes | Yes | Succeeded
Yes | Yes | Yes | Yes | No | Failed
Yes | Yes | Yes | No | N/A | Failed
Yes | No | Yes | Yes | N/A | Failed
Yes | No | Yes | No | N/A | No authentication
Yes | N/A | No | Yes | N/A | Failed
Yes | N/A | No | No | N/A | No authentication
No | N/A | N/A | Yes | N/A | Failed
No | N/A | N/A | No | N/A | No authentication
Configuring NTP optional parameters
The configuration tasks in this section are optional tasks. Configure them to improve NTP security, performance, or reliability.
Specifying the source interface for NTP messages
To prevent interface status changes from causing NTP communication failures, configure the device to use the IP address of an interface that is always up, such as a loopback interface, as the source IP address for the NTP messages to be sent.
When the device responds to an NTP request, the source IP address of the NTP response is always the IP address of the interface that has received the NTP request.
Follow these guidelines when you specify the source interface for NTP messages:
· If you have specified the source interface for NTP messages in the ntp-service unicast-server/ntp-service ipv6 unicast-server or ntp-service unicast-peer/ntp-service ipv6 unicast-peer command, the specified interface acts as the source interface for NTP messages.
· If you have configured the ntp-service broadcast-server or ntp-service multicast-server/ntp-service ipv6 multicast-server command in an interface view, this interface acts as the source interface for broadcast or multicast NTP messages.
To specify the source interface for NTP messages:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Specify the source interface for NTP messages. |
· Specify the source interface for NTP messages: ntp-service source interface-type interface-number
· Specify the source interface for IPv6 NTP messages: ntp-service ipv6 source interface-type interface-number |
By default, no source interface is specified for NTP messages. |
Disabling an interface from receiving NTP messages
When NTP is enabled, all interfaces can receive NTP messages by default. For security purposes, you can disable some interfaces from receiving NTP messages.
To disable an interface from receiving NTP messages:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Disable the interface from receiving NTP messages. |
· For IPv4: undo ntp-service inbound enable
· For IPv6: undo ntp-service ipv6 inbound enable |
By default, an interface receives NTP messages. |
Configuring the maximum number of dynamic associations
NTP has the following types of associations:
· Static association—A manually created association.
· Dynamic association—Temporary association created by the system during NTP operation. A dynamic association is removed if no messages are exchanged within about 12 minutes.
The following describes how an association is established in different association modes:
· Client/server mode—After you specify an NTP server, the system creates a static association on the client. The server simply responds passively upon the receipt of a message, rather than creating an association (static or dynamic).
· Symmetric active/passive mode—After you specify a symmetric-passive peer on a symmetric active peer, static associations are created on the symmetric-active peer, and dynamic associations are created on the symmetric-passive peer.
· Broadcast or multicast mode—Static associations are created on the server, and dynamic associations are created on the client.
A single device can have a maximum of 128 concurrent associations, including static associations and dynamic associations.
Perform this task to restrict the number of dynamic associations to prevent dynamic associations from occupying too many system resources.
To configure the maximum number of dynamic associations:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the maximum number of dynamic sessions allowed to be established. |
ntp-service max-dynamic-sessions number |
By default, the device can establish a maximum of 100 dynamic sessions. |
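The bookkeeping this section describes can be sketched as a table that caps the number of dynamic entries and evicts any entry that has been idle for about 12 minutes. The 100-session default and the timeout come from this section; the data structure and names are illustrative only:

```python
IDLE_TIMEOUT = 12 * 60   # seconds without messages before removal
MAX_DYNAMIC = 100        # default cap on dynamic associations

class AssociationTable:
    """Illustrative tracker for dynamic NTP associations."""

    def __init__(self):
        self.last_seen = {}   # peer address -> time of last message

    def on_message(self, peer, now):
        """Record a message from peer; reject new peers beyond the cap."""
        self._expire(now)
        if peer not in self.last_seen and len(self.last_seen) >= MAX_DYNAMIC:
            return False      # would exceed the dynamic-session limit
        self.last_seen[peer] = now
        return True

    def _expire(self, now):
        """Drop associations idle for longer than IDLE_TIMEOUT."""
        stale = [p for p, t in self.last_seen.items() if now - t > IDLE_TIMEOUT]
        for p in stale:
            del self.last_seen[p]
```

Capping the table this way is what prevents a burst of short-lived clients from exhausting the device's association resources.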
Setting a DSCP value for NTP packets
The DSCP value determines the sending precedence of a packet.
To set a DSCP value for NTP packets:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Set a DSCP value for NTP packets. |
· For IPv4 packets: ntp-service dscp dscp-value
· For IPv6 packets: ntp-service ipv6 dscp dscp-value |
The default DSCP value is 48 for IPv4 packets and 56 for IPv6 packets. |
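The DSCP value occupies the upper six bits of the IPv4 ToS byte (and of the IPv6 Traffic Class field), so the defaults of 48 (CS6) and 56 (CS7) correspond to byte values 0xC0 and 0xE0. A quick check of that arithmetic, with nothing device-specific assumed:

```python
def dscp_to_tos(dscp):
    """Shift a 6-bit DSCP into the upper bits of the ToS/Traffic Class byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must fit in 6 bits")
    return dscp << 2

# Defaults from this section: 48 for IPv4 NTP packets, 56 for IPv6.
```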
Configuring the local clock as a reference source
Follow these guidelines when you configure the local clock as a reference source:
· Make sure the local clock can provide the time accuracy required for the network. After you configure the local clock as a reference source, the local clock is synchronized, and can operate as a time server to synchronize other devices in the network. If the local clock is incorrect, timing errors occur.
· Before you configure this feature, adjust the local system time to make sure it is accurate.
· Devices differ in clock precision. To avoid network flapping and clock synchronization failure, do not configure multiple reference sources on the same network segment.
To configure the local clock as a reference source:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the local clock as a reference source. |
ntp-service refclock-master [ ip-address ] [ stratum ] |
By default, the device does not use the local clock as a reference source. |
Displaying and maintaining NTP
Execute display commands in any view.
Task |
Command |
Display information about IPv6 NTP associations. |
display ntp-service ipv6 sessions [ verbose ] |
Display information about IPv4 NTP associations. |
display ntp-service sessions [ verbose ] |
Display information about NTP service status. |
display ntp-service status |
Display brief information about the NTP servers from the local device back to the primary reference source. |
display ntp-service trace [ source interface-type interface-number ] |
NTP configuration examples
NTP client/server mode configuration example
Network requirements
As shown in Figure 34, perform the following tasks:
· Configure the local clock of Device A as a reference source, with stratum level 2.
· Configure Device B to operate in client mode and Device A to be used as the NTP server for Device B.
Configuration procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each other, as shown in Figure 34. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify Device A as the NTP server of Device B so that Device B is synchronized to Device A.
[DeviceB] ntp-service unicast-server 1.0.1.11
4. Verify the configuration:
# Verify that Device B has synchronized to Device A, and the clock stratum level is 3 on Device B and 2 on Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 1.0.1.11
Local mode: client
Reference clock ID: 1.0.1.11
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00383 ms
Root dispersion: 16.26572 ms
Reference time: d0c6033f.b9923965 Wed, Dec 29 2010 18:58:07.724
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[12345]1.0.1.11 127.127.1.0 2 1 64 15 -4.0 0.0038 16.262
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Total sessions: 1
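The Reference time in the display ntp-service status output is a 64-bit NTP timestamp printed in hexadecimal: the first 32 bits count seconds since the NTP epoch (1900-01-01 00:00:00 UTC) and the last 32 bits are a binary fraction of a second. A Python sketch that decodes such a value:

```python
from datetime import datetime, timedelta

NTP_EPOCH = datetime(1900, 1, 1)

def parse_ntp_timestamp(hex_ts):
    """Decode an NTP timestamp such as 'd0c6033f.b9923965' to UTC."""
    sec_hex, frac_hex = hex_ts.split(".")
    seconds = int(sec_hex, 16)
    fraction = int(frac_hex, 16) / 2**32   # binary fraction of a second
    return NTP_EPOCH + timedelta(seconds=seconds + fraction)

when = parse_ntp_timestamp("d0c6033f.b9923965")
# Decodes to Dec 29 2010 18:58:07.724 UTC, matching the sample output above.
```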
IPv6 NTP client/server mode configuration example
Network requirements
As shown in Figure 35, perform the following tasks:
· Configure the local clock of Device A as a reference source, with stratum level 2.
· Configure Device B to operate in client mode and Device A to be used as the IPv6 NTP server for Device B.
Configuration procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each other, as shown in Figure 35. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Specify Device A as the IPv6 NTP server of Device B so that Device B is synchronized to Device A.
[DeviceB] ntp-service ipv6 unicast-server 3000::34
4. Verify the configuration:
# Verify that Device B has synchronized to Device A, and the clock stratum level is 3 on Device B and 2 on Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::34
Local mode: client
Reference clock ID: 163.29.247.19
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.02649 ms
Root dispersion: 12.24641 ms
Reference time: d0c60419.9952fb3e Wed, Dec 29 2010 19:01:45.598
# Verify that an IPv6 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Source: [12345]3000::34
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 15 Poll interval: 64
Last receive time: 19 Offset: 0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
NTP symmetric active/passive mode configuration example
Network requirements
As shown in Figure 36, perform the following tasks:
· Configure the local clock of Device A as a reference source, with stratum level 2.
· Configure Device A to operate in symmetric-active mode and specify Device B as the passive peer of Device A.
Configuration procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each other, as shown in Figure 36. (Details not shown.)
2. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
3. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Configure Device B as a symmetric passive peer.
[DeviceA] ntp-service unicast-peer 3.0.1.32
4. Verify the configuration:
# Verify that Device B has synchronized to Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: sym_passive
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.000916 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00609 ms
Root dispersion: 1.95859 ms
Reference time: 83aec681.deb6d3e5 Wed, Jan 8 2014 14:33:11.081
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[12]3.0.1.31 127.127.1.0 2 62 64 34 0.4251 6.0882 1392.1
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Total sessions: 1
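The offset and delay columns in the output above come from the standard NTP on-wire calculation over four timestamps: t1 (request sent), t2 (request received by the peer), t3 (reply sent), and t4 (reply received). A small sketch with illustrative numbers:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock offset and round-trip delay (RFC 5905)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Peer clock 5 s ahead, 0.2 s of symmetric network delay each way,
# 0.1 s of processing time on the peer:
offset, delay = ntp_offset_delay(t1=100.0, t2=105.2, t3=105.3, t4=100.5)
# offset is about 5.0 s, delay about 0.4 s
```

The formula assumes the network delay is roughly symmetric, which is why NTP accuracy degrades on paths with strongly asymmetric latency.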
IPv6 NTP symmetric active/passive mode configuration example
Network requirements
As shown in Figure 37, perform the following tasks:
· Configure the local clock of Device A as a reference source, with stratum level 2.
· Configure Device A to operate in symmetric-active mode and specify Device B as the IPv6 passive peer of Device A.
Configuration procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each other, as shown in Figure 37. (Details not shown.)
2. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
3. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Configure Device B as an IPv6 symmetric passive peer.
[DeviceA] ntp-service ipv6 unicast-peer 3000::36
4. Verify the configuration:
# Verify that Device B has synchronized to Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::35
Local mode: sym_passive
Reference clock ID: 251.73.79.32
Leap indicator: 11
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.01855 ms
Root dispersion: 9.23483 ms
Reference time: d0c6047c.97199f9f Wed, Dec 29 2010 19:03:24.590
# Verify that an IPv6 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Source: [1234]3000::35
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 15 Poll interval: 64
Last receive time: 19 Offset: 0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
NTP broadcast mode configuration example
Network requirements
As shown in Figure 38, Switch C functions as the NTP server for multiple devices on a network segment and synchronizes the time for the devices.
· Configure Switch C's local clock as a reference source, with stratum level 2.
· Configure Switch C to operate in broadcast server mode and send broadcast messages from VLAN-interface 2.
· Configure Switch A and Switch B to operate in broadcast client mode, and listen to broadcast messages through VLAN-interface 2.
Configuration procedure
1. Assign an IP address to each interface, and make sure Switch A, Switch B, and Switch C can reach each other, as shown in Figure 38. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in broadcast server mode and send broadcast messages through VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server
3. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Configure Switch A to operate in broadcast client mode and receive broadcast messages on VLAN-interface 2.
[SwitchA] interface vlan-interface 2
[SwitchA-Vlan-interface2] ntp-service broadcast-client
4. Configure Switch B:
# Enable the NTP service.
<SwitchB> system-view
[SwitchB] ntp-service enable
# Configure Switch B to operate in broadcast client mode and receive broadcast messages on VLAN-interface 2.
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ntp-service broadcast-client
5. Verify the configuration:
# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and 2 on Switch C.
[SwitchA-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.044281 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00229 ms
Root dispersion: 4.12572 ms
Reference time: d0d289fe.ec43c720 Sat, Jan 8 2011 7:00:14.922
# Verify that an IPv4 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 2 1 64 519 -0.0 0.0022 4.1257
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
NTP multicast mode configuration example
Network requirements
As shown in Figure 39, Switch C functions as the NTP server for multiple devices on different network segments and synchronizes the time for the devices.
· Configure Switch C's local clock as a reference source, with stratum level 2.
· Configure Switch C to operate in multicast server mode and send multicast messages from VLAN-interface 2.
· Configure Switch A and Switch D to operate in multicast client mode and receive multicast messages through VLAN-interface 3 and VLAN-interface 2, respectively.
Configuration procedure
1. Assign an IP address to each interface, and make sure the switches can reach each other, as shown in Figure 39. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in multicast server mode and send multicast messages through VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service multicast-server
3. Configure Switch D:
# Enable the NTP service.
<SwitchD> system-view
[SwitchD] ntp-service enable
# Configure Switch D to operate in multicast client mode and receive multicast messages on VLAN-interface 2.
[SwitchD] interface vlan-interface 2
[SwitchD-Vlan-interface2] ntp-service multicast-client
4. Verify the configuration:
Switch D and Switch C are on the same subnet, so Switch D can do the following:
- Receive the multicast messages from Switch C without having the multicast functions enabled.
- Synchronize to Switch C.
# Verify that Switch D has synchronized to Switch C, and the clock stratum level is 3 on Switch D and 2 on Switch C.
[SwitchD-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.044281 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00229 ms
Root dispersion: 4.12572 ms
Reference time: d0d289fe.ec43c720 Sat, Jan 8 2011 7:00:14.922
# Verify that an IPv4 NTP association has been established between Switch D and Switch C.
[SwitchD-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 2 1 64 519 -0.0 0.0022 4.1257
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
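The Reference time field in these outputs is a 64-bit NTP timestamp printed in hexadecimal: 32 bits of whole seconds since the NTP epoch (January 1, 1900, UTC), a dot, and 32 bits of fraction. As a rough illustration (the helper name is ours, not a device feature), the hex value can be decoded in Python:

```python
from datetime import datetime, timedelta, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def ntp_hex_to_datetime(ts_hex: str) -> datetime:
    """Convert an NTP timestamp shown as 'ssssssss.ffffffff' hex to UTC."""
    secs_hex, frac_hex = ts_hex.split(".")
    seconds = int(secs_hex, 16)
    fraction = int(frac_hex, 16) / 2**32  # fraction of one second
    return NTP_EPOCH + timedelta(seconds=seconds + fraction)

# Timestamp taken from a display ntp-service status output below.
dt = ntp_hex_to_datetime("d0c62687.ab1bba7d")
print(dt.strftime("%a, %b %d %Y %H:%M:%S"))  # Wed, Dec 29 2010 21:28:39
```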
5. Configure Switch B:
Because Switch A and Switch C are on different subnets, you must enable the multicast functions on Switch B before Switch A can receive multicast messages from Switch C.
# Enable IP multicast routing, PIM-DM, IGMP, and IGMP snooping.
<SwitchB> system-view
[SwitchB] multicast routing
[SwitchB-mrib] quit
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] pim dm
[SwitchB-Vlan-interface2] quit
[SwitchB] vlan 3
[SwitchB-vlan3] port hundredgige 1/0/1
[SwitchB-vlan3] quit
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] igmp enable
[SwitchB-Vlan-interface3] igmp static-group 224.0.1.1
[SwitchB-Vlan-interface3] quit
[SwitchB] igmp-snooping
[SwitchB-igmp-snooping] quit
[SwitchB] interface hundredgige 1/0/1
[SwitchB-HundredGigE1/0/1] igmp-snooping static-group 224.0.1.1 vlan 3
6. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Configure Switch A to operate in multicast client mode and receive multicast messages on VLAN-interface 3.
[SwitchA] interface vlan-interface 3
[SwitchA-Vlan-interface3] ntp-service multicast-client
7. Verify the configuration:
# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and 2 on Switch C.
[SwitchA-Vlan-interface3] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.165741 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00534 ms
Root dispersion: 4.51282 ms
Reference time: d0c61289.10b1193f Wed, Dec 29 2010 20:03:21.065
# Verify that an IPv4 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface3] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1234]3.0.1.31 127.127.1.0 2 247 64 381 -0.0 0.0053 4.5128
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
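In the session output, the reach column is a reachability shift register for the last eight polls. Assuming the convention of standard NTP implementations (where the value is printed in octal, newest poll in the least-significant bit), it can be decoded as follows; the function name is illustrative:

```python
def decode_reach(reach: str) -> list:
    """Decode an NTP reach value (octal, per standard NTP displays):
    an 8-bit shift register of the last 8 polls, oldest poll first in
    the returned list; True means that poll received a reply."""
    bits = int(reach, 8)
    return [bool((bits >> i) & 1) for i in range(7, -1, -1)]

print(decode_reach("377"))  # all of the last 8 polls were answered
print(decode_reach("1"))    # only the most recent poll was answered
```

A value such as 247 in the output above therefore indicates that some of the last eight polls went unanswered.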
IPv6 NTP multicast mode configuration example
Network requirements
As shown in Figure 40, Switch C functions as the NTP server for multiple devices on different network segments and synchronizes the time for the devices.
· Configure Switch C's local clock as a reference source, with stratum level 2.
· Configure Switch C to operate in IPv6 multicast server mode and send IPv6 multicast messages from VLAN-interface 2.
· Configure Switch A and Switch D to operate in IPv6 multicast client mode and receive IPv6 multicast messages through VLAN-interface 3 and VLAN-interface 2, respectively.
NOTE: This switch series does not support IPv6 PIM. Switch B must be a Layer 3 switch that supports IPv6 PIM. |
Configuration procedure
1. Assign an IP address to each interface, and make sure the switches can reach each other, as shown in Figure 40. (Details not shown.)
2. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[SwitchC] ntp-service refclock-master 2
# Configure Switch C to operate in IPv6 multicast server mode and send multicast messages through VLAN-interface 2.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service ipv6 multicast-server ff24::1
3. Configure Switch D:
# Enable the NTP service.
<SwitchD> system-view
[SwitchD] ntp-service enable
# Configure Switch D to operate in IPv6 multicast client mode and receive multicast messages on VLAN-interface 2.
[SwitchD] interface vlan-interface 2
[SwitchD-Vlan-interface2] ntp-service ipv6 multicast-client ff24::1
4. Verify the configuration:
Because Switch D and Switch C are on the same subnet, Switch D can do the following:
◦ Receive the IPv6 multicast messages from Switch C without having any IPv6 multicast features enabled.
◦ Synchronize to Switch C.
# Verify that Switch D has synchronized to Switch C, and the clock stratum level is 3 on Switch D and 2 on Switch C.
[SwitchD-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::2
Local mode: bclient
Reference clock ID: 165.84.121.65
Leap indicator: 00
Clock jitter: 0.000977 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00000 ms
Root dispersion: 8.00578 ms
Reference time: d0c60680.9754fb17 Wed, Dec 29 2010 19:12:00.591
# Verify that an IPv6 NTP association has been established between Switch D and Switch C.
[SwitchD-Vlan-interface2] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Source: [1234]3000::2
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 111 Poll interval: 64
Last receive time: 23 Offset: -0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
5. Configure Switch B:
Because Switch A and Switch C are on different subnets, you must enable the IPv6 multicast functions on Switch B before Switch A can receive IPv6 multicast messages from Switch C.
# Enable IPv6 multicast functions.
<SwitchB> system-view
[SwitchB] ipv6 multicast routing
[SwitchB-mrib6] quit
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ipv6 pim dm
[SwitchB-Vlan-interface2] quit
[SwitchB] vlan 3
[SwitchB-vlan3] port hundredgige 1/0/1
[SwitchB-vlan3] quit
[SwitchB] interface vlan-interface 3
[SwitchB-Vlan-interface3] mld enable
[SwitchB-Vlan-interface3] mld static-group ff24::1
[SwitchB-Vlan-interface3] quit
[SwitchB] mld-snooping
[SwitchB-mld-snooping] quit
[SwitchB] interface hundredgige 1/0/1
[SwitchB-HundredGigE1/0/1] mld-snooping static-group ff24::1 vlan 3
6. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Configure Switch A to operate in IPv6 multicast client mode and receive IPv6 multicast messages on VLAN-interface 3.
[SwitchA] interface vlan-interface 3
[SwitchA-Vlan-interface3] ntp-service ipv6 multicast-client ff24::1
7. Verify the configuration:
# Verify that Switch A has synchronized to Switch C, and the clock stratum level is 3 on Switch A and 2 on Switch C.
[SwitchA-Vlan-interface3] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 3000::2
Local mode: bclient
Reference clock ID: 165.84.121.65
Leap indicator: 00
Clock jitter: 0.165741 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00534 ms
Root dispersion: 4.51282 ms
Reference time: d0c61289.10b1193f Wed, Dec 29 2010 20:03:21.065
# Verify that an IPv6 NTP association has been established between Switch A and Switch C.
[SwitchA-Vlan-interface3] display ntp-service ipv6 sessions
Notes: 1 source(master), 2 source(peer), 3 selected, 4 candidate, 5 configured.
Source: [124]3000::2
Reference: 127.127.1.0 Clock stratum: 2
Reachabilities: 2 Poll interval: 64
Last receive time: 71 Offset: -0.0
Roundtrip delay: 0.0 Dispersion: 0.0
Total sessions: 1
Configuration example for NTP client/server mode with authentication
Network requirements
As shown in Figure 41, perform the following tasks:
· Configure the local clock of Device A as a reference source, with stratum level 2.
· Configure Device B to operate in client mode and specify Device A as the NTP server of Device B.
· Configure NTP authentication on both Device A and Device B.
Configuration procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each other, as shown in Figure 41. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
3. Configure Device B:
# Enable the NTP service.
<DeviceB> system-view
[DeviceB] ntp-service enable
# Enable NTP authentication on Device B.
[DeviceB] ntp-service authentication enable
# Set an authentication key, and input the key in plain text.
[DeviceB] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey
# Specify the key as a trusted key.
[DeviceB] ntp-service reliable authentication-keyid 42
# Specify Device A as the NTP server of Device B, and associate the server with key 42.
[DeviceB] ntp-service unicast-server 1.0.1.11 authentication-keyid 42
Before Device B can synchronize its clock to that of Device A, enable NTP authentication for Device A.
4. Configure NTP authentication on Device A:
# Enable NTP authentication.
[DeviceA] ntp-service authentication enable
# Set an authentication key, and input the key in plain text.
[DeviceA] ntp-service authentication-keyid 42 authentication-mode md5 simple aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 42
5. Verify the configuration:
# Verify that Device B has synchronized to Device A, and the clock stratum level is 3 on Device B and 2 on Device A.
[DeviceB] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 1.0.1.11
Local mode: client
Reference clock ID: 1.0.1.11
Leap indicator: 00
Clock jitter: 0.005096 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00655 ms
Root dispersion: 1.15869 ms
Reference time: d0c62687.ab1bba7d Wed, Dec 29 2010 21:28:39.668
# Verify that an IPv4 NTP association has been established between Device B and Device A.
[DeviceB] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]1.0.1.11 127.127.1.0 2 1 64 519 -0.0 0.0065 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
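With md5 authentication, each NTP packet carries a message authentication code that both peers verify with the shared key (key ID 42 with value aNiceKey in this example). Per the NTP symmetric-key scheme in RFC 5905, the digest is the MD5 hash of the key concatenated with the NTP packet header. A minimal sketch of that computation (the function name is ours):

```python
import hashlib

def ntp_md5_mac(key: bytes, packet: bytes) -> bytes:
    # RFC 5905 symmetric-key scheme: digest = MD5(key || NTP packet).
    # Both peers must hold the same key value under the same key ID,
    # and the key must be declared trusted on both sides.
    return hashlib.md5(key + packet).digest()

mac = ntp_md5_mac(b"aNiceKey", b"\x00" * 48)  # 48-byte NTP header
print(len(mac))  # 16-byte digest, appended after the key ID field
```

A packet whose digest does not match (wrong key value, or a key ID not marked trusted) is discarded, which is why synchronization fails until step 4 enables matching authentication on Device A.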
Configuration example for NTP broadcast mode with authentication
Network requirements
As shown in Figure 42, Switch C functions as the NTP server for multiple devices on different network segments and synchronizes the time for the devices. Switch A and Switch B authenticate the reference source.
· Configure Switch C's local clock as a reference source, with stratum level 3.
· Configure Switch C to operate in broadcast server mode and send broadcast messages from VLAN-interface 2.
· Configure Switch A and Switch B to operate in broadcast client mode and receive broadcast messages through VLAN-interface 2.
· Enable NTP authentication on Switch A, Switch B, and Switch C.
Configuration procedure
1. Assign an IP address to each interface, and make sure Switch A, Switch B, and Switch C can reach each other, as shown in Figure 42. (Details not shown.)
2. Configure Switch A:
# Enable the NTP service.
<SwitchA> system-view
[SwitchA] ntp-service enable
# Enable NTP authentication on Switch A. Configure an NTP authentication key, with the key ID of 88 and key value of 123456. Input the key in plain text, and specify it as a trusted key.
[SwitchA] ntp-service authentication enable
[SwitchA] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchA] ntp-service reliable authentication-keyid 88
# Configure Switch A to operate in NTP broadcast client mode and receive NTP broadcast messages on VLAN-interface 2.
[SwitchA] interface vlan-interface 2
[SwitchA-Vlan-interface2] ntp-service broadcast-client
3. Configure Switch B:
# Enable the NTP service.
<SwitchB> system-view
[SwitchB] ntp-service enable
# Enable NTP authentication on Switch B. Configure an NTP authentication key, with the key ID of 88 and key value of 123456. Input the key in plain text and specify it as a trusted key.
[SwitchB] ntp-service authentication enable
[SwitchB] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchB] ntp-service reliable authentication-keyid 88
# Configure Switch B to operate in broadcast client mode and receive NTP broadcast messages on VLAN-interface 2.
[SwitchB] interface vlan-interface 2
[SwitchB-Vlan-interface2] ntp-service broadcast-client
4. Configure Switch C:
# Enable the NTP service.
<SwitchC> system-view
[SwitchC] ntp-service enable
# Specify the local clock as the reference source, with stratum level 3.
[SwitchC] ntp-service refclock-master 3
# Configure Switch C to operate in NTP broadcast server mode and use VLAN-interface 2 to send NTP broadcast packets.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server
[SwitchC-Vlan-interface2] quit
5. Verify the configuration:
NTP authentication is enabled on Switch A and Switch B, but not on Switch C, so Switch A and Switch B cannot synchronize their local clocks to Switch C.
# Verify that Switch B has not synchronized to Switch C.
[SwitchB-Vlan-interface2] display ntp-service status
Clock status: unsynchronized
Clock stratum: 16
Reference clock ID: none
6. Enable NTP authentication on Switch C:
# Enable NTP authentication on Switch C. Configure an NTP authentication key, with the key ID of 88 and key value of 123456. Input the key in plain text, and specify it as a trusted key.
[SwitchC] ntp-service authentication enable
[SwitchC] ntp-service authentication-keyid 88 authentication-mode md5 simple 123456
[SwitchC] ntp-service reliable authentication-keyid 88
# Specify Switch C as an NTP broadcast server, and associate the key 88 with Switch C.
[SwitchC] interface vlan-interface 2
[SwitchC-Vlan-interface2] ntp-service broadcast-server authentication-keyid 88
7. Verify the configuration:
# Verify that Switch B has synchronized to Switch C, and the clock stratum level is 4 on Switch B and 3 on Switch C.
[SwitchB-Vlan-interface2] display ntp-service status
Clock status: synchronized
Clock stratum: 4
System peer: 3.0.1.31
Local mode: bclient
Reference clock ID: 3.0.1.31
Leap indicator: 00
Clock jitter: 0.006683 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00127 ms
Root dispersion: 2.89877 ms
Reference time: d0d287a7.3119666f Sat, Jan 8 2011 6:50:15.191
# Verify that an IPv4 NTP association has been established between Switch B and Switch C.
[SwitchB-Vlan-interface2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]3.0.1.31 127.127.1.0 3 3 64 68 -0.0 0.0000 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
Configuration example for MPLS L3VPN network time synchronization in client/server mode
Network requirements
As shown in Figure 43, two MPLS L3VPN instances are present on PE 1 and PE 2: vpn1 and vpn2. CE 1 and CE 3 are devices in VPN 1.
To synchronize time between PE 2 and CE 1 in VPN 1, perform the following tasks:
· Configure CE 1's local clock as a reference source, with stratum level 2.
· Configure CE 1 in the VPN instance vpn1 as the NTP server of PE 2.
Configuration procedure
Before you perform the following configuration, be sure you have completed MPLS L3VPN-related configurations. For information about configuring MPLS L3VPN, see MPLS Configuration Guide.
1. Assign an IP address to each interface, as shown in Figure 43. Make sure CE 1 and PE 1, PE 1 and PE 2, and PE 2 and CE 3 can reach each other. (Details not shown.)
2. Configure CE 1:
# Enable the NTP service.
<CE1> system-view
[CE1] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[CE1] ntp-service refclock-master 2
3. Configure PE 2:
# Enable the NTP service.
<PE2> system-view
[PE2] ntp-service enable
# Specify CE 1 in the VPN instance vpn1 as the NTP server of PE 2.
[PE2] ntp-service unicast-server 10.1.1.1 vpn-instance vpn1
4. Verify the configuration:
# Verify that PE 2 has synchronized to CE 1, with stratum level 3.
[PE2] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 10.1.1.1
Local mode: client
Reference clock ID: 10.1.1.1
Leap indicator: 00
Clock jitter: 0.005096 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00655 ms
Root dispersion: 1.15869 ms
Reference time: d0c62687.ab1bba7d Wed, Dec 29 2010 21:28:39.668
# Verify that an IPv4 NTP association has been established between PE 2 and CE 1.
[PE2] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]10.1.1.1 127.127.1.0 2 1 64 519 -0.0 0.0065 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
# Verify that server 127.0.0.1 has synchronized to server 10.1.1.1, and server 10.1.1.1 has synchronized to the local clock.
[PE2] display ntp-service trace
Server 127.0.0.1
Stratum 3 , jitter 0.000, synch distance 796.50.
Server 10.1.1.1
Stratum 2 , jitter 939.00, synch distance 0.0000.
RefID 127.127.1.0
Configuration example for MPLS L3VPN network time synchronization in symmetric active/passive mode
Network requirements
As shown in Figure 44, two VPN instances are present on PE 1 and PE 2: vpn1 and vpn2. CE 1 and CE 3 belong to VPN 1.
To synchronize time between PE 1 and CE 1 in VPN 1, perform the following tasks:
· Configure CE 1's local clock as a reference source, with stratum level 2.
· Configure CE 1 in the VPN instance vpn1 as the symmetric-passive peer of PE 1.
Configuration procedure
Before you perform the following configuration, be sure you have completed MPLS L3VPN-related configurations. For information about configuring MPLS L3VPN, see MPLS Configuration Guide.
1. Assign an IP address to each interface, as shown in Figure 44. Make sure CE 1 and PE 1, PE 1 and PE 2, and PE 2 and CE 3 can reach each other. (Details not shown.)
2. Configure CE 1:
# Enable the NTP service.
<CE1> system-view
[CE1] ntp-service enable
# Specify the local clock as the reference source, with stratum level 2.
[CE1] ntp-service refclock-master 2
3. Configure PE 1:
# Enable the NTP service.
<PE1> system-view
[PE1] ntp-service enable
# Specify CE 1 in the VPN instance vpn1 as the symmetric-passive peer of PE 1.
[PE1] ntp-service unicast-peer 10.1.1.1 vpn-instance vpn1
4. Verify the configuration:
# Verify that PE 1 has synchronized to CE 1, with stratum level 3.
[PE1] display ntp-service status
Clock status: synchronized
Clock stratum: 3
System peer: 10.1.1.1
Local mode: sym_active
Reference clock ID: 10.1.1.1
Leap indicator: 00
Clock jitter: 0.005096 s
Stability: 0.000 pps
Clock precision: 2^-18
Root delay: 0.00655 ms
Root dispersion: 1.15869 ms
Reference time: d0c62687.ab1bba7d Wed, Dec 29 2010 21:28:39.668
# Verify that an IPv4 NTP association has been established between PE 1 and CE 1.
[PE1] display ntp-service sessions
source reference stra reach poll now offset delay disper
********************************************************************************
[1245]10.1.1.1 127.127.1.0 2 1 64 519 -0.0 0.0000 0.0
Notes: 1 source(master),2 source(peer),3 selected,4 candidate,5 configured.
Total sessions: 1
# Verify that server 127.0.0.1 has synchronized to server 10.1.1.1, and server 10.1.1.1 has synchronized to the local clock.
[PE1] display ntp-service trace
Server 127.0.0.1
Stratum 3 , jitter 0.000, synch distance 796.50.
Server 10.1.1.1
Stratum 2 , jitter 939.00, synch distance 0.0000.
RefID 127.127.1.0
Configuring SNTP
SNTP is a simplified, client-only version of NTP specified in RFC 4330. SNTP supports only the client/server mode. An SNTP-enabled device can receive time from NTP servers, but cannot provide time services to other devices.
SNTP uses the same packet format and packet exchange procedure as NTP, but provides faster synchronization at the price of time accuracy.
If you specify multiple NTP servers for an SNTP client, the server with the best stratum is selected. If multiple servers are at the same stratum, the NTP server whose time packet is first received is selected.
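The selection rule above can be sketched as follows (a simplification of the device's internal logic; the function name is ours):

```python
def select_server(replies):
    """replies: (server_address, stratum) tuples in the order their
    first time packets arrived. A lower stratum is better; min() is
    stable, so on a stratum tie the earliest reply wins, matching the
    'first received' rule."""
    return min(replies, key=lambda r: r[1])[0]

# Servers .2 and .3 share the best stratum (2); .2 replied first.
print(select_server([("10.0.0.1", 3), ("10.0.0.2", 2), ("10.0.0.3", 2)]))
```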
Configuration restrictions and guidelines
When you configure SNTP, follow these restrictions and guidelines:
· You cannot configure both NTP and SNTP on the same device.
· Make sure you use the clock protocol command to specify the time protocol as NTP.
Configuration task list
Tasks at a glance |
(Required.) Enabling the SNTP service |
(Required.) Specifying an NTP server for the device |
(Optional.) Configuring SNTP authentication |
Enabling the SNTP service
The NTP service and SNTP service are mutually exclusive. You can enable only one of them at a time.
To enable the SNTP service:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the SNTP service. |
sntp enable |
By default, the SNTP service is not enabled. |
Specifying an NTP server for the device
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Specify an NTP server for the device. |
· For IPv4:
· For IPv6: |
By default, no NTP server is specified for the device. Repeat this step to specify multiple NTP servers. To use authentication, you must specify the authentication-keyid keyid option. |
To use an NTP server as the time source, make sure its clock has been synchronized. If the stratum level of the NTP server is greater than or equal to that of the client, the client does not synchronize with the NTP server.
Configuring SNTP authentication
SNTP authentication ensures that an SNTP client is synchronized only to an authenticated trustworthy NTP server.
Follow these guidelines when you configure SNTP authentication:
· Enable authentication on both the NTP server and the SNTP client.
· Use the same authentication key ID, authentication algorithm, and key on the NTP server and SNTP client. Specify the key as a trusted key on both the NTP server and the SNTP client. For information about configuring NTP authentication on an NTP server, see "Configuring NTP."
· On the SNTP client, associate the specified key with NTP server. Make sure the server is allowed to use the key ID for authentication on the client.
With authentication disabled, the SNTP client can synchronize with the NTP server regardless of whether authentication is enabled on the NTP server.
To configure SNTP authentication on the SNTP client:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable SNTP authentication. |
sntp authentication enable |
By default, SNTP authentication is disabled. |
3. Configure an SNTP authentication key. |
sntp authentication-keyid keyid authentication-mode { hmac-sha-1 | hmac-sha-256 | hmac-sha-384 | hmac-sha-512 | md5 } { cipher | simple } string [ acl ipv4-acl-number | ipv6 acl ipv6-acl-number ] * |
By default, no SNTP authentication key exists. |
4. Specify the key as a trusted key. |
sntp reliable authentication-keyid keyid |
By default, no trusted key is specified. |
5. Associate the SNTP authentication key with an NTP server. |
· For IPv4:
· For IPv6: |
By default, no NTP server is specified. |
Displaying and maintaining SNTP
Execute display commands in any view.
Task |
Command |
Display information about all IPv6 SNTP associations. |
display sntp ipv6 sessions |
Display information about all IPv4 SNTP associations. |
display sntp sessions |
SNTP configuration example
Network requirements
As shown in Figure 45, perform the following tasks:
· Configure the local clock of Device A as a reference source, with stratum level 2.
· Configure Device B to operate in SNTP client mode, and specify Device A as the NTP server.
· Configure NTP authentication on Device A and SNTP authentication on Device B.
Configuration procedure
1. Assign an IP address to each interface, and make sure Device A and Device B can reach each other, as shown in Figure 45. (Details not shown.)
2. Configure Device A:
# Enable the NTP service.
<DeviceA> system-view
[DeviceA] ntp-service enable
# Configure the local clock of Device A as a reference source, with stratum level 2.
[DeviceA] ntp-service refclock-master 2
# Enable NTP authentication on Device A.
[DeviceA] ntp-service authentication enable
# Configure an NTP authentication key, with the key ID of 10 and key value of aNiceKey. Input the key in plain text.
[DeviceA] ntp-service authentication-keyid 10 authentication-mode md5 simple aNiceKey
# Specify the key as a trusted key.
[DeviceA] ntp-service reliable authentication-keyid 10
3. Configure Device B:
# Enable the SNTP service.
<DeviceB> system-view
[DeviceB] sntp enable
# Enable SNTP authentication on Device B.
[DeviceB] sntp authentication enable
# Configure an SNTP authentication key, with the key ID of 10 and key value of aNiceKey. Input the key in plain text.
[DeviceB] sntp authentication-keyid 10 authentication-mode md5 simple aNiceKey
# Specify the key as a trusted key.
[DeviceB] sntp reliable authentication-keyid 10
# Specify Device A as the NTP server of Device B, and associate the server with key 10.
[DeviceB] sntp unicast-server 1.0.1.11 authentication-keyid 10
4. Verify the configuration:
# Verify that an SNTP association has been established between Device B and Device A, and Device B has synchronized to Device A.
[DeviceB] display sntp sessions
NTP server Stratum Version Last receive time
1.0.1.11 2 4 Tue, May 17 2011 9:11:20.833 (Synced)
Configuring SNMP
Overview
Simple Network Management Protocol (SNMP) is an Internet standard protocol widely used for a management station to access and operate the devices on a network, regardless of their vendors, physical characteristics, and interconnect technologies.
SNMP enables network administrators to read and set the variables on managed devices for state monitoring, troubleshooting, statistics collection, and other management purposes.
SNMP framework
The SNMP framework contains the following elements:
· SNMP manager—Works on an NMS to monitor and manage the SNMP-capable devices in the network.
· SNMP agent—Works on a managed device to receive and handle requests from the NMS, and sends notifications to the NMS when events, such as an interface state change, occur.
· Management Information Base (MIB)—Specifies the variables (for example, interface status) maintained by the SNMP agent for the SNMP manager to read and set.
Figure 46 Relationship between NMS, agent, and MIB
MIB and view-based MIB access control
A MIB stores variables called "nodes" or "objects" in a tree hierarchy and identifies each node with a unique OID. An OID is a dotted numeric string that uniquely identifies the path from the root node to a leaf node. For example, object B in Figure 47 is uniquely identified by the OID {1.2.1.1}.
A MIB view represents a set of MIB objects (or MIB object hierarchies) with certain access privileges and is identified by a view name. The MIB objects included in the MIB view are accessible while those excluded from the MIB view are inaccessible.
A MIB view can have multiple view records each identified by a view-name oid-tree pair.
You control access to the MIB by assigning MIB views to SNMP groups or communities.
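To illustrate how included and excluded view records combine, access can be modeled as "the most specific matching record decides". This is a simplified sketch (real view records can also carry subtree masks, which are ignored here):

```python
def in_subtree(oid: str, subtree: str) -> bool:
    """True if oid equals the subtree root or lies beneath it."""
    o, s = oid.split("."), subtree.split(".")
    return o[:len(s)] == s

def accessible(oid: str, records: dict) -> bool:
    """records maps an oid-tree string to True (included) or False
    (excluded). The longest (most specific) matching record decides;
    with no matching record, the object is inaccessible."""
    matches = [t for t in records if in_subtree(oid, t)]
    return records[max(matches, key=len)] if matches else False

# A ViewDefault-like view: iso (1) included, snmpUsmMIB subtree excluded.
view = {"1": True, "1.3.6.1.6.3.15": False}
print(accessible("1.3.6.1.2.1.1.1", view))   # True  (under iso)
print(accessible("1.3.6.1.6.3.15.1", view))  # False (excluded subtree)
```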
SNMP operations
SNMP provides the following basic operations:
· Get—NMS retrieves the SNMP object nodes in an agent MIB.
· Set—NMS modifies the value of an object node in an agent MIB.
· Notification—SNMP agent sends traps or informs to report events to the NMS. The difference between these two types of notification is that informs require acknowledgment but traps do not. Traps are available in SNMPv1, SNMPv2c, and SNMPv3, but informs are available only in SNMPv2c and SNMPv3.
Protocol versions
SNMPv1, SNMPv2c, and SNMPv3 are supported in non-FIPS mode. Only SNMPv3 is supported in FIPS mode. An NMS and an SNMP agent must use the same SNMP version to communicate with each other.
· SNMPv1—Uses community names for authentication. To access an SNMP agent, an NMS must use the same community name as set on the SNMP agent. If the community name used by the NMS differs from the community name set on the agent, the NMS cannot establish an SNMP session to access the agent or receive traps from the agent.
· SNMPv2c—Uses community names for authentication. SNMPv2c is compatible with SNMPv1, but supports more operation types, data types, and error codes.
· SNMPv3—Uses a user-based security model (USM) to secure SNMP communication. You can configure authentication and privacy mechanisms to authenticate and encrypt SNMP packets for integrity, authenticity, and confidentiality.
Access control modes
SNMP uses the following modes to control access to MIB objects:
· View-based Access Control Model—The VACM mode controls access to MIB objects by assigning MIB views to SNMP communities or users.
· Role-based access control—The RBAC mode controls access to MIB objects by assigning user roles to SNMP communities or users.
◦ SNMP communities or users with the predefined user role network-admin or level-15 have read and write access to all MIB objects.
◦ SNMP communities or users with the predefined user role network-operator have read-only access to all MIB objects.
◦ SNMP communities or users with a non-predefined user role have user-assigned access rights. To create a non-predefined user role, use the role command. To assign MIB object access rights to the user role, use the rule command.
RBAC mode controls access on a per MIB object basis, and VACM mode controls access on a MIB view basis. As a best practice to enhance MIB security, use RBAC mode.
If you create the same SNMP community or user with both modes multiple times, the most recent configuration takes effect. For more information about RBAC, see Fundamentals Command Reference.
SNMP silence
SNMP silence enables the device to automatically detect and defend against SNMP attacks.
After you enable SNMP, the device automatically starts an SNMP silence timer and counts the number of packets that fail SNMP authentication within 1 minute.
· If the number of the packets is smaller than 100, the device restarts the timer and counting.
· If the number of the packets is equal to or greater than 100, the SNMP module enters a 5-minute silence period, during which the device does not respond to any SNMP packets. After the 5 minutes expire, the device restarts the timer and counting.
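The timer-and-counter behavior described above can be sketched as a small state machine. This is our own illustration of the documented logic, not the device software (class and method names are hypothetical):

```python
import time

AUTH_FAIL_THRESHOLD = 100   # authentication failures per 1-minute window
SILENCE_SECONDS = 5 * 60    # length of the silence period

class SnmpSilence:
    """Sketch of the SNMP silence logic: count authentication failures
    per minute; at 100 or more, ignore all SNMP packets for 5 minutes,
    then restart the timer and the count."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.window_start = now()
        self.failures = 0
        self.silent_until = 0.0

    def on_packet(self, auth_ok: bool) -> bool:
        """Return True if the SNMP packet should be processed."""
        t = self.now()
        if t < self.silent_until:
            return False                    # silence period: ignore all SNMP
        if t - self.window_start >= 60:
            self.window_start, self.failures = t, 0   # restart timer + count
        if not auth_ok:
            self.failures += 1
            if self.failures >= AUTH_FAIL_THRESHOLD:
                # Enter the 5-minute silence, then restart timer and count.
                self.silent_until = t + SILENCE_SECONDS
                self.window_start, self.failures = self.silent_until, 0
                return False
        return auth_ok
```

Injecting a fake clock (the `now` parameter) makes the window and silence transitions easy to exercise without waiting in real time.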
FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more information about FIPS mode, see Security Configuration Guide.
Configuring SNMP basic parameters
SNMPv3 differs from SNMPv1 and SNMPv2c in many ways. Their configuration procedures are described in separate sections.
Configuring SNMPv1 or SNMPv2c basic parameters
SNMPv1 and SNMPv2c settings are not supported in FIPS mode.
To configure SNMPv1 or SNMPv2c basic parameters:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Enable the SNMP agent. |
snmp-agent |
By default, the SNMP agent is disabled. The SNMP agent is enabled when you use any command that begins with snmp-agent except for the snmp-agent calculate-password command. |
3. (Optional.) Configure the system contact. |
snmp-agent sys-info contact sys-contact |
By default, the system contact is Hangzhou H3C Tech. Co., Ltd. |
4. (Optional.) Configure the system location. |
snmp-agent sys-info location sys-location |
By default, the system location is Hangzhou, China. |
5. Enable SNMPv1 or SNMPv2c. |
snmp-agent sys-info version { all | { v1 | v2c } * } |
By default, SNMPv3 is enabled. |
6. (Optional.) Set a local engine ID. |
snmp-agent local-engineid engineid |
By default, the local engine ID is the company ID plus the device ID. The device ID varies by device model. |
7. (Optional.) Set an engine ID for a remote SNMP entity. |
snmp-agent remote { ipv4-address | ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] engineid engineid |
By default, no remote entity engine IDs exist. This step is required for the device to send SNMPv1 or SNMPv2c notifications to a host, typically an NMS. |
8. (Optional.) Create or update a MIB view. |
snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ] |
By default, the MIB view ViewDefault is predefined. In this view, all MIB objects in the iso subtree are accessible except the snmpUsmMIB, snmpVacmMIB, and snmpModules.18 subtrees. Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB subtree masks multiple times, the most recent configuration takes effect. In addition to the four subtree records in the default MIB view, you can create up to 16 unique MIB view records. |
9. Configure the SNMP access right. |
· (Method 1.) Create an SNMP community: snmp-agent community { read | write } community-name · (Method 2.) Create an SNMPv1/v2c group, and add users to the group: a. snmp-agent group { v1 | v2c } group-name [ read-view view-name ] [ write-view view-name ] [ notify-view view-name ] [ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name ipv6-acl-name } ] * b. snmp-agent usm-user { v1 | v2c } user-name group-name [ acl { ipv4-acl-number | name ipv4-acl-name } | acl ipv6 { ipv6-acl-number | name ipv6-acl-name } ] * |
By default, no SNMP group or SNMP community exists. The username in method 2 serves the same purpose as the community name in method 1. Whichever method you use, make sure the configured name is the same as the community name on the NMS. |
10. (Optional.) Create an SNMP context. |
snmp-agent context context-name |
By default, no SNMP contexts exist. |
11. (Optional.) Map an SNMP community to an SNMP context. |
snmp-agent community-map community-name context context-name |
By default, no mapping exists between an SNMP community and an SNMP context. |
12. (Optional.) Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle. |
snmp-agent packet max-size byte-count |
By default, an SNMP agent can process SNMP packets with a maximum size of 1500 bytes. |
13. (Optional.) Specify the UDP port for receiving SNMP packets. |
snmp-agent port port-num |
By default, the device uses UDP port 161 for receiving SNMP packets. |
14. (Optional.) Configure SNMP agent alive notification sending and set the sending interval. |
snmp-agent trap periodical-interval interval |
By default, sending SNMP agent alive notifications is enabled and the sending interval is 60 seconds. |
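A MIB view record (step 8 above) grants or denies access to an OID subtree, optionally refined by a subtree mask whose bits mark which arcs must match exactly. The helper below is a hypothetical Python sketch of that matching idea; the device's actual hex mask encoding is not modeled:

```python
def oid_in_subtree(oid, subtree, mask=None):
    """Return True if `oid` falls under `subtree`.

    `mask` is an optional list of bits, one per subtree arc:
    1 = the arc must match exactly, 0 = wildcard (any value).
    This mirrors SNMP view-based matching conceptually only.
    """
    if len(oid) < len(subtree):
        return False
    if mask is None:
        mask = [1] * len(subtree)
    return all(m == 0 or o == s
               for o, s, m in zip(oid, subtree, mask))
```

For example, sysName.0 (1.3.6.1.2.1.1.5.0) falls under the system subtree (1.3.6.1.2.1.1), and a 0 bit in the mask makes the corresponding arc match any value.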
Configuring SNMPv3 basic parameters
SNMPv3 users are managed in groups. All SNMPv3 users in a group share the same security model, but can use different authentication and privacy key settings. To implement a security model for a user and avoid SNMP communication failures, make sure the security model configuration for the group and the security key settings for the user are compliant with Table 7 and match the settings on the NMS.
Table 7 Basic security setting requirements for different security models
Security model |
Security model keyword for the group |
Security key settings for the user |
Remarks |
Authentication with privacy |
privacy |
Authentication key, privacy key |
If the authentication key or the privacy key is not configured, SNMP communication will fail. |
Authentication without privacy |
authentication |
Authentication key |
If no authentication key is configured, SNMP communication will fail. The privacy key (if any) for the user does not take effect. |
No authentication, no privacy |
Neither authentication nor privacy |
None |
The authentication and privacy keys, if configured, do not take effect. |
To configure SNMPv3 basic parameters:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Enable the SNMP agent. |
snmp-agent |
By default, the SNMP agent is disabled. The SNMP agent is enabled when you use any command that begins with snmp-agent except for the snmp-agent calculate-password command. |
3. (Optional.) Configure the system contact. |
snmp-agent sys-info contact sys-contact |
By default, the system contact is Hangzhou H3C Tech. Co., Ltd. |
4. (Optional.) Configure the system location. |
snmp-agent sys-info location sys-location |
By default, the system location is Hangzhou, China. |
5. Enable SNMPv3. |
snmp-agent sys-info version { all | { v1 | v2c | v3 } * } |
By default, SNMPv3 is enabled. |
6. (Optional.) Set a local engine ID. |
snmp-agent local-engineid engineid |
By default, the local engine ID is the company ID plus the device ID. The device ID varies by device model. IMPORTANT: After you change the local engine ID, the existing SNMPv3 users and encrypted keys become invalid, and you must reconfigure them. |
7. (Optional.) Set an engine ID for a remote SNMP entity. |
snmp-agent remote { ipv4-address | ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] engineid engineid |
By default, no remote entity engine IDs exist. This step is required for the device to send SNMPv3 notifications to a host, typically NMS. |
8. (Optional.) Create or update a MIB view. |
snmp-agent mib-view { excluded | included } view-name oid-tree [ mask mask-value ] |
By default, the MIB view ViewDefault is predefined. In this view, all the MIB objects in the iso subtree but the snmpUsmMIB, snmpVacmMIB, and snmpModules.18 subtrees are accessible. Each view-name oid-tree pair represents a view record. If you specify the same record with different MIB sub-tree masks multiple times, the most recent configuration takes effect. Except for the four sub-trees in the default MIB view, you can create up to 16 unique MIB view records. |
9. (Optional.) Create an SNMPv3 group. |
· In non-FIPS mode: · In FIPS mode: |
By default, no SNMP groups exist. |
10. (Optional.) Calculate a digest for the ciphertext key converted from a plaintext key. |
· In non-FIPS mode: · In FIPS mode: |
N/A |
11. Create an SNMPv3 user. |
· In non-FIPS mode (in VACM mode): · In non-FIPS mode (in RBAC mode): · In FIPS mode (in VACM mode): · In FIPS mode (in RBAC mode): |
If the cipher keyword is specified, the arguments auth-password and priv-password are used as encrypted keys. To send notifications to an SNMPv3 NMS, you must specify the remote keyword. |
12. (Optional.) Assign a user role to an SNMPv3 user created in RBAC mode. |
snmp-agent usm-user v3 user-name user-role role-name |
By default, an SNMPv3 user has the user role that was assigned to it when the user was created. |
13. (Optional.) Create an SNMP context. |
snmp-agent context context-name |
By default, no SNMP contexts exist. |
14. (Optional.) Configure the maximum SNMP packet size (in bytes) that the SNMP agent can handle. |
snmp-agent packet max-size byte-count |
By default, an SNMP agent can process SNMP packets with a maximum size of 1500 bytes. |
15. (Optional.) Specify the UDP port for receiving SNMP packets. |
snmp-agent port port-num |
By default, the device uses UDP port 161 for receiving SNMP packets. |
16. (Optional.) Configure SNMP agent alive notification sending and set the sending interval. |
snmp-agent trap periodical-interval interval |
By default, sending SNMP agent alive notifications is enabled and the sending interval is 60 seconds. |
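The warning in step 6 (changing the local engine ID invalidates existing SNMPv3 users and encrypted keys) follows from how SNMPv3 derives keys: the user's password is expanded and hashed, and the result is then bound to the engine ID (RFC 3414 key localization). A minimal Python sketch:

```python
import hashlib

def localized_key(password: bytes, engine_id: bytes, algo="sha1") -> bytes:
    """RFC 3414 password-to-key plus key localization.

    Because the engine ID is mixed into the final digest, a key
    derived for one engine ID is useless under another, which is
    why changing the local engine ID invalidates existing keys.
    """
    h = hashlib.new(algo)
    # Expand the password to 1,048,576 bytes and hash it (Ku).
    reps, rem = divmod(1_048_576, len(password))
    h.update(password * reps + password[:rem])
    ku = h.digest()
    # Localize Ku to this engine: Kul = H(Ku || engineID || Ku).
    return hashlib.new(algo, ku + engine_id + ku).digest()
```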
Configuring SNMP logging
Enable SNMP logging only if necessary. SNMP logging is memory-intensive and might impact device performance.
The SNMP agent logs Get requests, Set requests, Set responses, SNMP notifications, and SNMP authentication failures, but does not log Get responses.
· Get operation—The agent logs the IP address of the NMS, name of the accessed node, and node OID.
· Set operation—The agent logs the IP address of the NMS, name of the accessed node, node OID, variable value, and the error code and index for the Set operation.
· Notification tracking—The agent logs the SNMP notifications after sending them to the NMS.
· SNMP authentication failure—The agent logs related information when an NMS fails to be authenticated by the agent.
The SNMP module sends these logs to the information center. You can configure the information center to output these messages to certain destinations, such as the console and the log buffer. The total output size for the node field (MIB node name) and the value field (value of the MIB node) in each log entry is 1024 bytes. If this limit is exceeded, the information center truncates the data in the fields. For more information about the information center, see "Configuring the information center."
To configure SNMP logging:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Enable SNMP logging. |
snmp-agent log { all | authfail | get-operation | set-operation } |
By default, SNMP logging is disabled. |
3. (Optional.) Enable SNMP notification logging. |
snmp-agent trap log |
By default, SNMP notification logging is disabled. |
Configuring SNMP notifications
The SNMP agent sends notifications (traps and informs) to inform the NMS of significant events, such as link state changes and user logins or logouts. Unless otherwise stated, the trap keyword in the command line includes both traps and informs.
Enabling SNMP notifications
Enable an SNMP notification only if necessary. SNMP notifications are memory-intensive and might affect device performance.
To generate linkUp or linkDown notifications when the link state of an interface changes, you must perform the following tasks:
· Enable linkUp or linkDown notification globally by using the snmp-agent trap enable standard [ linkdown | linkup ] * command.
· Enable linkUp or linkDown notification on the interface by using the enable snmp trap updown command.
After you enable notifications for a module, whether the module generates notifications also depends on the configuration of the module. For more information, see the configuration guide for each module.
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable SNMP notifications. |
snmp-agent trap enable [ configuration | protocol | standard [ authentication | coldstart | linkdown | linkup | warmstart ] * | system ] |
By default, SNMP configuration notifications, standard notifications, and system notifications are enabled. Whether other SNMP notifications are enabled varies by modules. |
3. Enter interface view. |
interface interface-type interface-number |
N/A |
4. Enable link state notifications. |
enable snmp trap updown |
By default, link state notifications are enabled. |
Configuring the SNMP agent to send notifications to a host
You can configure the SNMP agent to send notifications as traps or informs to a host, typically an NMS, for analysis and management. Traps are less reliable and use fewer resources than informs, because an NMS does not send an acknowledgment when it receives a trap.
Configuration guidelines
When network congestion occurs or the destination is not reachable, the SNMP agent buffers notifications in a queue. You can set the queue size and the notification lifetime (the maximum time that a notification can stay in the queue). A notification is deleted when its lifetime expires. When the notification queue is full, the oldest notifications are automatically deleted.
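The buffering rules above (lifetime expiry plus oldest-first eviction when the queue is full) can be sketched as follows. The function name is illustrative, and the defaults mirror the documented defaults of 100 entries and 120 seconds:

```python
from collections import deque

def enqueue(queue, notif, now, size=100, lifetime=120):
    """Buffer a notification, modeling the two drop rules:
    expired entries are deleted, and a full queue evicts the
    oldest entry first. Entries are (timestamp, notification)."""
    # Drop notifications whose lifetime has expired.
    while queue and now - queue[0][0] >= lifetime:
        queue.popleft()
    # A full queue drops the oldest notification to make room.
    if len(queue) >= size:
        queue.popleft()
    queue.append((now, notif))
```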
You can extend standard linkUp/linkDown notifications to include interface description and interface type, but must make sure the NMS supports the extended SNMP messages.
To send informs, make sure the following requirements are met:
· The SNMP agent and the NMS use SNMPv2c or SNMPv3.
· If SNMPv3 is used, you must configure the SNMP engine ID of the NMS when you configure SNMPv3 basic settings. Also, specify the IP address of the SNMP engine when you create the SNMPv3 user.
Configuration prerequisites
Configure the SNMP agent with the same basic SNMP settings as the NMS. If SNMPv1 or SNMPv2c is used, you must configure a community name. If SNMPv3 is used, you must configure an SNMPv3 user, a MIB view, and a remote SNMP engine ID associated with the SNMPv3 user for notifications.
Make sure the SNMP agent and the NMS can reach each other.
Configuration procedure
To configure the SNMP agent to send notifications to a host:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure a target host. |
· (In non-FIPS mode.) Send traps to the target host: · (In FIPS mode.) Send traps to the target host: · (In non-FIPS mode.) Send informs to the target host: · (In FIPS mode.) Send informs to the target host: |
By default, no target host is configured. |
3. (Optional.) Configure a source address for notifications. |
snmp-agent { inform | trap } source interface-type { interface-number | interface-number.subnumber } |
By default, SNMP uses the IP address of the outgoing routed interface as the source IP address. |
4. (Optional.) Enable extended linkUp/linkDown notifications. |
snmp-agent trap if-mib link extended |
By default, the SNMP agent sends standard linkUp/linkDown notifications. |
5. (Optional.) Set the notification queue size. |
snmp-agent trap queue-size size |
By default, the notification queue can hold 100 notification messages. |
6. (Optional.) Set the notification lifetime. |
snmp-agent trap life seconds |
The default notification lifetime is 120 seconds. |
Displaying the SNMP settings
Execute display commands in any view.
Task |
Command |
Display SNMP agent system information. |
display snmp-agent sys-info [ contact | location | version ] * |
Display SNMP agent statistics. |
display snmp-agent statistics |
Display the local engine ID. |
display snmp-agent local-engineid |
Display SNMP group information. |
display snmp-agent group [ group-name ] |
Display remote engine IDs. |
display snmp-agent remote [ { ipv4-address | ipv6 ipv6-address } [ vpn-instance vpn-instance-name ] ] |
Display basic information about the notification queue. |
display snmp-agent trap queue |
Display the modules that can generate notifications and their notification enabling status. |
display snmp-agent trap-list |
Display SNMPv3 user information. |
display snmp-agent usm-user [ engineid engineid | username user-name | group group-name ] * |
Display SNMPv1 or SNMPv2c community information. (This command is not supported in FIPS mode.) |
display snmp-agent community [ read | write ] |
Display MIB view information. |
display snmp-agent mib-view [ exclude | include | viewname view-name ] |
Display SNMP MIB node information. |
display snmp-agent mib-node [ details | index-node | trap-node | verbose ] |
Display SNMP contexts. |
display snmp-agent context [ context-name ] |
SNMPv1/SNMPv2c configuration example
The SNMPv1 configuration procedure is the same as the SNMPv2c configuration procedure. This example uses SNMPv1, and is not available in FIPS mode.
Network requirements
As shown in Figure 48, the NMS (1.1.1.2/24) uses SNMPv1 to manage the SNMP agent (1.1.1.1/24), and the agent automatically sends notifications to report events to the NMS.
Configuration procedure
1. Configure the SNMP agent:
# Configure the IP address of the agent and make sure the agent and the NMS can reach each other. (Details not shown.)
# Specify SNMPv1, and create the read-only community public and the read and write community private.
<Agent> system-view
[Agent] snmp-agent sys-info version v1
[Agent] snmp-agent community read public
[Agent] snmp-agent community write private
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable SNMP notifications, specify the NMS at 1.1.1.2 as an SNMP trap destination, and use public as the community name. (To make sure the NMS can receive traps, specify the same SNMP version in the snmp-agent target-host command as is configured on the NMS.)
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname public v1
2. Configure the SNMP NMS:
- Specify SNMPv1.
- Create the read-only community public, and create the read and write community private.
- Set the timeout timer and maximum number of retries as needed.
For information about configuring the NMS, see the NMS manual.
|
NOTE: The SNMP settings on the agent and the NMS must match. |
Verifying the configuration
# Try to get the MTU value of the NULL0 interface from the agent. The attempt succeeds.
Send request to 1.1.1.1/161 ...
Protocol version: SNMPv1
Operation: Get
Request binding:
1: 1.3.6.1.2.1.2.2.1.4.135471
Response binding:
1: Oid=ifMtu.135471 Syntax=INT Value=1500
Get finished
# Use a wrong community name to get the value of a MIB node on the agent. You can see an authentication failure trap on the NMS.
1.1.1.1/2934 V1 Trap = authenticationFailure
SNMP Version = V1
Community = public
Command = Trap
Enterprise = 1.3.6.1.4.1.43.1.16.4.3.50
GenericID = 4
SpecificID = 0
Time Stamp = 8:35:25.68
SNMPv3 configuration example
Network requirements
As shown in Figure 49, the NMS (1.1.1.2/24) uses SNMPv3 to monitor and manage the interface status of the agent (1.1.1.1/24). The agent automatically sends notifications to report events to the NMS. The default UDP port 162 is used for SNMP notifications.
The NMS and the agent perform authentication when they establish an SNMP session. The authentication algorithm is SHA-1 and the authentication key is 123456TESTauth&!. The NMS and the agent also encrypt the SNMP packets between them by using the AES algorithm and the privacy key 123456TESTencr&!.
Configuration procedure
Configuring SNMPv3 in RBAC mode
1. Configure the agent:
# Assign IP address 1.1.1.1/24 to the agent and make sure the agent and the NMS can reach each other. (Details not shown.)
# Create user role test, and assign test read-only access to the objects under the snmpMIB node (OID: 1.3.6.1.6.3.1), including the linkUp and linkDown objects.
<Agent> system-view
[Agent] role name test
[Agent-role-test] rule 1 permit read oid 1.3.6.1.6.3.1
# Assign user role test read-only access to the system node (OID: 1.3.6.1.2.1.1) and read-write access to the interfaces node (OID: 1.3.6.1.2.1.2).
[Agent-role-test] rule 2 permit read oid 1.3.6.1.2.1.1
[Agent-role-test] rule 3 permit read write oid 1.3.6.1.2.1.2
[Agent-role-test] quit
# Create SNMPv3 user RBACtest. Assign user role test to RBACtest. Set the authentication algorithm to sha, authentication key to 123456TESTauth&!, encryption algorithm to aes128, and privacy key to 123456TESTencr&!.
[Agent] snmp-agent usm-user v3 RBACtest user-role test simple authentication-mode sha 123456TESTauth&! privacy-mode aes128 123456TESTencr&!
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable notifications on the agent. Specify the NMS at 1.1.1.2 as the notification destination, and RBACtest as the username.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname RBACtest v3 privacy
2. Configure the NMS:
- Specify SNMPv3.
- Create SNMPv3 user RBACtest.
- Enable both authentication and privacy functions.
- Use SHA-1 for authentication and AES for encryption.
- Set the authentication key to 123456TESTauth&! and the privacy key to 123456TESTencr&!.
- Set the timeout timer and maximum number of retries.
For information about configuring the NMS, see the NMS manual.
|
NOTE: The SNMP settings on the agent and the NMS must match. |
Configuring SNMPv3 in VACM mode
1. Configure the agent:
# Assign IP address 1.1.1.1/24 to the agent, and make sure the agent and the NMS can reach each other. (Details not shown.)
# Create SNMPv3 group managev3group and assign managev3group read-only access to the objects under the snmpMIB node (OID: 1.3.6.1.6.3.1) in the test view, including the linkUp and linkDown objects.
<Agent> system-view
[Agent] undo snmp-agent mib-view ViewDefault
[Agent] snmp-agent mib-view included test snmpMIB
[Agent] snmp-agent group v3 managev3group privacy read-view test
# Assign SNMPv3 group managev3group read-write access to the objects under the system node (OID: 1.3.6.1.2.1.1) and interfaces node (OID: 1.3.6.1.2.1.2) in the test view.
[Agent] snmp-agent mib-view included test 1.3.6.1.2.1.1
[Agent] snmp-agent mib-view included test 1.3.6.1.2.1.2
[Agent] snmp-agent group v3 managev3group privacy read-view test write-view test
# Add user VACMtest to SNMPv3 group managev3group, and set the authentication algorithm to sha, authentication key to 123456TESTauth&!, encryption algorithm to aes128, and privacy key to 123456TESTencr&!.
[Agent] snmp-agent usm-user v3 VACMtest managev3group simple authentication-mode sha 123456TESTauth&! privacy-mode aes128 123456TESTencr&!
# Configure contact and physical location information for the agent.
[Agent] snmp-agent sys-info contact Mr.Wang-Tel:3306
[Agent] snmp-agent sys-info location telephone-closet,3rd-floor
# Enable notifications on the agent. Specify the NMS at 1.1.1.2 as the trap destination, and VACMtest as the username.
[Agent] snmp-agent trap enable
[Agent] snmp-agent target-host trap address udp-domain 1.1.1.2 params VACMtest v3 privacy
2. Configure the SNMP NMS:
- Specify SNMPv3.
- Create SNMPv3 user VACMtest.
- Enable both authentication and privacy functions.
- Use SHA-1 for authentication and AES for encryption.
- Set the authentication key to 123456TESTauth&! and the privacy key to 123456TESTencr&!.
- Set the timeout timer and maximum number of retries.
For information about configuring the NMS, see the NMS manual.
|
NOTE: The SNMP settings on the agent and the NMS must match. |
Verifying the configuration
· Use username RBACtest to access the agent.
# Retrieve the value of the sysName node. The value Agent is returned.
# Set the value for the sysName node to Sysname. The operation fails because the NMS does not have write access to the node.
# Shut down or bring up an interface on the agent. The NMS receives linkUp (OID: 1.3.6.1.6.3.1.1.5.4) or linkDown (OID: 1.3.6.1.6.3.1.1.5.3) notifications.
· Use username VACMtest to access the agent.
# Retrieve the value of the sysName node. The value Agent is returned.
# Set the value for the sysName node to Sysname. The operation succeeds.
# Shut down or bring up an interface on the agent. The NMS receives linkUp (OID: 1.3.6.1.6.3.1.1.5.4) or linkDown (OID: 1.3.6.1.6.3.1.1.5.3) notifications.
Configuring RMON
Overview
Remote Network Monitoring (RMON) is an enhancement to SNMP. It enables proactive remote monitoring and management of network devices and subnets. An RMON monitor periodically or continuously collects traffic statistics for the network attached to a port on the managed device. The managed device can automatically send a notification when a statistic crosses an alarm threshold, so the NMS does not need to constantly poll MIB variables and compare the results.
RMON uses SNMP notifications to notify NMSs of alarm conditions, such as exceeding a broadcast traffic threshold. In contrast, SNMP reports changes in function and interface operating status, such as link up, link down, and module failure. For more information about SNMP notifications, see "Configuring SNMP."
H3C devices provide an embedded RMON agent as the RMON monitor. An NMS can perform basic SNMP operations to access the RMON MIB.
RMON groups
Among standard RMON groups, H3C implements the statistics group, history group, event group, alarm group, probe configuration group, and user history group. H3C also implements a private alarm group, which enhances the standard alarm group. The probe configuration group and user history group are not configurable from the CLI. To configure these two groups, you must access the MIB.
Statistics group
The statistics group samples traffic statistics for monitored Ethernet interfaces and stores the statistics in the Ethernet statistics table (ethernetStatsTable). The statistics include:
· Number of collisions.
· CRC alignment errors.
· Number of undersize or oversize packets.
· Number of broadcasts.
· Number of multicasts.
· Number of bytes received.
· Number of packets received.
The statistics in the Ethernet statistics table are cumulative sums.
History group
The history group periodically samples traffic statistics on interfaces and saves the history samples in the history table (etherHistoryTable). The statistics include:
· Bandwidth utilization.
· Number of error packets.
· Total number of packets.
The history table stores traffic statistics collected for each sampling interval.
Event group
The event group controls the generation and notifications of events triggered by the alarms defined in the alarm group and the private alarm group. The following are RMON alarm event handling methods:
· Log—Logs event information (including event time and description) in the event log table so the management device can get the logs through SNMP.
· Trap—Sends an SNMP notification when the event occurs.
· Log-Trap—Logs event information in the event log table and sends an SNMP notification when the event occurs.
· None—Takes no actions.
Alarm group
The RMON alarm group monitors alarm variables, such as the count of incoming packets (etherStatsPkts) on an interface. After you create an alarm entry, the RMON agent samples the value of the monitored alarm variable regularly. If the value of the monitored variable is greater than or equal to the rising threshold, a rising alarm event is triggered. If the value of the monitored variable is smaller than or equal to the falling threshold, a falling alarm event is triggered. The event group defines the action to take on the alarm event.
If an alarm entry crosses a threshold multiple times in succession, the RMON agent generates an alarm event only for the first crossing. For example, if the value of a sampled alarm variable crosses the rising threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising alarm event, as shown in Figure 50.
Figure 50 Rising and falling alarm events
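The first-crossing-only behavior can be sketched as follows (an illustrative Python model, not device code): after a rising event fires, no further rising event is generated until a falling crossing re-arms it, and vice versa.

```python
def alarm_events(samples, rising, falling):
    """Return the (direction, value) events for a series of samples,
    generating an event only on the first crossing in each direction."""
    events = []
    armed_rising = armed_falling = True
    for v in samples:
        if v >= rising and armed_rising:
            events.append(("rising", v))
            # Disarm rising; a falling crossing re-arms it.
            armed_rising, armed_falling = False, True
        elif v <= falling and armed_falling:
            events.append(("falling", v))
            armed_rising, armed_falling = True, False
    return events
```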
Private alarm group
The private alarm group enables you to perform basic math operations on multiple variables, and compare the calculation result with the rising and falling thresholds.
The RMON agent samples variables and takes an alarm action based on a private alarm entry as follows:
1. Samples the private alarm variables in the user-defined formula.
2. Processes the sampled values with the formula.
3. Compares the calculation result with the predefined thresholds, and then takes one of the following actions:
- Triggers the event associated with the rising alarm event if the result is equal to or greater than the rising threshold.
- Triggers the event associated with the falling alarm event if the result is equal to or less than the falling threshold.
If a private alarm entry crosses a threshold multiple times in succession, the RMON agent generates an alarm event only for the first crossing. For example, if the value of a sampled alarm variable crosses the rising threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising alarm event.
Sample types for the alarm group and the private alarm group
The RMON agent supports the following sample types:
· absolute—RMON compares the value of the monitored variable with the rising and falling thresholds at the end of the sampling interval.
· delta—RMON subtracts the value of the monitored variable at the previous sample from the current value, and then compares the difference with the rising and falling thresholds.
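A minimal sketch of the two sample types (illustrative Python, with an invented function name): absolute compares the current value itself with the thresholds, while delta compares the change since the previous sample.

```python
def check(previous, current, sample_type, rising, falling):
    """Apply the absolute or delta sample type, then compare the
    resulting value with the rising and falling thresholds."""
    value = current if sample_type == "absolute" else current - previous
    if value >= rising:
        return "rising"
    if value <= falling:
        return "falling"
    return None
```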
Protocols and standards
· RFC 4502, Remote Network Monitoring Management Information Base Version 2
· RFC 2819, Remote Network Monitoring Management Information Base
Configuring the RMON statistics function
RMON implements the statistics function through the Ethernet statistics group and the history group.
The Ethernet statistics group provides cumulative statistics for a variable from the time the statistics entry is created to the current time. For more information about the configuration, see "Creating an RMON Ethernet statistics entry."
The history group provides statistics that are sampled for a variable for each sampling interval. The history group uses the history control table to control sampling, and it stores samples in the history table. For more information about the configuration, see "Creating an RMON history control entry."
Creating an RMON Ethernet statistics entry
To create an RMON Ethernet statistics entry:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter Ethernet interface view. |
interface interface-type interface-number |
N/A |
3. Create an RMON Ethernet statistics entry. |
rmon statistics entry-number [ owner text ] |
By default, no RMON Ethernet statistics entry exists. You can create only one RMON statistics entry for an Ethernet interface. |
Creating an RMON history control entry
You can configure multiple history control entries for one interface, but you must make sure their entry numbers and sampling intervals are different.
You can create a history control entry even if the specified bucket size exceeds the available history table size. RMON will set the bucket size as close to the requested bucket size as possible.
To create an RMON history control entry:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter Ethernet interface view. |
interface interface-type interface-number |
N/A |
3. Create an RMON history control entry. |
rmon history entry-number buckets number interval interval [ owner text ] |
By default, no RMON history control entries exist. |
Configuring the RMON alarm function
When you configure the RMON alarm function, follow these guidelines:
· To send notifications to the NMS when an alarm is triggered, configure the SNMP agent as described in "Configuring SNMP" before configuring the RMON alarm function.
· For a new event, alarm, or private alarm entry to be created:
- The entry must not have the same set of parameters as an existing entry.
- The maximum number of entries must not be reached.
Table 8 shows the parameters to be compared for duplication and the entry limits.
Table 8 RMON configuration restrictions
Entry |
Parameters to be compared |
Maximum number of entries |
Event |
· Event description (description string) · Event type (log, trap, logtrap, or none) · Community name (security-string) |
60 |
Alarm |
· Alarm variable (alarm-variable) · Sampling interval (sampling-interval) · Sample type (absolute or delta) · Rising threshold (threshold-value1) · Falling threshold (threshold-value2) |
60 |
Private alarm |
· Alarm variable formula (prialarm-formula) · Sampling interval (sampling-interval) · Sample type (absolute or delta) · Rising threshold (threshold-value1) · Falling threshold (threshold-value2) |
50 |
To configure the RMON alarm function:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Create an RMON event entry. |
rmon event entry-number [ description string ] { log | log-trap security-string | none | trap security-string } [ owner text ] |
By default, no RMON event entries exist. |
3. Create an RMON alarm entry. |
· Create an RMON alarm entry: · Create an RMON private alarm entry: |
By default, no RMON alarm entries or RMON private alarm entries exist. You can associate an alarm with an event that has not been created yet. The alarm will trigger the event only after the event is created. |
Displaying and maintaining RMON settings
Execute display commands in any view.
Task |
Command |
Display RMON statistics. |
display rmon statistics [ interface-type interface-number ] |
Display RMON history control entries and history samples. |
display rmon history [ interface-type interface-number ] |
Display RMON alarm entries. |
display rmon alarm [ entry-number ] |
Display RMON private alarm entries. |
display rmon prialarm [ entry-number ] |
Display RMON event entries. |
display rmon event [ entry-number ] |
Display log information for event entries. |
display rmon eventlog [ entry-number ] |
RMON configuration examples
Ethernet statistics group configuration example
Network requirements
As shown in Figure 51, create an RMON Ethernet statistics entry on the device to gather cumulative traffic statistics for HundredGigE 1/0/1.
Configuration procedure
# Create an RMON Ethernet statistics entry for HundredGigE 1/0/1.
<Sysname> system-view
[Sysname] interface hundredgige 1/0/1
[Sysname-HundredGigE1/0/1] rmon statistics 1 owner user1
# Display statistics collected for HundredGigE 1/0/1.
<Sysname> display rmon statistics hundredgige 1/0/1
EtherStatsEntry 1 owned by user1 is VALID.
Interface : HundredGigE1/0/1<ifIndex.3>
etherStatsOctets : 21657 , etherStatsPkts : 307
etherStatsBroadcastPkts : 56 , etherStatsMulticastPkts : 34
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Incoming packets by size:
64 : 235 , 65-127 : 67 , 128-255 : 4
256-511: 1 , 512-1023: 0 , 1024-1518: 0
# Get the traffic statistics from the NMS through SNMP. (Details not shown.)
History group configuration example
Network requirements
As shown in Figure 52, create an RMON history control entry on the device to sample traffic statistics for HundredGigE 1/0/1 every minute.
Configuration procedure
# Create an RMON history control entry to sample traffic statistics every minute for HundredGigE 1/0/1. Retain a maximum of eight samples for the interface in the history statistics table.
<Sysname> system-view
[Sysname] interface hundredgige 1/0/1
[Sysname-HundredGigE1/0/1] rmon history 1 buckets 8 interval 60 owner user1
# Display the history statistics collected for HundredGigE 1/0/1.
[Sysname-HundredGigE1/0/1] display rmon history
HistoryControlEntry 1 owned by user1 is VALID
Sampled interface : HundredGigE1/0/1<ifIndex.3>
Sampling interval : 60(sec) with 8 buckets max
Sampling record 1 :
dropevents : 0 , octets : 834
packets : 8 , broadcast packets : 1
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
Sampling record 2 :
dropevents : 0 , octets : 962
packets : 10 , broadcast packets : 3
multicast packets : 6 , CRC alignment errors : 0
undersize packets : 0 , oversize packets : 0
fragments : 0 , jabbers : 0
collisions : 0 , utilization : 0
# Get the traffic statistics from the NMS through SNMP. (Details not shown.)
Alarm function configuration example
Network requirements
As shown in Figure 53, configure the device to monitor the incoming traffic statistic on HundredGigE 1/0/1, and send RMON alarms when either of the following conditions is met:
· The 5-second delta sample for the traffic statistic crosses the rising threshold (100).
· The 5-second delta sample for the traffic statistic drops below the falling threshold (50).
Configuration procedure
# Configure the SNMP agent (the device) with the same SNMP settings as the NMS at 1.1.1.2. This example uses SNMPv1, read community public, and write community private.
<Sysname> system-view
[Sysname] snmp-agent
[Sysname] snmp-agent community read public
[Sysname] snmp-agent community write private
[Sysname] snmp-agent sys-info version v1
[Sysname] snmp-agent trap enable
[Sysname] snmp-agent trap log
[Sysname] snmp-agent target-host trap address udp-domain 1.1.1.2 params securityname public
# Create an RMON Ethernet statistics entry for HundredGigE 1/0/1.
[Sysname] interface hundredgige 1/0/1
[Sysname-HundredGigE1/0/1] rmon statistics 1 owner user1
[Sysname-HundredGigE1/0/1] quit
# Create an RMON event entry and an RMON alarm entry to send SNMP notifications when the delta sample for 1.3.6.1.2.1.16.1.1.1.4.1 exceeds 100 or drops below 50.
[Sysname] rmon event 1 trap public owner user1
[Sysname] rmon alarm 1 1.3.6.1.2.1.16.1.1.1.4.1 5 delta rising-threshold 100 1 falling-threshold 50 1 owner user1
NOTE: The string 1.3.6.1.2.1.16.1.1.1.4.1 is the object instance for HundredGigE 1/0/1. The digits before the last digit (1.3.6.1.2.1.16.1.1.1.4) represent the object for total incoming traffic statistics. The last digit (1) is the RMON Ethernet statistics entry index for HundredGigE 1/0/1. |
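The OID arithmetic in this note can be expressed as a one-line helper. An illustrative Python sketch (the constant is the etherStatsOctets object OID quoted above; the helper name is made up):

```python
# The alarm variable passed to the rmon alarm command is the object OID
# for etherStatsOctets plus the RMON Ethernet statistics entry index.
ETHER_STATS_OCTETS = "1.3.6.1.2.1.16.1.1.1.4"

def alarm_variable(stats_entry_index):
    """Return the instance OID for a given statistics entry index."""
    return f"{ETHER_STATS_OCTETS}.{stats_entry_index}"

print(alarm_variable(1))   # 1.3.6.1.2.1.16.1.1.1.4.1
```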
# Display the RMON alarm entry.
<Sysname> display rmon alarm 1
AlarmEntry 1 owned by user1 is VALID.
Sample type : delta
Sampled variable : 1.3.6.1.2.1.16.1.1.1.4.1<etherStatsOctets.1>
Sampling interval (in seconds) : 5
Rising threshold : 100(associated with event 1)
Falling threshold : 50(associated with event 1)
Alarm sent upon entry startup : risingOrFallingAlarm
Latest value : 0
# Display statistics for HundredGigE 1/0/1.
<Sysname> display rmon statistics hundredgige 1/0/1
EtherStatsEntry 1 owned by user1 is VALID.
Interface : HundredGigE1/0/1<ifIndex.3>
etherStatsOctets : 57329 , etherStatsPkts : 455
etherStatsBroadcastPkts : 53 , etherStatsMulticastPkts : 353
etherStatsUndersizePkts : 0 , etherStatsOversizePkts : 0
etherStatsFragments : 0 , etherStatsJabbers : 0
etherStatsCRCAlignErrors : 0 , etherStatsCollisions : 0
etherStatsDropEvents (insufficient resources): 0
Incoming packets by size :
64 : 7 , 65-127 : 413 , 128-255 : 35
256-511: 0 , 512-1023: 0 , 1024-1518: 0
The NMS receives the notification when the alarm is triggered.
Configuring the Event MIB
Overview
The Event Management Information Base (Event MIB) provides the ability to monitor MIB objects on a local or remote system by using SNMP. It takes the notification or set action whenever a trigger condition is met.
The Event MIB is an enhancement to remote network monitoring (RMON):
· In addition to threshold tests, the Event MIB provides Boolean and existence tests for event triggers.
· When a trigger condition is met, the Event MIB sends a notification to the NMS, sets the value of a MIB object, or performs both operations.
Monitored objects
In the Event MIB, you can monitor the following MIB objects:
· Table node.
· Conceptual row node.
· Table column node.
· Simple leaf node.
· Parent node of a leaf node.
The monitored objects can be fully specified or wildcarded:
· To monitor a specific instance, for example, ifDescr.2 (the description node for the interface with index 2), specify ifDescr.2 as the monitored object.
· To monitor multiple instances, for example, all instances of the interface description node ifDescr, specify ifDescr as the monitored object and configure it as wildcarded.
Object owner
An object owner can only be an SNMPv3 user. You can assign the object owner the rights to access the monitored objects. For more information about SNMPv3 user access rights, see "Configuring SNMP."
Trigger test
Existence test
An existence test monitors and manages the absence, presence, and change of a MIB object, for example, interface status. When a monitored object is specified, the system reads the value of the monitored object regularly.
· If the test type is Absent, the system triggers an alarm event and takes the specified action when the state of the monitored object changes to absent.
· If the test type is Present, the system triggers an alarm event and takes the specified action when the state of the monitored object changes to present.
· If the test type is Changed, the system triggers an alarm event and takes the specified action when the value of the monitored object changes.
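The three test types can be modeled as a comparison of consecutive samples of the monitored object, where None stands for an absent object. An illustrative Python sketch (a model of the behavior described above, not the device implementation):

```python
# Illustrative model of the Event MIB existence test: compare the
# previous and current sample of a monitored object. None = absent.

def existence_events(prev, curr, test_types):
    """Return the alarm events fired for one sampling cycle.
    test_types: any subset of {"absent", "present", "changed"}."""
    fired = []
    if "absent" in test_types and prev is not None and curr is None:
        fired.append("absent")     # object disappeared
    if "present" in test_types and prev is None and curr is not None:
        fired.append("present")    # object appeared
    if ("changed" in test_types and prev is not None
            and curr is not None and prev != curr):
        fired.append("changed")    # object value changed
    return fired

print(existence_events(None, "up", {"present", "absent"}))   # ['present']
print(existence_events("up", "down", {"changed"}))           # ['changed']
```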
Boolean test
A Boolean test compares the value of the monitored object with the reference value and takes action according to the comparison result. The comparison types include unequal, equal, less, lessorequal, greater, and greaterorequal. For example, if the comparison type is equal, an event is triggered when the value of the monitored object equals the reference value. The event will not be triggered again until the value becomes unequal and comes back to equal.
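The "not triggered again until the value becomes unequal and comes back to equal" behavior is an edge trigger on the comparison result. A Python sketch of that behavior (the comparison types mirror the list above; this is an illustrative model, not device code):

```python
import operator

# Illustrative edge-triggered Boolean test: an event fires only when the
# comparison result changes from False to True between samples.
COMPARISONS = {
    "unequal": operator.ne, "equal": operator.eq,
    "less": operator.lt, "lessorequal": operator.le,
    "greater": operator.gt, "greaterorequal": operator.ge,
}

def boolean_test_fires(samples, comparison, reference):
    """Return the sample values at which the event fires."""
    fired = []
    prev_result = False            # condition assumed unmet before sampling
    for value in samples:
        result = COMPARISONS[comparison](value, reference)
        if result and not prev_result:
            fired.append(value)    # rising edge of the condition
        prev_result = result
    return fired

# equal-to-5 fires at the first 5, not at the second consecutive 5,
# and again only after the value left 5 and returned to it.
print(boolean_test_fires([3, 5, 5, 7, 5], "equal", 5))   # [5, 5]
```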
Threshold test
A threshold test regularly compares the value of the monitored object with the threshold values.
· A rising alarm event is triggered if the value of the monitored object is greater than or equal to the rising threshold.
· A falling alarm event is triggered if the value of the monitored object is smaller than or equal to the falling threshold.
· A rising alarm event is triggered if the difference between the current sampled value and the previous sampled value is greater than or equal to the delta rising threshold.
· A falling alarm event is triggered if the difference between the current sampled value and the previous sampled value is smaller than or equal to the delta falling threshold.
· A falling alarm event is triggered if the values of the monitored object, the rising threshold, and the falling threshold are the same.
· A falling alarm event is triggered if the delta rising threshold, the delta falling threshold, and the difference between the current sampled value and the previous sampled value are the same.
The alarm management module defines the set or notification action to take on alarm events.
If the value of the monitored object crosses a threshold multiple times in succession, the managed device triggers an alarm event only for the first crossing. For example, if the value of a sampled object crosses the rising threshold multiple times before it crosses the falling threshold, only the first crossing triggers a rising alarm event, as shown in Figure 54.
Figure 54 Rising and falling alarm events
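The first-crossing-only rule is standard threshold hysteresis: after a rising alarm, no further rising alarm fires until the value has fallen to the falling threshold, and vice versa. A minimal Python sketch of the absolute-sample case (an illustrative model of the behavior in Figure 54):

```python
# Illustrative model of the Event MIB threshold test with hysteresis:
# after a rising alarm, the rising test is re-armed only once the value
# drops to or below the falling threshold (and vice versa).

def threshold_events(samples, rising, falling):
    events, last = [], None        # last alarm sent: "rising"/"falling"
    for value in samples:
        if value >= rising and last != "rising":
            events.append(("rising", value))
            last = "rising"
        elif value <= falling and last != "falling":
            events.append(("falling", value))
            last = "falling"
    return events

# Crossing the rising threshold twice in a row fires only once (130 is
# silent); the next rising alarm fires only after the falling crossing.
print(threshold_events([60, 120, 130, 40, 110], rising=100, falling=50))
# [('rising', 120), ('falling', 40), ('rising', 110)]
```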
Event actions
The Event MIB triggers one or both of the following actions when the trigger condition is met:
· Set action—Uses SNMP to set the value of the monitored object.
· Notification action—Uses SNMP to send a notification to the NMS. If an object list is specified for the notification action, the notification will carry the specified objects in the object list.
Prerequisites
Before you configure the Event MIB, make sure the SNMP agent and NMS are configured correctly.
Event MIB configuration task list
To configure the Event MIB:
Tasks at a glance |
(Required.) Configuring Event MIB object lists |
(Required.) Configuring an event |
(Required.) Configuring a trigger · (Optional.) Configuring a Boolean trigger test · (Optional.) Configuring an existence trigger test · (Optional.) Configuring a threshold trigger test |
(Required.) Enabling SNMP notifications for Event MIB |
Configuring Event MIB sampling
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Set the minimum sampling interval. |
snmp mib event sample minimum min-number |
By default, the minimum sampling interval is 1 second. The sampling intervals of triggers must be greater than or equal to the minimum sampling interval. Changing the minimum sampling interval does not affect existing instances. |
3. (Optional.) Configure the maximum number of object instances that can be concurrently sampled. |
snmp mib event sample instance maximum max-number |
By default, the value is 0, and the maximum number of object instances that can be concurrently sampled is limited only by the available resources. Changing this maximum does not affect existing instances. |
Configuring Event MIB object lists
You can specify an Event MIB object list by using the object list owner command in trigger view, trigger-test view (trigger-Boolean view, trigger-existence view, or trigger-threshold view), and action-notification view. The objects in the list will be added to the triggered notifications.
If you specify object lists in two or all three of these views, the object lists are added to the triggered notification in this sequence: trigger view, trigger-test view, and action-notification view.
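The merge order can be pictured as a simple ordered concatenation (illustrative only; the list contents are made-up example OIDs):

```python
# Illustrative: object lists from the three views are appended to the
# notification in a fixed order: trigger view first, then trigger-test
# view, then action-notification view.

def notification_objects(trigger_list, test_list, notify_list):
    return trigger_list + test_list + notify_list

print(notification_objects(["ifDescr.2"], ["ifOperStatus.2"], ["sysUpTime.0"]))
```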
To configure an object list:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure an Event MIB object list. |
snmp mib event object list owner group-owner name group-name object-index oid object-identifier [ wildcard ] |
By default, no Event MIB object lists exist. Use the current username as the object list owner. |
Configuring an event
Creating an event
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create an event. |
snmp mib event owner event-owner name event-name |
By default, no events exist. The owner must be an SNMPv3 user. |
3. (Optional.) Configure a description for the event. |
description text |
By default, an event does not have a description. |
4. Specify an action for the event. |
action { notification | set } |
By default, no action is specified for an event. |
5. Enable the event. |
event enable |
By default, an event is disabled. To enable the event action for the Boolean, existence, or threshold trigger test, you must enable the event. |
Configuring a set action for an event
When you enable a set action, a set entry is created automatically. All fields in the entry have default values.
To configure a set action:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter event view. |
snmp mib event owner event-owner name event-name |
N/A |
3. Enable the set action and enter set action view. |
action set |
N/A |
4. Specify an object by its OID for the set action. |
oid object-identifier |
By default, no object is specified for the set action. |
5. Enable wildcard search for OIDs. |
wildcard oid |
By default, the set-action object is fully specified. This command must be used in conjunction with the oid object-identifier command. |
6. Set the value for the object. |
value integer-value |
The default value for the object is 0. |
7. (Optional.) Specify a context for the object. |
context context-name |
By default, no context is specified for an object. |
8. (Optional.) Enable wildcard search for contexts. |
wildcard context |
By default, the context for an object is fully specified. This command must be used in conjunction with the context context-name command. |
Configuring a notification action for an event
When you enable a notification action, a notification entry is created automatically. All fields in the entry have default values.
To configure a notification action:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter event view. |
snmp mib event owner event-owner name event-name |
N/A |
3. Enable the notification action and enter notification action view. |
action notification |
N/A |
4. Specify a notification OID. |
oid object-identifier |
By default, no notification OID is specified. |
5. Specify an object list to be added to the notification triggered by the event. |
object list owner group-owner name group-name |
By default, no object list is specified for the notification action. |
Configuring a trigger
You can specify a Boolean test, an existence test, or a threshold test for a trigger. For more information, see "Configuring a Boolean trigger test," "Configuring an existence trigger test," and "Configuring a threshold trigger test."
To configure a trigger:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a trigger and enter its view, or enter the view of an existing trigger. |
snmp mib event trigger owner trigger-owner name trigger-name |
By default, no triggers exist. The owner must be an SNMPv3 user. |
3. (Optional.) Configure a description for the trigger. |
description text |
By default, a trigger does not have a description. |
4. Set the sampling interval. |
frequency interval |
By default, the sampling interval is 600 seconds. Make sure the sampling interval is greater than or equal to the Event MIB minimum sampling interval. |
5. Specify the sampling method. |
sample { absolute | delta } |
The default sampling method is absolute. |
6. Specify the object to be sampled by its OID. |
oid object-identifier |
By default, the OID is 0.0. No object is specified for a trigger. The mteTriggerEnabled and mteTriggerTargetTag objects are read-only and cannot be sampled. |
7. Enable wildcard search for OIDs. |
wildcard oid |
By default, the object to be monitored is fully specified. This command must be used in conjunction with the oid object-identifier command. |
8. (Optional.) Configure a context for the monitored object. |
context context-name |
By default, no context is configured for a monitored object. |
9. (Optional.) Enable wildcard search for contexts. |
wildcard context |
By default, the context for a monitored object is fully specified. This command must be used in conjunction with the context context-name command. |
10. (Optional.) Specify the object list to be added to the notification triggered by the event. |
object list owner group-owner name group-name |
By default, no object list is specified for a trigger. |
11. Specify a test type for the trigger. |
test { boolean | existence | threshold } |
By default, no test type is specified for a trigger. |
12. Enable the trigger. |
trigger enable |
By default, a trigger is disabled. |
Configuring a Boolean trigger test
When you enable a Boolean trigger test, a Boolean entry is created automatically. All fields in the entry have default values.
To configure a Boolean trigger test:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter trigger view. |
snmp mib event trigger owner trigger-owner name trigger-name |
N/A |
3. Enter trigger-Boolean view. |
test boolean |
N/A |
4. Specify a Boolean comparison type. |
comparison { equal | greater | greaterorequal | less | lessorequal | unequal } |
The default Boolean comparison type is unequal. |
5. Set the reference value for the Boolean trigger test. |
value integer-value |
The default reference value for a Boolean trigger test is 0. |
6. Configure the event for the Boolean trigger test. |
event owner event-owner name event-name |
By default, no event is configured for a Boolean trigger test. |
7. (Optional.) Specify the object list to be added to the notification triggered by the event. |
object list owner group-owner name group-name |
By default, no object list is specified for a Boolean trigger test. |
8. (Optional.) Enable the event to be triggered when the trigger condition is met at the first sampling. |
startup enable |
By default, the event is triggered when the trigger condition is met at the first sampling. |
Configuring an existence trigger test
When you enable an existence trigger test, an existence entry is created automatically. All fields in the entry have default values.
To configure an existence trigger test:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter trigger view. |
snmp mib event trigger owner trigger-owner name trigger-name |
N/A |
3. Enter trigger-existence view. |
test existence |
N/A |
4. Specify the event for the existence trigger test. |
event owner event-owner name event-name |
By default, no event is specified for an existence trigger test. The owner must be an SNMPv3 user. |
5. (Optional.) Specify the object list to be added to the notification triggered by the event. |
object list owner group-owner name group-name |
By default, no object list is specified for an existence trigger test. |
6. Specify an existence trigger test type. |
type { absent | changed | present } |
The default existence trigger test types are present and absent. |
7. Specify an existence trigger test type for the first sampling. |
startup { absent | present } |
For the first sampling, you must execute the startup { absent | present } command to enable the event trigger. By default, both the present and absent existence trigger test types are allowed for the first sampling. |
Configuring a threshold trigger test
When you enable a threshold trigger test, a threshold entry is created automatically. All fields in the entry have default values.
To configure a threshold trigger test:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter trigger view. |
snmp mib event trigger owner trigger-owner name trigger-name |
You can only specify an existing SNMPv3 user as the trigger owner. |
3. Enter trigger-threshold view. |
test threshold |
N/A |
4. Specify the object list to be added to the notification triggered by the event. |
object list owner group-owner name group-name |
By default, no object list is specified for a threshold trigger test. The owner must be an SNMPv3 user. |
5. (Optional.) Specify the type of the threshold trigger test for the first sampling. |
startup { falling | rising | rising-or-falling } |
For the first sampling, you must execute the startup { falling | rising | rising-or-falling } command to enable the event trigger. The default threshold trigger test type for the first sampling is rising-or-falling. |
6. Specify the delta falling threshold and the falling alarm event triggered when the sampled value is smaller than or equal to the threshold. |
delta falling { event owner event-owner name event-name | value integer-value } |
By default, the delta falling threshold is 0, and no falling alarm event is specified. |
7. Specify the delta rising threshold and the rising alarm event triggered when the sampled value is greater than or equal to the threshold. |
delta rising { event owner event-owner name event-name | value integer-value } |
By default, the delta rising threshold is 0, and no rising alarm event is specified. |
8. Specify the falling threshold and the falling alarm event triggered when the sampled value is smaller than or equal to the threshold. |
falling { event owner event-owner name event-name | value integer-value } |
By default, the falling threshold is 0, and no falling alarm event is specified. |
9. Specify the rising threshold and the rising alarm event triggered when the sampled value is greater than or equal to the threshold. |
rising { event owner event-owner name event-name | value integer-value } |
By default, the rising threshold is 0, and no rising alarm event is specified. |
Enabling SNMP notifications for Event MIB
To report critical Event MIB events to an NMS, enable SNMP notifications for Event MIB. For Event MIB event notifications to be sent correctly, you must also configure SNMP on the device. For more information about SNMP configuration, see the network management and monitoring configuration guide for the device.
To configure SNMP notifications for Event MIB:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable SNMP notifications for Event MIB. |
snmp-agent trap enable event-mib |
By default, SNMP notifications are enabled for Event MIB. |
Displaying and maintaining the Event MIB
Execute display commands in any view.
Task |
Command |
Display global Event MIB configuration and statistics. |
display snmp mib event summary |
Display trigger information. |
display snmp mib event trigger [ owner trigger-owner name trigger-name ] |
Display event information. |
display snmp mib event event [ owner event-owner name event-name ] |
Display object list information. |
display snmp mib event object list [ owner group-owner name group-name ] |
Display Event MIB configuration and statistics. |
display snmp mib event |
Event MIB configuration examples
Existence trigger test configuration example
Network requirements
As shown in Figure 55, the device acts as the agent. Use Event MIB to monitor the device. When interface hot-swap or virtual interface creation or deletion occurs on the device, the agent sends an mteTriggerFired notification to the NMS.
Configuration procedure
1. Configure the device:
# Add the user owner1 to the SNMPv3 group g3. Assign g3 the right to access the MIB view a.
<Sysname> system-view
[Sysname] snmp-agent usm-user v3 owner1 g3
[Sysname] snmp-agent group v3 g3 read-view a write-view a notify-view a
[Sysname] snmp-agent mib-view included a iso
# Set the SNMP context to contextnameA.
[Sysname] snmp-agent context contextnameA
# Configure the device to use the username owner1 to send SNMPv3 notifications to the NMS at 192.168.1.26.
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params securityname owner1 v3
[Sysname] snmp-agent trap enable event-mib
2. Set the Event MIB minimum sampling interval to 50 seconds and set the maximum number to 100 for object instances that can be concurrently sampled.
[Sysname] snmp mib event sample minimum 50
[Sysname] snmp mib event sample instance maximum 100
3. Configure a trigger:
# Create a trigger. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the sampling interval is greater than or equal to the Event MIB minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify the object IfIndex OID 1.3.6.1.2.1.2.2.1.1 as the monitored object. Enable wildcard search for OIDs.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.2.1.2.2.1.1
[Sysname-trigger-owner1-triggerA] wildcard oid
# Configure the context contextnameA for the monitored object and enable wildcard search for contexts.
[Sysname-trigger-owner1-triggerA] context contextnameA
[Sysname-trigger-owner1-triggerA] wildcard context
# Specify the existence trigger test for the trigger.
[Sysname-trigger-owner1-triggerA] test existence
[Sysname-trigger-owner1-triggerA-existence] quit
# Enable the trigger.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit
Verifying the configuration
# Display Event MIB brief information.
[Sysname] display snmp mib event summary
TriggerFailures : 0
EventFailures : 0
SampleMinimum : 50
SampleInstanceMaximum : 100
SampleInstance : 20
SampleInstancesHigh : 20
SampleInstanceLacks : 0
# Display information about the trigger with the owner owner1 and the name triggerA.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : existence
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.2.1.2.2.1.1<ifIndex>
TriggerValueIDWildcard : true
TriggerTargetTag : N/A
TriggerContextName : contextnameA
TriggerContextNameWildcard : true
TriggerFrequency(in seconds): 60
TriggerObjOwner : N/A
TriggerObjName : N/A
TriggerEnabled : true
Existence entry:
ExiTest : present | absent
ExiStartUp : present | absent
ExiObjOwner : N/A
ExiObjName : N/A
ExiEvtOwner : N/A
ExiEvtName : N/A
# Create VLAN-interface 2 on the device.
[Sysname] vlan 2
[Sysname-vlan2] quit
[Sysname] interface vlan 2
The NMS receives an mteTriggerFired notification from the device.
Boolean trigger test configuration example
Network requirements
As shown in Figure 56, the device acts as the agent. The NMS uses SNMPv3 to monitor and manage the device. Configure a trigger and a Boolean trigger test for the trigger. When the trigger condition is met, the device sends an mteTriggerFired notification to the NMS.
Figure 56 Network diagram
Configuration procedure
1. Configure the device:
# Add the user owner1 to the SNMPv3 group g3. Assign g3 the right to access the MIB view a.
<Sysname> system-view
[Sysname] snmp-agent usm-user v3 owner1 g3
[Sysname] snmp-agent group v3 g3 read-view a write-view a notify-view a
[Sysname] snmp-agent mib-view included a iso
# Configure the device to use the username owner1 to send SNMPv3 notifications to the NMS at 192.168.1.26.
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params securityname owner1 v3
# Enable SNMP notifications for Event MIB.
[Sysname] snmp-agent trap enable event-mib
2. Set the Event MIB minimum sampling interval to 50 seconds and set the maximum number to 100 for object instances that can be concurrently sampled.
[Sysname] snmp mib event sample minimum 50
[Sysname] snmp mib event sample instance maximum 100
3. Configure the Event MIB object lists. When a notification action is triggered, the system adds the objects in the specified object list to the notification.
[Sysname] snmp mib event object list owner owner1 name objectA 1 oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.6.11
[Sysname] snmp mib event object list owner owner1 name objectB 1 oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11
[Sysname] snmp mib event object list owner owner1 name objectC 1 oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.8.11
4. Configure an event:
# Create an event. Specify its owner as owner1 and its name as EventA.
[Sysname] snmp mib event owner owner1 name EventA
# Specify the notification action for the event.
[Sysname-event-owner1-EventA] action notification
# Specify the notification object hh3cEntityExtMemUsageThresholdNotification by its OID 1.3.6.1.4.1.25506.2.6.2.0.5 for the notification.
[Sysname-event-owner1-EventA-notification] oid 1.3.6.1.4.1.25506.2.6.2.0.5
# Specify the object list to be added to the notification when the notification action is triggered.
[Sysname-event-owner1-EventA-notification] object list owner owner1 name objectC
[Sysname-event-owner1-EventA-notification] quit
# Enable the event.
[Sysname-event-owner1-EventA] event enable
[Sysname-event-owner1-EventA] quit
5. Configure a trigger:
# Create a trigger. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the interval is greater than or equal to the global minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify the monitored object by its OID.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11
# Specify the object list to be added to the notification when the notification action is triggered.
[Sysname-trigger-owner1-triggerA] object list owner owner1 name objectA
# Enable the Boolean trigger test. Specify the comparison type, reference value, event, and object list for the test.
[Sysname-trigger-owner1-triggerA] test boolean
[Sysname-trigger-owner1-triggerA-boolean] comparison greater
[Sysname-trigger-owner1-triggerA-boolean] value 10
[Sysname-trigger-owner1-triggerA-boolean] event owner owner1 name EventA
[Sysname-trigger-owner1-triggerA-boolean] object list owner owner1 name objectB
[Sysname-trigger-owner1-triggerA-boolean] quit
# Enable the trigger.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit
Verifying the configuration
# Display Event MIB configuration and statistics.
[Sysname] display snmp mib event summary
TriggerFailures : 0
EventFailures : 0
SampleMinimum : 50
SampleInstanceMaximum : 100
SampleInstance : 1
SampleInstancesHigh : 1
SampleInstanceLacks : 0
# Display information about the Event MIB object lists.
[Sysname] display snmp mib event object list
Object list objectA owned by owner1:
ObjIndex : 1
ObjID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.6.11<hh3cEntityExt
CpuUsage.11>
ObjIDWildcard : false
Object list objectB owned by owner1:
ObjIndex : 1
ObjID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11<hh3cEntityExt
CpuUsageThreshold.11>
ObjIDWildcard : false
Object list objectC owned by owner1:
ObjIndex : 1
ObjID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.8.11<hh3cEntityExt
MemUsage.11>
ObjIDWildcard : false
# Display information about the event.
[Sysname] display snmp mib event event owner owner1 name EventA
Event entry EventA owned by owner1:
EvtComment : N/A
EvtAction : notification
EvtEnabled : true
Notification entry:
NotifyOID : 1.3.6.1.4.1.25506.2.6.2.0.5<hh3cEntityExtMemUsag
eThresholdNotification>
NotifyObjOwner : owner1
NotifyObjName : objectC
# Display information about the trigger.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : boolean
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11<hh3cEntityExt
MemUsageThreshold.11>
TriggerValueIDWildcard : false
TriggerTargetTag : N/A
TriggerContextName : N/A
TriggerContextNameWildcard : false
TriggerFrequency(in seconds): 60
TriggerObjOwner : owner1
TriggerObjName : objectA
TriggerEnabled : true
Boolean entry:
BoolCmp : greater
BoolValue : 10
BoolStartUp : true
BoolObjOwner : owner1
BoolObjName : objectB
BoolEvtOwner : owner1
BoolEvtName : EventA
# When the value of the monitored object 1.3.6.1.4.1.25506.2.6.1.1.1.1.9.11 is greater than 10, the NMS receives an mteTriggerFired notification.
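The boolean test behavior above can be modeled in a few lines of Python. This is an illustrative sketch of the Event MIB boolean trigger semantics (sample at each interval, fire the event when the comparison becomes true), not device code; the function name and the exact startup handling are assumptions for illustration:

```python
def boolean_trigger_events(samples, value=10, cmp="greater", startup=True):
    """Model of an Event MIB boolean trigger test (illustrative only).

    Returns the indexes of samples that would fire the event: the first
    sample if it meets the condition and startup is true, and any later
    sample where the condition transitions from false to true.
    """
    ops = {"greater": lambda v: v > value,
           "less": lambda v: v < value,
           "equal": lambda v: v == value}
    test = ops[cmp]
    fired, prev = [], None
    for i, v in enumerate(samples):
        ok = test(v)
        if ok and (prev is False or (prev is None and startup)):
            fired.append(i)
        prev = ok
    return fired

# Sampled values taken at each 60-second interval; comparison value is 10.
print(boolean_trigger_events([5, 8, 12, 15, 9, 11]))  # fires at indexes 2 and 5
```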
Threshold trigger test configuration example
Network requirements
As shown in Figure 57, the device acts as the agent. The NMS uses SNMPv3 to monitor and manage the device. Configure a trigger and a threshold trigger test for it. When the trigger conditions are met, the agent sends an mteTriggerFired notification to the NMS.
Figure 57 Network diagram
Configuration procedure
1. Configure the device:
# Add the user named owner1 to the SNMPv3 group g3. Assign g3 the right to access the MIB view a.
<Sysname> system-view
[Sysname] snmp-agent usm-user v3 owner1 g3
[Sysname] snmp-agent group v3 g3 read-view a write-view a notify-view a
[Sysname] snmp-agent mib-view included a iso
# Configure the agent to use the username owner1 to send SNMPv3 notifications to the NMS at 192.168.1.26.
[Sysname] snmp-agent target-host trap address udp-domain 192.168.1.26 params securityname owner1 v3
[Sysname] snmp-agent trap enable
2. Set the Event MIB minimum sampling interval to 50 seconds and set the maximum number of object instances that can be concurrently sampled to 10.
<Sysname> system-view
[Sysname] snmp mib event sample minimum 50
[Sysname] snmp mib event sample instance maximum 10
3. Configure a trigger:
# Create a trigger. Specify its owner as owner1 and its name as triggerA.
[Sysname] snmp mib event trigger owner owner1 name triggerA
# Set the sampling interval to 60 seconds. Make sure the interval is greater than or equal to the Event MIB minimum sampling interval.
[Sysname-trigger-owner1-triggerA] frequency 60
# Specify the monitored object.
[Sysname-trigger-owner1-triggerA] oid 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11
# Enable the threshold trigger test. Specify the rising threshold and the falling threshold for the test.
[Sysname-trigger-owner1-triggerA] test threshold
[Sysname-trigger-owner1-triggerA-threshold] rising value 80
[Sysname-trigger-owner1-triggerA-threshold] falling value 10
[Sysname-trigger-owner1-triggerA-threshold] quit
[Sysname-trigger-owner1-triggerA-threshold] quit
# Enable the trigger.
[Sysname-trigger-owner1-triggerA] trigger enable
[Sysname-trigger-owner1-triggerA] quit
Verifying the configuration
# Display Event MIB configuration and statistics.
[Sysname] display snmp mib event summary
TriggerFailures : 0
EventFailures : 0
SampleMinimum : 50
SampleInstanceMaximum : 10
SampleInstance : 1
SampleInstancesHigh : 1
SampleInstanceLacks : 0
# Display information about the trigger.
[Sysname] display snmp mib event trigger owner owner1 name triggerA
Trigger entry triggerA owned by owner1:
TriggerComment : N/A
TriggerTest : threshold
TriggerSampleType : absoluteValue
TriggerValueID : 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11<hh3cEntityExt
CpuUsageThreshold.11>
TriggerValueIDWildcard : false
TriggerTargetTag : N/A
TriggerContextName : N/A
TriggerContextNameWildcard : false
TriggerFrequency(in seconds): 60
TriggerObjOwner : N/A
TriggerObjName : N/A
TriggerEnabled : true
Threshold entry:
ThresStartUp : risingOrFalling
ThresRising : 80
ThresFalling : 10
ThresDeltaRising : 0
ThresDeltaFalling : 0
ThresObjOwner : N/A
ThresObjName : N/A
ThresRisEvtOwner : N/A
ThresRisEvtName : N/A
ThresFalEvtOwner : N/A
ThresFalEvtName : N/A
ThresDeltaRisEvtOwner : N/A
ThresDeltaRisEvtName : N/A
ThresDeltaFalEvtOwner : N/A
ThresDeltaFalEvtName : N/A
# When the value of the monitored object 1.3.6.1.4.1.25506.2.6.1.1.1.1.7.11 crosses the rising threshold of 80, the NMS receives an mteTriggerFired notification.
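The threshold test behavior can be sketched the same way. This is an illustrative Python model of rising/falling threshold crossings (not device code); the function name and the startup handling are assumptions for illustration:

```python
def threshold_trigger_events(samples, rising=80, falling=10,
                             startup="risingOrFalling"):
    """Model of an Event MIB threshold trigger test (illustrative only).

    Fires 'rising' when a sample crosses the rising threshold from below,
    and 'falling' when a sample crosses the falling threshold from above.
    The startup setting decides whether the first sample alone can fire.
    """
    events, prev = [], None
    for i, v in enumerate(samples):
        if prev is None:
            if v >= rising and startup in ("rising", "risingOrFalling"):
                events.append((i, "rising"))
            elif v <= falling and startup in ("falling", "risingOrFalling"):
                events.append((i, "falling"))
        else:
            if v >= rising and prev < rising:
                events.append((i, "rising"))
            if v <= falling and prev > falling:
                events.append((i, "falling"))
        prev = v
    return events

# CPU-usage samples taken every 60 seconds; thresholds are 80 and 10.
print(threshold_trigger_events([50, 85, 90, 8, 60]))
```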
Configuring NETCONF
Overview
Network Configuration Protocol (NETCONF) is an XML-based network management protocol with filtering capabilities. It provides programmable mechanisms to manage and configure network devices. Through NETCONF, you can configure device parameters, retrieve parameter values, and get statistics information.
In NETCONF messages, each data item is contained in a fixed element. This enables different devices from the same vendor to provide the same access method and the same result presentation method. For devices from different vendors, XML mapping in NETCONF messages can achieve the same effect. In a network environment containing devices from different vendors, you can develop a NETCONF-based NMS system to configure and manage all the devices in a simple and effective way.
NETCONF structure
NETCONF has four layers: content layer, operations layer, RPC layer, and transport protocol layer.
Table 9 NETCONF layers and XML layers
NETCONF layer |
XML layer |
Description |
Content |
Configuration data, status data, and statistics information |
The content layer contains a set of managed objects, which can be configuration data, status data, and statistics information. For information about the operable data, see the NETCONF XML API reference for the device. |
Operations |
<get>,<get-config>,<edit-config>… |
The operations layer defines a set of base operations invoked as RPC methods with XML-encoded parameters. NETCONF base operations include data retrieval operations, configuration operations, lock operations, and session operations. For the device supported operations, see "Appendix A Supported NETCONF operations." |
RPC |
<rpc>,<rpc-reply> |
The RPC layer provides a simple, transport-independent framing mechanism for encoding RPCs. The <rpc> and <rpc-reply> elements are used to enclose NETCONF requests and responses (data at the operations layer and the content layer). |
Transport Protocol |
· In non-FIPS mode: Telnet, SSH, HTTP, HTTPS, and the console port. · In FIPS mode: SSH, HTTPS, and the console port. |
The transport protocol layer provides reliable, connection-oriented, serial data links. In non-FIPS mode, the following login methods are available: · You can log in through Telnet, SSH, or the console port to perform NETCONF operations at the CLI. · You can log in through HTTP or HTTPS to perform NETCONF operations or perform NETCONF over SOAP operations. In FIPS mode, all login methods are the same as in non-FIPS mode except that you cannot use HTTP or Telnet. |
NETCONF message format
NETCONF
IMPORTANT: When configuring NETCONF in XML view, you must add the end mark "]]>]]>" at the end of an XML message. Otherwise, the device cannot identify the message. Examples in this chapter do not have this end mark. Do add it in actual operations. |
All NETCONF messages are XML-based and comply with RFC 4741. Each incoming NETCONF message must pass an XML schema check before it can be processed. If a NETCONF message fails the XML schema check, the device sends an error message to the client.
For information about the NETCONF operations supported by the device and the operable data, see the NETCONF XML API reference for the device.
The following example shows a NETCONF message for getting all parameters of all interfaces on the device:
<?xml version="1.0" encoding="utf-8"?>
<rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
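A request like the one above can also be generated programmatically. The following sketch uses only the Python standard library; the helper name is an assumption, and the namespaces are copied from the example above. It also appends the "]]>]]>" end mark required when pasting messages into XML view:

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
DATA_NS = "http://www.h3c.com/netconf/data:1.0"

def build_get_bulk_all_interfaces(message_id="100"):
    """Build the <get-bulk> request for all parameters of all interfaces."""
    rpc = ET.Element("rpc", {"message-id": message_id, "xmlns": NETCONF_NS})
    get_bulk = ET.SubElement(rpc, "get-bulk")
    flt = ET.SubElement(get_bulk, "filter", {"type": "subtree"})
    top = ET.SubElement(flt, "top", {"xmlns": DATA_NS})
    interfaces = ET.SubElement(ET.SubElement(top, "Ifmgr"), "Interfaces")
    ET.SubElement(interfaces, "Interface")
    # XML view requires the "]]>]]>" end mark after every message.
    return ET.tostring(rpc, encoding="unicode") + "]]>]]>"

print(build_get_bulk_all_interfaces())
```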
NETCONF over SOAP
All NETCONF over SOAP messages are XML-based and comply with RFC 4741. NETCONF messages are contained in the <Body> element of SOAP messages. NETCONF over SOAP messages also comply with the following rules:
· SOAP messages must use the SOAP Envelope namespaces.
· SOAP messages must use the SOAP Encoding namespaces.
· SOAP messages cannot contain the following information:
? DTD reference.
? XML processing instructions.
The following example shows a NETCONF over SOAP message for getting all parameters of all interfaces on the device:
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
<env:Header>
<auth:Authentication env:mustUnderstand="1" xmlns:auth="http://www.h3c.com/netconf/base:1.0">
<auth:AuthInfo>800207F0120020C</auth:AuthInfo>
</auth:Authentication>
</env:Header>
<env:Body>
<rpc message-id ="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
</env:Body>
</env:Envelope>
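The SOAP wrapping shown above is mechanical and can be scripted. The following standard-library Python sketch (the helper name is an assumption) places a serialized NETCONF <rpc> into the <env:Body> element and adds the authentication header:

```python
import xml.etree.ElementTree as ET

ENV_NS = "http://www.w3.org/2003/05/soap-envelope"
AUTH_NS = "http://www.h3c.com/netconf/base:1.0"

def wrap_in_soap(rpc_xml, auth_info):
    """Wrap a serialized NETCONF <rpc> string in a SOAP envelope.

    auth_info is the opaque token carried in <auth:AuthInfo>.
    """
    env = ET.Element("env:Envelope", {"xmlns:env": ENV_NS})
    header = ET.SubElement(env, "env:Header")
    auth = ET.SubElement(header, "auth:Authentication",
                         {"env:mustUnderstand": "1", "xmlns:auth": AUTH_NS})
    ET.SubElement(auth, "auth:AuthInfo").text = auth_info
    body = ET.SubElement(env, "env:Body")
    body.append(ET.fromstring(rpc_xml))
    return ET.tostring(env, encoding="unicode")

rpc = ('<rpc message-id="100" '
       'xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get/></rpc>')
print(wrap_in_soap(rpc, "800207F0120020C"))
```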
How to use NETCONF
You can use NETCONF to manage and configure the device by using the methods in Table 10.
Table 10 NETCONF methods for configuring the device
Configuration tool |
Login method |
Remarks |
CLI |
· Console port · SSH · Telnet |
To implement NETCONF operations, copy valid NETCONF messages to the CLI in XML view. |
Custom interface |
N/A |
To use this method, you must enable NETCONF over SOAP to encode the NETCONF messages sent from a custom interface in SOAP. |
Protocols and standards
· RFC 3339, Date and Time on the Internet: Timestamps
· RFC 4741, NETCONF Configuration Protocol
· RFC 4742, Using the NETCONF Configuration Protocol over Secure Shell (SSH)
· RFC 4743, Using NETCONF over the Simple Object Access Protocol (SOAP)
· RFC 5277, NETCONF Event Notifications
· RFC 5381, Experience of Implementing NETCONF over SOAP
· RFC 5539, NETCONF over Transport Layer Security (TLS)
· RFC 6241, Network Configuration Protocol (NETCONF)
FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for features, commands, and parameters might differ in FIPS mode (see Security Configuration Guide) and non-FIPS mode.
NETCONF configuration task list
Tasks at a glance |
(Optional.) Configuring NETCONF over SOAP |
(Optional.) Enabling NETCONF over SSH |
(Optional.) Enabling NETCONF logging |
(Required.) Establishing a NETCONF session |
(Optional.) Subscribing to event notifications |
(Optional.) Locking/unlocking the configuration |
(Optional.) Performing the <get>/<get-bulk> operation |
(Optional.) Performing the <get-config>/<get-bulk-config> operation |
(Optional.) Performing the <edit-config> operation |
(Optional.) Saving, rolling back, and loading the configuration |
(Optional.) Enabling preprovisioning |
(Optional.) Filtering data |
(Optional.) Performing CLI operations through NETCONF |
(Optional.) Retrieving NETCONF information |
(Optional.) Retrieving YANG file content |
(Optional.) Retrieving NETCONF session information |
(Optional.) Terminating another NETCONF session |
(Optional.) Returning to the CLI |
Configuring NETCONF over SOAP
NETCONF messages can be encapsulated into SOAP messages and transmitted over HTTP and HTTPS. After configuring NETCONF over SOAP, you can develop a configuration interface to perform NETCONF operations.
To configure NETCONF over SOAP:
Step |
Command |
Remark |
1. Enter system view. |
system-view |
N/A |
2. Enable NETCONF over SOAP. |
· Enable NETCONF over SOAP over HTTP (not available in FIPS mode): netconf soap http enable · Enable NETCONF over SOAP over HTTPS: netconf soap https enable |
By default, the NETCONF over SOAP feature is disabled. |
3. Set the DSCP value for NETCONF over SOAP packets. |
· Set the DSCP value for NETCONF over SOAP over HTTP packets: netconf soap http dscp dscp-value · Set the DSCP value for NETCONF over SOAP over HTTPS packets: netconf soap https dscp dscp-value |
By default, the DSCP value is 0 for NETCONF over SOAP packets. |
4. Apply an ACL to NETCONF over SOAP traffic. |
· Apply an ACL to NETCONF over SOAP over HTTP traffic (not available in FIPS mode): netconf soap http acl acl-number · Apply an ACL to NETCONF over SOAP over HTTPS traffic: netconf soap https acl acl-number |
By default, no ACL is applied to NETCONF over SOAP traffic. |
5. Specify a mandatory authentication domain for NETCONF users. |
netconf soap domain domain-name |
By default, no mandatory authentication domain is specified for NETCONF users. For information about authentication domains, see Security Configuration Guide. |
Enabling NETCONF over SSH
This feature allows users to use a client to perform NETCONF operations on the device through a NETCONF over SSH connection.
To enable NETCONF over SSH:
Step |
Command |
Remark |
1. Enter system view. |
system-view |
N/A |
2. Enable NETCONF over SSH. |
netconf ssh server enable |
By default, NETCONF over SSH is disabled. |
3. Specify a port to listen for NETCONF over SSH connections. |
netconf ssh server port port-number |
By default, port 830 listens for NETCONF over SSH connections. |
Enabling NETCONF logging
NETCONF logging generates logs for different NETCONF operation sources and NETCONF operations.
To enable NETCONF logging:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable NETCONF logging. |
netconf log source { all | { agent | soap } * } { { protocol-operation { all | { action | config | get | session | set | syntax | others } * } } | verbose } |
By default, NETCONF logging is disabled. |
Establishing a NETCONF session
A client must send a hello message to a device to complete the capabilities exchange before the device processes other requests from the client.
You can use the aaa session-limit command to set the maximum number of NETCONF sessions that the device can support. If the upper limit is reached, new NETCONF users cannot access the device. For information about this command, see Security Configuration Guide.
Do not configure NETCONF when another user is configuring NETCONF. If multiple users simultaneously configure NETCONF, the configuration result returned to each user might be inconsistent with the user request.
Setting the NETCONF session idle timeout time
If no NETCONF packets are exchanged between the device and a user within the NETCONF session idle timeout time, the device tears down the session.
To set the NETCONF session idle timeout time:
Task |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Set the NETCONF session idle timeout time. |
netconf { soap | agent } idle-timeout minute |
By default, the NETCONF session idle timeout time is as follows: · 10 minutes for NETCONF over SOAP over HTTP sessions and NETCONF over SOAP over HTTPS sessions. · 0 minutes for NETCONF over SSH sessions, NETCONF over Telnet sessions, and NETCONF over console sessions. The sessions never time out. |
Entering XML view
Task |
Command |
Remarks |
Enter XML view. |
xml |
Available in user view. |
Exchanging capabilities
After you enter XML view, the client and the device exchange their capabilities before you can perform subsequent operations. The device automatically advertises its NETCONF capabilities to the client in a hello message as follows:
<?xml version="1.0" encoding="UTF-8"?><hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><capabilities><capability>urn:ietf:params:netconf:base:1.1</capability><capability>urn:ietf:params:netconf:writable-running</capability><capability>urn:ietf:params:netconf:capability:notification:1.0</capability><capability>urn:ietf:params:netconf:capability:validate:1.1</capability><capability>urn:ietf:params:netconf:capability:interleave:1.0</capability><capability>urn:h3c:params:netconf:capability:h3c-netconf-ext:1.0</capability></capabilities><session-id>1</session-id></hello>]]>]]>
The <capabilities> parameter represents the capabilities supported by the device. The supported capabilities vary by device model.
The <session-id> parameter represents the unique ID assigned to the current session.
After receiving the hello message from the device, copy the following message to notify the device of the capabilities (user-configurable) supported by the client:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
capability-set
</capability>
</capabilities>
</hello>
Use a pair of <capability> and </capability> tags to enclose each capability set.
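On the client side, the device hello can be parsed to record the session ID and capability list before replying. A standard-library Python sketch (the function name is an assumption):

```python
import xml.etree.ElementTree as ET

NS = {"nc": "urn:ietf:params:xml:ns:netconf:base:1.0"}

def parse_hello(hello_xml):
    """Extract the session ID and capability URIs from a device hello.

    The "]]>]]>" end mark is stripped before parsing, if present.
    """
    root = ET.fromstring(hello_xml.replace("]]>]]>", ""))
    caps = [c.text for c in root.findall("nc:capabilities/nc:capability", NS)]
    session_id = root.findtext("nc:session-id", namespaces=NS)
    return session_id, caps

hello = ('<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">'
         '<capabilities>'
         '<capability>urn:ietf:params:netconf:base:1.1</capability>'
         '</capabilities><session-id>1</session-id></hello>]]>]]>')
sid, caps = parse_hello(hello)
print(sid, caps)  # 1 ['urn:ietf:params:netconf:base:1.1']
```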
Subscribing to event notifications
After you subscribe to event notifications, the device sends event notifications to the NETCONF client when a subscribed event takes place on the device. The notifications include the code, group, severity, start time, and description of the events. The device supports only log subscription. For information about which event notifications you can subscribe to, see the system log messages reference for the device.
A subscription takes effect only on the current session. If the session is terminated, the subscription is automatically canceled.
You can send multiple subscription messages to subscribe to notification of multiple events.
Subscription procedure
# Copy the following message to the client to complete the subscription:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<create-subscription xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<stream>NETCONF</stream>
<filter>
<event xmlns="http://www.h3c.com/netconf/event:1.0">
<Code>code</Code>
<Group>group</Group>
<Severity>severity</Severity>
</event>
</filter>
<startTime>start-time</startTime>
<stopTime>stop-time</stopTime>
</create-subscription>
</rpc>
The <stream> parameter represents the event stream type supported by the device. Only NETCONF is supported.
The <event> parameter represents an event to which you subscribe.
The <code> parameter represents a mnemonic symbol.
The <group> parameter represents the module name.
The <severity> parameter represents the severity level of the event.
The <start-time> parameter represents the start time of the subscription.
The <stop-time> parameter represents the end time of the subscription.
After receiving the subscription request from the client, the device returns a response in the following format if the subscription is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
If the subscription fails, the device returns an error message in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>error-type</error-type>
<error-tag>error-tag</error-tag>
<error-severity>error-severity</error-severity>
<error-message xml:lang="en">error-message</error-message>
</rpc-error>
</rpc-reply>
For more information about error messages, see RFC 4741.
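A client can distinguish the two reply formats above by checking for <ok/> versus <rpc-error>. A standard-library Python sketch (the function name is an assumption; the sample reply assumes the default NETCONF namespace shown above):

```python
import xml.etree.ElementTree as ET

NS = {"nc": "urn:ietf:params:xml:ns:netconf:base:1.0"}

def check_reply(reply_xml):
    """Return (True, None) for an <ok/> reply, else (False, error message)."""
    root = ET.fromstring(reply_xml)
    if root.find("nc:ok", NS) is not None:
        return True, None
    msg = root.findtext("nc:rpc-error/nc:error-message", namespaces=NS)
    return False, msg

ok_reply = ('<rpc-reply message-id="101" '
            'xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><ok/></rpc-reply>')
print(check_reply(ok_reply))  # (True, None)
```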
Example for subscribing to event notifications
Network requirements
Configure a client to subscribe to all events with no time limitation. After the subscription is successful, all events on the device are sent to the client before the session between the device and client is terminated.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Subscribe to all events with no time limitation.
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<create-subscription xmlns ="urn:ietf:params:xml:ns:netconf:notification:1.0">
<stream>NETCONF</stream>
</create-subscription>
</rpc>
Verifying the configuration
# If the client receives the following response, the subscription is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
<ok/>
</rpc-reply>
# When another client (192.168.100.130) logs in to the device, the device sends a notification to the client that has subscribed to all events:
<?xml version="1.0" encoding="UTF-8"?>
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2011-01-04T12:30:52</eventTime>
<event xmlns="http://www.h3c.com/netconf/event:1.0">
<Group>SHELL</Group>
<Code>SHELL_LOGIN</Code>
<Slot>6</Slot>
<Severity>Notification</Severity>
<context>VTY logged in from 192.168.100.130.</context>
</event>
</notification>
Locking/unlocking the configuration
You can use multiple methods such as NETCONF, CLI, and SNMP to configure the device. During device configuration and maintenance or network troubleshooting, a user can lock the configuration to prevent other users from changing it. After that, only the user holding the lock can change the configuration, and other users can only read the configuration.
The <lock> operation locks only configuration data that can be changed by <edit-config> operations. Other configuration data is not affected by the <lock> operation.
In addition, only the user holding the lock can release the lock. After the lock is released, other users can change the current configuration or lock the configuration. If the session of the user that holds the lock is terminated, the system automatically releases the lock.
Locking the configuration
# Copy the following text to the client to lock the configuration:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<lock>
<target>
<running/>
</target>
</lock>
</rpc>
After receiving the lock request, the device returns a response in the following format if the <lock> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Unlocking the configuration
# Copy the following text to the client to unlock the configuration:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<unlock>
<target>
<running/>
</target>
</unlock>
</rpc>
After receiving the unlock request, the device returns a response in the following format if the <unlock> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
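The <lock> and <unlock> requests above differ only in the name of the operation element, so one helper can build both. A standard-library Python sketch (the helper name is an assumption):

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_lock_request(operation, message_id="101"):
    """Build a <lock> or <unlock> request on the running configuration."""
    assert operation in ("lock", "unlock")
    rpc = ET.Element("rpc", {"message-id": message_id, "xmlns": NETCONF_NS})
    op = ET.SubElement(rpc, operation)
    target = ET.SubElement(op, "target")
    ET.SubElement(target, "running")
    return ET.tostring(rpc, encoding="unicode")

print(build_lock_request("lock"))
print(build_lock_request("unlock"))
```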
Example for locking the configuration
Network requirements
Lock the device configuration so that other users cannot change the device configuration.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Lock the configuration.
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<lock>
<target>
<running/>
</target>
</lock>
</rpc>
Verifying the configuration
If the client receives the following response, the <lock> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
If another client sends a lock request, the device returns the following response:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rpc-error>
<error-type>protocol</error-type>
<error-tag>lock-denied</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en"> Lock failed because the NETCONF lock is held by another session.</error-message>
<error-info>
<session-id>1</session-id>
</error-info>
</rpc-error>
</rpc-reply>
The output shows that the <lock> operation failed because the client with session ID 1 held the lock, and only the client holding the lock can release the lock.
Performing service operations
You can use NETCONF to perform service operations on the device, such as retrieving and modifying the specified information. The basic operations include <get>, <get-bulk>, <get-config>, <get-bulk-config>, and <edit-config>, which are used to retrieve all data, retrieve configuration data, and edit the data of the specified module. For more information, see the NETCONF XML API reference for the device.
|
NOTE: For the <get>, <get-bulk>, <get-config>, and <get-bulk-config> operations, NETCONF will output the retrieved data to the client with unidentifiable characters replaced with question marks (?), if any. |
Performing the <get>/<get-bulk> operation
The <get> operation is used to retrieve device configuration and state information that match the conditions. Because it returns all matching data in a single response, this operation can be inefficient for tables with many entries.
The <get-bulk> operation is used to retrieve a number of data entries starting from the data entry next to the one with the specified index. One data entry contains a device configuration entry and a state information entry. The data entry quantity is defined by the count attribute, and the index is specified by the index attribute. The returned output does not include the index information. If you do not specify the index attribute, the index value starts with 1 by default.
The <get-bulk> operation retrieves all remaining data entries starting from the data entry next to the one with the specified index if either of the following conditions occurs:
· You do not specify the count attribute.
· The number of matching data entries is less than the value of the count attribute.
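The index/count selection rules above can be modeled in plain Python to make the paging behavior concrete. This is an illustrative sketch (the function name and sample data are assumptions), not device code:

```python
def get_bulk_select(entries, index=None, count=None):
    """Model of <get-bulk> index/count selection (illustrative only).

    entries maps index -> data entry. Entries whose index is greater than
    the given index are returned (all of them when index is omitted, since
    indexing starts at 1), up to count entries; with no count, or with
    fewer matches than count, all remaining entries are returned.
    """
    start = 0 if index is None else index
    selected = [v for k, v in sorted(entries.items()) if k > start]
    return selected if count is None else selected[:count]

logs = {10: "log10", 11: "log11", 12: "log12", 13: "log13"}
print(get_bulk_select(logs, index=10, count=2))  # ['log11', 'log12']
print(get_bulk_select(logs, index=12))           # ['log13']
```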
# Copy the following text to the client to perform the <get> operation:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<getoperation>
<filter>
<top xmlns="http://www.h3c.com/netconf/data:1.0">
Specify the module, submodule, table name, and column name
</top>
</filter>
</getoperation>
</rpc>
The <getoperation> parameter can be <get> or <get-bulk>. The <filter> element is used to filter data, and it can contain module name, submodule name, table name, and column name.
· If the module name and the submodule name are not provided, the operation retrieves the data for all modules and submodules. If a module name or a submodule name is provided, the operation retrieves the data for the specified module or submodule.
· If the table name is not provided, the operation retrieves the data for all tables. If a table name is provided, the operation retrieves the data for the specified table.
· If only the index column is provided, the operation retrieves the data for all columns. If the index column and other columns are provided, the operation retrieves the data for the index column and the specified columns.
The <get> and <get-bulk> messages are similar. A <get-bulk> message carries the count and index attributes. The following is a <get-bulk> message example:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:xc="http://www.h3c.com/netconf/base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0" xmlns:base="http://www.h3c.com/netconf/base:1.0">
<Syslog>
<Logs xc:count="5">
<Log>
<Index>10</Index>
</Log>
</Logs>
</Syslog>
</top>
</filter>
</get-bulk>
</rpc>
The count attribute complies with the following rules:
· The count attribute can be placed in the module node and table node. It cannot be resolved in other nodes.
· When the count attribute is placed in the module node, a descendant node inherits this count attribute if the descendant node does not contain the count attribute.
Verifying the configuration
After receiving the get-bulk request, the device returns a response in the following format if the operation is successful:
<?xml version="1.0"?>
<rpc-reply message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Device state and configuration data
</data>
</rpc-reply>
Performing the <get-config>/<get-bulk-config> operation
The <get-config> and <get-bulk-config> operations are used to retrieve all non-default configurations, which are configured by using the CLI or MIB. The <get-config> and <get-bulk-config> messages can contain the <filter> element for filtering data.
The <get-config> and <get-bulk-config> messages are similar. The following is a <get-config> message example:
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
<filter>
<top xmlns="http://www.h3c.com/netconf/config:1.0">
Specify the module name, submodule name, table name, and column name
</top>
</filter>
</get-config>
</rpc>
Verifying the configuration
After receiving the get-config request, the device returns a response in the following format if the operation is successful:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
All data matching the specified filter
</data>
</rpc-reply>
Performing the <edit-config> operation
The <edit-config> operation supports the following operation attributes: merge, create, replace, remove, delete, default-operation, error-option, test-option, and incremental. For more information about these attributes, see "Appendix A Supported NETCONF operations."
# Copy the following text to perform the <edit-config> operation:
<?xml version="1.0"?>
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target><running></running></target>
<error-option>
Default operation when an error occurs
</error-option>
<config>
<top xmlns="http://www.h3c.com/netconf/config:1.0">
Specify the module name, submodule name, table name, and column name
</top>
</config>
</edit-config>
</rpc>
After receiving the edit-config request, the device returns a response in the following format if the operation is successful:
<?xml version="1.0"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
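An <edit-config> request like the one above can be assembled programmatically. The following standard-library Python sketch (the helper name is an assumption) builds the request around a caller-supplied <top> element; the example payload merges a Syslog log buffer size of 512 by using the web:operation attribute:

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
CONFIG_NS = "http://www.h3c.com/netconf/config:1.0"

def build_edit_config(config_top, message_id="100"):
    """Build an <edit-config> on the running configuration around the
    given <top> element (an xml.etree Element)."""
    rpc = ET.Element("rpc", {"message-id": message_id, "xmlns": NETCONF_NS})
    edit = ET.SubElement(rpc, "edit-config")
    target = ET.SubElement(edit, "target")
    ET.SubElement(target, "running")
    config = ET.SubElement(edit, "config")
    config.append(config_top)
    return ET.tostring(rpc, encoding="unicode")

# Example payload: merge a Syslog log buffer size of 512.
top = ET.Element("top", {"xmlns": CONFIG_NS,
                         "xmlns:web": "http://www.h3c.com/netconf/base:1.0",
                         "web:operation": "merge"})
ET.SubElement(ET.SubElement(ET.SubElement(top, "Syslog"), "LogBuffer"),
              "BufferSize").text = "512"
print(build_edit_config(top))
```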
All-module configuration data retrieval example
Network requirements
Retrieve configuration data for all modules.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Retrieve configuration data for all modules.
<rpc message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
</get-config>
</rpc>
Verifying the configuration
If the client receives the following text, the <get-config> operation is successful:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
<data>
<top xmlns="http://www.h3c.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>1307</IfIndex>
<Shutdown>1</Shutdown>
</Interface>
<Interface>
<IfIndex>1308</IfIndex>
<Shutdown>1</Shutdown>
</Interface>
<Interface>
<IfIndex>1309</IfIndex>
<Shutdown>1</Shutdown>
</Interface>
<Interface>
<IfIndex>1311</IfIndex>
<VlanType>2</VlanType>
</Interface>
<Interface>
<IfIndex>1313</IfIndex>
<VlanType>2</VlanType>
</Interface>
</Interfaces>
</Ifmgr>
<Syslog>
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
<System>
<Device>
<SysName>H3C</SysName>
<TimeZone>
<Zone>+11:44</Zone>
<ZoneName>beijing</ZoneName>
</TimeZone>
</Device>
</System>
</top>
</data>
</rpc-reply>
Syslog configuration data retrieval example
Network requirements
Retrieve configuration data for the Syslog module.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Retrieve configuration data for the Syslog module.
<rpc message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/config:1.0">
<Syslog/>
</top>
</filter>
</get-config>
</rpc>
Verifying the configuration
If the client receives the following text, the <get-config> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
<data>
<top xmlns="http://www.h3c.com/netconf/config:1.0">
<Syslog>
<LogBuffer>
<BufferSize>120</BufferSize>
</LogBuffer>
</Syslog>
</top>
</data>
</rpc-reply>
Example for retrieving a data entry for the interface table
Network requirements
Retrieve a data entry for the interface table.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
</capabilities>
</hello>
# Retrieve a data entry for the interface table.
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-bulk>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0" xmlns:web="http://www.h3c.com/netconf/base:1.0">
<Ifmgr>
<Interfaces web:count="1">
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-bulk>
</rpc>
Verifying the configuration
If the client receives the following text, the <get-bulk> operation is successful:
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
<data>
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>3</IfIndex>
<Name>HundredGigE1/0/2</Name>
<AbbreviatedName>HGE1/0/2</AbbreviatedName>
<PortIndex>3</PortIndex>
<ifTypeExt>22</ifTypeExt>
<ifType>6</ifType>
<Description>HundredGigE1/0/2 Interface</Description>
<AdminStatus>2</AdminStatus>
<OperStatus>2</OperStatus>
<ConfigSpeed>0</ConfigSpeed>
<ActualSpeed>100000</ActualSpeed>
<ConfigDuplex>3</ConfigDuplex>
<ActualDuplex>1</ActualDuplex>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>
Example for changing the value of a parameter
Network requirements
Change the log buffer size for the Syslog module to 512.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
</capabilities>
</hello>
# Change the log buffer size for the Syslog module to 512.
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:web="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<config>
<top xmlns="http://www.h3c.com/netconf/config:1.0" web:operation="merge">
<Syslog>
<LogBuffer>
<BufferSize>512</BufferSize>
</LogBuffer>
</Syslog>
</top>
</config>
</edit-config>
</rpc>
Verifying the configuration
If the client receives the following text, the <edit-config> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Saving, rolling back, and loading the configuration
Use NETCONF to save, roll back, or load the configuration.
Performing the <save>, <rollback>, or <load> operation consumes a lot of system resources. Do not perform these operations when the system resources are heavily occupied.
Saving the configuration
# Copy the following text to the client to save the device configuration to the specified file:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save OverWrite="false">
<file>Specify the configuration file name</file>
</save>
</rpc>
The name of the specified configuration file must start with the storage media name and end with the extension .cfg. If the request includes the <file> element, you must specify the file name, and the specified file will be used as the next-startup configuration file. If the request does not include the <file> element, the configuration is automatically saved to the default main next-startup configuration file.
The OverWrite attribute takes effect only when a file with the specified name already exists. If the attribute uses the default value true, the current configuration is saved and overwrites the original file. If the attribute is set to false, the current configuration cannot be saved, and the system displays an error message.
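Before sending a save request, a client can enforce these rules locally. The following Python sketch builds the request with the standard xml.etree library; the helper name and the flash: path in the usage note are illustrative assumptions, not part of the device API:

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_save_rpc(message_id, filename=None, overwrite=True):
    # Rule from above: omit <file> to save to the default main
    # next-startup configuration file; otherwise the name must end in .cfg.
    if filename is not None and not filename.endswith(".cfg"):
        raise ValueError("configuration file name must end with .cfg")
    rpc = ET.Element("rpc", {"message-id": str(message_id), "xmlns": NC_NS})
    save = ET.SubElement(rpc, "save",
                         {"OverWrite": "true" if overwrite else "false"})
    if filename is not None:
        ET.SubElement(save, "file").text = filename
    return ET.tostring(rpc, encoding="unicode")
```

For example, build_save_rpc(101, "flash:/my_config.cfg", overwrite=False) produces a request equivalent to the one above with OverWrite="false".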
After receiving the save request, the device returns a response in the following format if the <save> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Rolling back the configuration based on a configuration file
# Copy the following text to the client to roll back the configuration based on a configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<rollback>
<file>Specify the configuration file name</file>
</rollback>
</rpc>
After receiving the rollback request, the device returns a response in the following format if the <rollback> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Rolling back the configuration based on a rollback point
You can roll back the running configuration based on a rollback point when one of the following situations occurs:
· A NETCONF client sends a rollback request.
· The NETCONF session idle time is longer than the rollback idle timeout time.
· A NETCONF client is unexpectedly disconnected from the device.
To roll back the configuration based on a rollback point, perform the following tasks:
1. Lock the system.
Multiple users might simultaneously use NETCONF to configure the device. As a best practice, lock the system before rolling back the configuration to prevent other users from modifying the running configuration.
2. Mark the beginning of a <rollback> operation. For more information, see "Performing the <save-point/begin> operation."
3. Edit the device configuration. For more information, see "Performing the <edit-config> operation."
4. Configure the rollback point. For more information, see "Performing the <save-point/commit> operation."
You can repeat this step to configure multiple rollback points.
5. Roll back the configuration based on the rollback point. For more information, see "Performing the <save-point/rollback> operation."
The configuration can also be automatically rolled back based on the most recently configured rollback point when the NETCONF session idle time is longer than the rollback idle timeout time.
6. End the rollback configuration. For more information, see "Performing the <save-point/end> operation."
7. Release the lock.
Performing the <save-point/begin> operation
# Copy the following text to the client to mark the beginning of a <rollback> operation based on a rollback point:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<begin>
<confirm-timeout>100</confirm-timeout>
</begin>
</save-point>
</rpc>
The <confirm-timeout> parameter specifies the rollback idle timeout time in the range of 1 to 65535 seconds (the default is 600 seconds). This parameter is optional.
After receiving the begin request, the device returns a response in the following format if the <begin> operation is successful:
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit>
<commit-id>1</commit-id>
</commit>
</save-point>
</data>
</rpc-reply>
Performing the <save-point/commit> operation
The system supports a maximum of 50 rollback points. When the limit is reached, you must specify the force attribute to overwrite the earliest rollback point.
# Copy the following text to the client to configure the rollback point:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<commit>
<label>SUPPORT VLAN</label>
<comment>vlan 1 to 100 and interfaces. Each vlan is used for a different customer as follows: ……</comment>
</commit>
</save-point>
</rpc>
The <label> and <comment> parameters are optional.
After receiving the commit request, the device returns a response in the following format if the <commit> operation is successful:
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit>
<commit-id>2</commit-id>
</commit>
</save-point>
</data>
</rpc-reply>
Performing the <save-point/rollback> operation
# Copy the following text to the client to roll back the configuration:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<rollback>
<commit-id/>
<commit-index/>
<commit-label/>
</rollback>
</save-point>
</rpc>
The <commit-id> parameter uniquely identifies a rollback point.
The <commit-index> parameter specifies one of the 50 most recently configured rollback points. The value 0 indicates the most recently configured rollback point, and 49 indicates the earliest configured one.
The <commit-label> parameter uniquely identifies a rollback point by its label. A label is optional for a rollback point.
Specify only one of these parameters to roll back to the specified rollback point. If no parameter is specified, this operation rolls back the configuration based on the most recently configured rollback point.
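Because the three selectors are mutually exclusive, a client can build the request from whichever one is supplied. A minimal Python sketch under that assumption (the helper name is illustrative):

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_rollback_rpc(message_id, commit_id=None,
                       commit_index=None, commit_label=None):
    # At most one selector may be given; with none, the device rolls
    # back to the most recently configured rollback point.
    selectors = [("commit-id", commit_id),
                 ("commit-index", commit_index),
                 ("commit-label", commit_label)]
    given = [(tag, val) for tag, val in selectors if val is not None]
    if len(given) > 1:
        raise ValueError("specify at most one of "
                         "commit-id, commit-index, commit-label")
    rpc = ET.Element("rpc", {"message-id": str(message_id), "xmlns": NC_NS})
    rollback = ET.SubElement(ET.SubElement(rpc, "save-point"), "rollback")
    for tag, val in given:
        ET.SubElement(rollback, tag).text = str(val)
    return ET.tostring(rpc, encoding="unicode")
```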
After receiving the rollback request, the device returns a response in the following format if the <rollback> operation is successful:
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok></ok>
</rpc-reply>
Performing the <save-point/end> operation
# Copy the following text to the client to end the rollback configuration:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<end/>
</save-point>
</rpc>
After receiving the end request, the device returns a response in the following format if the <end> operation is successful:
<rpc-reply message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Performing the <save-point/get-commits> operation
# Copy the following text to the client to get the rollback point configuration records:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commits>
<commit-id/>
<commit-index/>
<commit-label/>
</get-commits>
</save-point>
</rpc>
Specify one of the <commit-id>, <commit-index>, and <commit-label> parameters to get the specified rollback point configuration records. If no parameter is specified, this operation gets records for all rollback point configurations. The following text is a <save-point>/<get-commits> request example:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commits>
<commit-label>SUPPORT VLAN</commit-label>
</get-commits>
</save-point>
</rpc>
After receiving the get-commits request, the device returns a response in the following format if the <get-commits> operation is successful:
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit-information>
<CommitID>2</CommitID>
<TimeStamp>Thu Oct 30 11:30:28 1980</TimeStamp>
<UserName>test</UserName>
<Label>SUPPORT VLAN</Label>
</commit-information>
</save-point>
</data>
</rpc-reply>
Performing the <save-point/get-commit-information> operation
# Copy the following text to the client to get the system configuration data corresponding to a rollback point:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commit-information>
<commit-information>
<commit-id/>
<commit-index/>
<commit-label/>
</commit-information>
<compare-information>
<commit-id/>
<commit-index/>
<commit-label/>
</compare-information>
</get-commit-information>
</save-point>
</rpc>
Specify one of the <commit-id>, <commit-index>, and <commit-label> parameters to get the configuration data corresponding to the specified rollback point. The <compare-information> parameter is optional. If no parameter is specified, this operation gets the configuration data corresponding to the most recently configured rollback point. The following text is a <save-point>/<get-commit-information> request example:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save-point>
<get-commit-information>
<commit-information>
<commit-label>SUPPORT VLAN</commit-label>
</commit-information>
</get-commit-information>
</save-point>
</rpc>
After receiving the get-commit-information request, the device returns a response in the following format if the <get-commit-information> operation is successful:
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<save-point>
<commit-information>
<content>
…
interface vlan 1
…
</content>
</commit-information>
</save-point>
</data>
</rpc-reply>
Loading the configuration
After you perform the <load> operation, the loaded configurations are merged into the current configuration as follows:
· New configurations are directly loaded.
· Configurations that already exist in the current configuration are replaced by those loaded from the configuration file.
Some configurations in a configuration file might conflict with the existing configurations. For the configurations in the file to take effect, delete the existing conflicting configurations, and then load the configuration file.
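As an illustration of the merge rule only (not device code), the <load> semantics behave like a dictionary update: new settings are added, and overlapping settings take the loaded file's values. The helper below is hypothetical:

```python
def merge_loaded_config(running, loaded):
    # New keys from the loaded file are added; keys present in both
    # are replaced by the loaded file's values.
    merged = dict(running)   # start from the current configuration
    merged.update(loaded)    # loaded values win on conflict
    return merged
```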
# Copy the following text to the client to load a configuration file for the device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<load>
<file>Specify the configuration file name</file>
</load>
</rpc>
The name of the specified configuration file must start with the storage media name and end with the extension .cfg.
After receiving the load request, the device returns a response in the following format if the <load> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Example for saving the configuration
Network requirements
Save the current configuration to the configuration file my_config.cfg.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Save the configuration of the device to the configuration file my_config.cfg.
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<save>
<file>my_config.cfg</file>
</save>
</rpc>
Verifying the configuration
If the client receives the following response, the <save> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Enabling preprovisioning
The <config-provisioned> operation enables preprovisioning.
· With preprovisioning disabled, the configuration for an interface module is lost if you uninstall the interface module, save the running configuration, and reboot the device. If you reinstall the interface module, you must reconfigure it.
· With preprovisioning enabled, you can view and modify the configuration for an interface module after you uninstall the interface module. If you save the running configuration and reboot the device, the configuration for the interface module is still retained. If you reinstall the interface module, the device applies the retained configuration to the interface module. You do not need to reconfigure the interface module.
To view or modify the configuration for an offline interface module, you can use only CLI commands.
Only the following commands support preprovisioning:
· Commands in the view of an interface module.
· Commands in slot view.
· The qos traffic-counter command.
Only interface modules in Normal state support preprovisioning.
# Copy the following text to the client to enable preprovisioning:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<config-provisioned>
</config-provisioned>
</rpc>
The device returns a response in the following format if preprovisioning is successfully enabled:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Filtering data
You can define a filter to filter information when you perform a <get>, <get-bulk>, <get-config>, or <get-bulk-config> operation. Data filtering includes the following types:
· Table-based filtering—Filters table information.
· Column-based filtering—Filters information for a single column.
For table-based filtering to take effect, you must configure table-based filtering before column-based filtering.
Table-based filtering
You can specify a match criterion in the filter attribute of a table row to implement table-based filtering, for example, filtering by IP address. The namespace is http://www.h3c.com/netconf/base:1.0. For information about the support for table-based match, see NETCONF XML API documents.
# Copy the following text to the client to retrieve entries matching IP address 1.1.1.0 with mask length 24 or longer from the IPv4 routing table:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:h3c="http://www.h3c.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Route>
<Ipv4Routes>
<RouteEntry h3c:filter="IP 1.1.1.0 MaskLen 24 longer"/>
</Ipv4Routes>
</Route>
</top>
</filter>
</get>
</rpc>
Column-based filtering
Column-based filtering includes full match filtering, regular expression match filtering, and conditional match filtering. Full match filtering has the highest priority and conditional match filtering has the lowest priority. When more than one filtering criterion is specified, the one with the highest priority takes effect.
Full match filtering
You can specify an element value in an XML message to implement full match filtering. If multiple element values are provided, the system returns the data that matches all the specified values.
# Copy the following text to the client to retrieve configuration data of all interfaces in UP state:
<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<AdminStatus>2</AdminStatus>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
You can also implement full match filtering by specifying, at the row node, an attribute whose name is the same as a column name of the current table. The system returns only configuration data that matches this attribute. The following XML message is equivalent to the element-value-based full match filtering above:
<rpc message-id ="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0" xmlns:data="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface data:AdminStatus="2"/>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
The above examples show that both element-value-based full match filtering and attribute-name-based full match filtering can retrieve the same configuration data for all UP interfaces.
Regular expression match filtering
To implement complex filtering on character data, you can add a regExp attribute for a specific element.
The supported data types include integer, date and time, character string, IPv4 address, IPv4 mask, IPv6 address, MAC address, OID, and time zone.
# Copy the following text to the client to retrieve the descriptions of interfaces whose description consists only of uppercase letters A through Z:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:h3c="http://www.h3c.com/netconf/base:1.0">
<get-config>
<source>
<running/>
</source>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/config:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<Description h3c:regExp="^[A-Z]*$"/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get-config>
</rpc>
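The effect of the anchored pattern in the request above can be checked locally with the same regular expression in Python (illustrative only; the function name is not part of the device API):

```python
import re

# The same anchored pattern used in the regExp attribute above:
# every character of the description must be an uppercase letter.
UPPER_ONLY = re.compile(r"^[A-Z]*$")

def matches_upper_only(description):
    # True if the description consists only of letters A through Z.
    return UPPER_ONLY.match(description) is not None
```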
Conditional match filtering
To implement complex filtering on numbers and character strings, you can add a match attribute for a specific element. Table 11 lists the conditional match operators.
Table 11 Conditional match operators
Operation | Operator | Remarks
More than | match="more:value" | More than the specified value. The supported data types include date, digit, and character string.
Less than | match="less:value" | Less than the specified value. The supported data types include date, digit, and character string.
Not less than | match="notLess:value" | Not less than the specified value. The supported data types include date, digit, and character string.
Not more than | match="notMore:value" | Not more than the specified value. The supported data types include date, digit, and character string.
Equal | match="equal:value" | Equal to the specified value. The supported data types include date, digit, character string, OID, and BOOL.
Not equal | match="notEqual:value" | Not equal to the specified value. The supported data types include date, digit, character string, OID, and BOOL.
Include | match="include:string" | Includes the specified string. The supported data types include only character string.
Not include | match="exclude:string" | Excludes the specified string. The supported data types include only character string.
Start with | match="startWith:string" | Starts with the specified string. The supported data types include character string and OID.
End with | match="endWith:string" | Ends with the specified string. The supported data types include only character string.
# Copy the following text to the client to retrieve extension information about the entity whose CPU usage is more than 50%:
<rpc message-id="100" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:h3c="http://www.h3c.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Device>
<ExtPhysicalEntities>
<Entity>
<CpuUsage h3c:match="more:50"></CpuUsage>
</Entity>
</ExtPhysicalEntities>
</Device>
</top>
</filter>
</get>
</rpc>
Example for filtering data with regular expression match
Network requirements
Retrieve all data entries whose Description column includes the string Gigabit in the Interfaces table of the Ifmgr module.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Retrieve all data entries whose Description column includes the string Gigabit in the Interfaces table of the Ifmgr module.
<?xml version="1.0"?>
<rpc message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:reg="http://www.h3c.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<Description reg:regExp="(Gigabit)+"/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
Verifying the configuration
If the client receives the following text, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:reg="http://www.h3c.com/netconf/base:1.0" message-id="100">
<data>
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>2681</IfIndex>
<Description>HundredGigE1/0/1 Interface</Description>
</Interface>
<Interface>
<IfIndex>2685</IfIndex>
<Description>HundredGigE1/0/2 Interface</Description>
</Interface>
<Interface>
<IfIndex>2689</IfIndex>
<Description>HundredGigE1/0/3 Interface</Description>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>
Example for filtering data by conditional match
Network requirements
Retrieve the Name column of entries whose IfIndex value is not less than 5000 in the Interfaces table of the Ifmgr module.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Retrieve the Name column of entries whose IfIndex value is not less than 5000 in the Interfaces table of the Ifmgr module.
<rpc message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="http://www.h3c.com/netconf/base:1.0">
<get>
<filter type="subtree">
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex nc:match="notLess:5000"/>
<Name/>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</filter>
</get>
</rpc>
Verifying the configuration
If the client receives the following text, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="http://www.h3c.com/netconf/base:1.0" message-id="100">
<data>
<top xmlns="http://www.h3c.com/netconf/data:1.0">
<Ifmgr>
<Interfaces>
<Interface>
<IfIndex>7241</IfIndex>
<Name>NULL0</Name>
</Interface>
</Interfaces>
</Ifmgr>
</top>
</data>
</rpc-reply>
Performing CLI operations through NETCONF
You can enclose command lines in XML messages to configure the device.
Performing CLI operations through NETCONF is resource intensive. As a best practice, do not perform the following tasks:
· Enclose multiple command lines in one XML message.
· Use NETCONF to perform a CLI operation when other users are performing NETCONF CLI operations.
Configuration procedure
# Copy the following text to the client to execute the commands:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
Commands
</Execution>
</CLI>
</rpc>
The <Execution> element can contain multiple commands, with one command on one line.
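A client can fill in the <Execution> element programmatically, joining the commands with newlines. A minimal Python sketch (the helper name is illustrative, not part of the device API):

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_cli_rpc(message_id, commands):
    # Each command goes on its own line inside <Execution>.
    rpc = ET.Element("rpc", {"message-id": str(message_id), "xmlns": NC_NS})
    execution = ET.SubElement(ET.SubElement(rpc, "CLI"), "Execution")
    execution.text = "\n".join(commands)
    return ET.tostring(rpc, encoding="unicode")
```

As noted above, enclosing many commands in one message is resource intensive, so keeping the list short is the safer usage.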
After receiving the CLI operation request, the device returns a response in the following format if the CLI operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
<![CDATA[Responses to the commands]]>
</Execution>
</CLI>
</rpc-reply>
CLI operation example
Configuration requirements
Send the display current-configuration command to the device.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Copy the following text to the client to execute the display current-configuration command:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution>
display current-configuration
</Execution>
</CLI>
</rpc>
Verifying the configuration
If the client receives the following text, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<CLI>
<Execution><![CDATA[
<Sysname>display current-configuration
#
undo ip redirects enable
undo ip ttl-expires enable
undo ip unreachables enable
#
stp global enable
#
lldp global enable
#
vlan 1
#
interface NULL0
#
radius scheme system
user-name-format without-domain
#
domain system
undo access-limit enable
state active
#
domain default enable system
#
return
]]>
</Execution>
</CLI>
</rpc-reply>
Retrieving NETCONF information
# Copy the following text to the client to retrieve NETCONF information:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="m-641" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get>
<filter type='subtree'>
<netconf-state xmlns='urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring'>
<getType/>
</netconf-state>
</filter>
</get>
</rpc>
The value for the <getType> parameter can be one of the following operations:
· capabilities—Retrieves device capabilities.
· datastores—Retrieves databases from the device.
· schemas—Retrieves the list of the YANG file names from the device.
· sessions—Retrieves session information from the device.
· statistics—Retrieves NETCONF statistics.
If you do not specify a value for the <getType> parameter, the retrieval operation retrieves all NETCONF information.
The retrieval operation does not support data filtering.
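The five <getType> values can be validated client-side before the request is sent. A hedged Python sketch (the helper name is an assumption; the namespaces are the ones shown above):

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
MON_NS = "urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"
GET_TYPES = {"capabilities", "datastores", "schemas",
             "sessions", "statistics"}

def build_netconf_state_rpc(message_id, get_type=None):
    # get_type of None retrieves all NETCONF information.
    if get_type is not None and get_type not in GET_TYPES:
        raise ValueError("unsupported getType: %s" % get_type)
    rpc = ET.Element("rpc", {"message-id": str(message_id), "xmlns": NC_NS})
    flt = ET.SubElement(ET.SubElement(rpc, "get"),
                        "filter", {"type": "subtree"})
    state = ET.SubElement(flt, "netconf-state", {"xmlns": MON_NS})
    if get_type is not None:
        ET.SubElement(state, get_type)
    return ET.tostring(rpc, encoding="unicode")
```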
After receiving the NETCONF information retrieval request, the device returns a response in the following format if the operation is successful:
<?xml version="1.0"?>
<rpc-reply message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
ALL NETCONF information
</data>
</rpc-reply>
Retrieving YANG file content
YANG files save the NETCONF operations supported by the device. A user can know the supported operations by retrieving and analyzing the content of YANG files.
YANG files are integrated in the device software and are named in the format of yang_identifier@yang_version.yang. You cannot view the YANG file names by executing the dir command. For information about how to retrieve the YANG file names, see "Retrieving NETCONF information."
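Given the yang_identifier@yang_version.yang naming rule, the two fields that a <get-schema> request needs can be split out of a retrieved file name. An illustrative sketch (the function name is hypothetical):

```python
def parse_yang_filename(filename):
    # Split a yang_identifier@yang_version.yang name into the
    # <identifier> and <version> values used by <get-schema>.
    if not filename.endswith(".yang") or "@" not in filename:
        raise ValueError("not a yang_identifier@yang_version.yang name")
    identifier, version = filename[:-len(".yang")].split("@", 1)
    return identifier, version
```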
# Copy the following text to the client to retrieve the YANG file named syslog-data@2015-05-07.yang:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-schema xmlns='urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring'>
<identifier>syslog-data</identifier>
<version>2015-05-07</version>
<format>yang</format>
</get-schema>
</rpc>
After receiving the get-schema request, the device returns a response in the following format if the <get-schema> operation is successful:
<?xml version="1.0"?>
<rpc-reply message-id="100"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
Content of the specified YANG file
</data>
</rpc-reply>
Retrieving NETCONF session information
You can use the <get-sessions> operation to retrieve NETCONF session information of the device.
# Copy the following message to the client to retrieve NETCONF session information from the device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>
After receiving the get-sessions request, the device returns a response in the following format if the <get-sessions> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions>
<Session>
<SessionID>Configuration session ID</SessionID>
<Line>Line information</Line>
<UserName>Name of the user creating the session</UserName>
<Since>Time when the session was created</Since>
<LockHeld>Whether the session holds a lock</LockHeld>
</Session>
</get-sessions>
</rpc-reply>
For example, to get NETCONF session information:
# Enter XML view.
<Sysname> xml
# Copy the following message to the client to exchange capabilities with the device:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Copy the following message to the client to get the current NETCONF session information on the device:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-sessions/>
</rpc>
If the client receives a message as follows, the operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
<get-sessions>
<Session>
<SessionID>1</SessionID>
<Line>vty0</Line>
<UserName></UserName>
<Since>2011-01-05T00:24:57</Since>
<LockHeld>false</LockHeld>
</Session>
</get-sessions>
</rpc-reply>
The output shows the following information:
· The session ID of an existing NETCONF session is 1.
· The user logged in through user line vty0.
· The login time is 2011-01-05T00:24:57.
· The user does not hold the lock of the configuration.
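Those facts can also be pulled out of the reply programmatically. A hedged Python sketch that parses a reply like the one above (the function name is illustrative):

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def parse_get_sessions(reply_xml):
    # Collect one dict per <Session>, mapping tag names such as
    # SessionID, Line, UserName, Since, and LockHeld to their text.
    root = ET.fromstring(reply_xml)
    sessions = []
    for session in root.iter("{%s}Session" % NC_NS):
        sessions.append({child.tag.split("}")[-1]: (child.text or "")
                         for child in session})
    return sessions
```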
Terminating another NETCONF session
NETCONF allows one client to terminate the NETCONF session of another client. The client whose session is terminated returns to user view.
Configuration procedure
# Copy the following message to the client to terminate the specified NETCONF session:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>
Specified session-ID
</session-id>
</kill-session>
</rpc>
After receiving the kill-session request, the device returns a response in the following format if the <kill-session> operation is successful:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Configuration example
Configuration requirement
The user with session ID 1 terminates the session with session ID 2.
Configuration procedure
# Enter XML view.
<Sysname> xml
# Exchange capabilities.
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>
urn:ietf:params:netconf:base:1.0
</capability>
</capabilities>
</hello>
# Terminate the session with session ID 2.
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<kill-session>
<session-id>2</session-id>
</kill-session>
</rpc>
Verifying the configuration
If the client receives the following text, the NETCONF session with session ID 2 has been terminated, and the client with session ID 2 has returned from XML view to user view:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Returning to the CLI
To return from XML view to the CLI, send the following close-session request:
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<close-session/>
</rpc>
When the device receives the close-session request, it sends the following response and returns to user view of the CLI:
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>
Appendix
Appendix A Supported NETCONF operations
Table 12 lists the NETCONF operations available with Comware 7.
Operation |
Description |
XML example |
get |
Retrieves device configuration and state information. |
To retrieve device configuration and state information for the Syslog module: |
get-config |
Retrieves the non-default configuration data. If non-default configuration data does not exist, the device returns a response with empty data. |
To retrieve non-default configuration data for the interface table: |
get-bulk |
Retrieves a number of data entries (including device configuration and state information) starting from the data entry next to the one with the specified index. |
To retrieve device configuration and state information for all interfaces: |
get-bulk-config |
Retrieves a number of non-default configuration data entries starting from the data entry next to the one with the specified index. |
To retrieve non-default configuration for all interfaces: |
edit-config: incremental |
Adds configuration data to a column without affecting the original data. The incremental attribute applies to a list column such as the vlan permitlist column. You can use the incremental attribute for <edit-config> operations except for the replace operation. Support for the incremental attribute varies by module. For more information, see NETCONF XML API documents. |
To add VLANs 1 through 10 to an untagged VLAN list that has untagged VLANs 12 through 15: |
edit-config: merge |
Changes the running configuration. To use the merge attribute in an <edit-config> operation, you must specify the operation target (on a specified level): · If the specified target exists, the operation directly changes the configuration for the target. · If the specified target does not exist, the operation creates and configures the target. · If the specified target does not exist and it cannot be created, an error message is returned. |
To change the buffer size to 120: |
edit-config: create |
Creates a specified target. To use the create attribute in an <edit-config> operation, you must specify the operation target. · If the table supports target creation and the specified target does not exist, the operation creates and then configures the target. · If the specified target exists, a data-exist error message is returned. |
The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to create. |
edit-config: replace |
Replaces the specified target. · If the specified target exists, the operation replaces the configuration of the target with the configuration carried in the message. · If the specified target does not exist but is allowed to be created, the operation creates the target and then applies the configuration. · If the specified target does not exist and is not allowed to be created, the operation is not performed and an invalid-value error message is returned. |
The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to replace. |
edit-config: remove |
Removes the specified configuration. · If the specified target has only the table index, the operation removes all configuration of the specified target, and the target itself. · If the specified target has the table index and configuration data, the operation removes the specified configuration data of this target. · If the specified target does not exist, or the XML message does not specify any targets, a success message is returned. |
The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to remove. |
edit-config: delete |
Deletes the specified configuration. · If the specified target has only the table index, the operation removes all configuration of the specified target, and the target itself. · If the specified target has the table index and configuration data, the operation removes the specified configuration data of this target. · If the specified target does not exist, an error message is returned, showing that the target does not exist. |
The syntax is the same as the edit-config message with the merge attribute. Change the operation attribute from merge to delete. |
edit-config: default-operation |
Modifies the current configuration of the device using the default operation method. If you do not specify an operation attribute for an <edit-config> message, NETCONF uses the method specified by the <default-operation> element. Your setting of the value for the <default-operation> element takes effect only once. If you specify neither an operation attribute nor a default operation method for an <edit-config> message, merge is always applied. The <default-operation> element has one of the following values: · merge—The default value for the <default-operation> element. · replace—Value used when the operation attribute is not specified and the default operation method is specified as replace. · none—Value used when the operation attribute is not specified and the default operation method is specified as none. If this value is specified, the <edit-config> operation is used only for schema verification rather than issuing a configuration. If the schema verification succeeds, a success message is returned. Otherwise, an error message is returned. |
To issue an empty operation for schema verification purposes: |
edit-config: error-option |
Determines the action to take in case of a configuration error. The error-option element has one of the following values: · stop-on-error—Stops the operation on error and returns an error message. This is the default error-option value. · continue-on-error—Continues the operation on error and returns an error message. · rollback-on-error—Rolls back the configuration. |
To issue the configuration for two interfaces with the error-option element value as continue-on-error: |
edit-config: test-option |
Determines whether to issue a configuration item in an <edit-config> operation. The test-option element has one of the following values: · test-then-set—Performs a validation test before attempting to set. If the validation test fails, the <edit-config> operation is not performed. This is the default test-option value. · set—Directly performs the set operation without the validation test. · test-only—Performs only a validation test without attempting to set. If the validation test succeeds, a successful message is returned. Otherwise, an error message is returned. |
To issue the configuration for an interface for test purposes: |
action |
Issues actions that are not for configuring data, for example, reset action. |
To clear statistics information for all interfaces: |
lock |
Locks configuration data that can be changed by <edit-config> operations. Other configuration data are not limited by the lock operation. After a user locks the configuration, other users cannot use NETCONF or other configuration methods such as CLI and SNMP to configure the device. |
To lock the configuration: |
unlock |
Unlocks the configuration, so NETCONF sessions can change device configuration. When a NETCONF session is terminated, the related locked configuration is also unlocked. |
To unlock the configuration: |
get-sessions |
Retrieves information about all NETCONF sessions in the system. |
To retrieve information about all NETCONF sessions in the system: |
close-session |
Terminates the NETCONF session for the current user, unlocks the configuration, and releases the resources (for example, memory) of this session. This operation logs the current user out of XML view. |
To terminate the NETCONF session for the current user: |
kill-session |
Terminates the NETCONF session for another user. This operation cannot terminate the NETCONF session for the current user. |
To terminate the NETCONF session with session-id 1: |
CLI |
Executes CLI operations. A request message encloses commands in the <CLI> element, and a response message encloses the command output in the <CLI> element. NETCONF supports the following views: · Execution—User view. · Configuration—System view. To execute a command in other views, specify the command for entering the specified view, and then the desired command. |
To execute the display this command in system view: |
save |
Saves the running configuration. You can use the <file> element to specify a file for saving the configuration. If the message does not include the <file> element, the running configuration is automatically saved to the main next-startup configuration file. The OverWrite attribute determines whether the running configuration overwrites the original configuration file when the specified file already exists. |
To save the running configuration to the file test.cfg: |
load |
Loads the configuration. After the device finishes the <load> operation, the configuration in the specified file is merged into the current configuration of the device. |
To merge the configuration in the file a1.cfg to the current configuration of the device: |
rollback |
Rolls back the configuration. To do so, you must specify the configuration file in the <file> element. After the device finishes the <rollback> operation, the running configuration is completely replaced with the configuration in the specified configuration file. |
To roll back the current configuration to the configuration in the file 1A.cfg: |
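To illustrate the <edit-config> attributes described in the table above, the following sketch builds a generic RFC 6241 <edit-config> envelope with a <default-operation> element, using Python's standard xml.etree library. The <top>/<Syslog>/<LogBuffer>/<BufferSize> payload names are hypothetical placeholders; real element names and namespaces come from the NETCONF XML API documents.

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
ET.register_namespace("", NC)

def edit_config_rpc(default_operation, config_payload):
    """Build an <edit-config> request against the <running> datastore.

    default_operation must be merge, replace, or none (see the table above).
    """
    assert default_operation in ("merge", "replace", "none")
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")
    ET.SubElement(edit, f"{{{NC}}}default-operation").text = default_operation
    config = ET.SubElement(edit, f"{{{NC}}}config")
    config.append(config_payload)
    return ET.tostring(rpc, encoding="unicode")

# Hypothetical payload: set the log buffer size to 120 (compare edit-config: merge).
payload = ET.fromstring(
    "<top><Syslog><LogBuffer><BufferSize>120</BufferSize></LogBuffer></Syslog></top>"
)
request = edit_config_rpc("merge", payload)
```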
Configuring EAA
Overview
Embedded Automation Architecture (EAA) is a monitoring framework that enables you to self-define monitored events and actions to take in response to an event. It allows you to create monitor policies by using the CLI or Tcl scripts.
EAA framework
The EAA framework includes a set of event sources, a set of event monitors, a real-time event manager (RTM), and a set of user-defined monitor policies, as shown in Figure 58.
Figure 58 EAA framework
Event sources
Event sources are software or hardware modules that trigger events (see Figure 58).
For example, the CLI module triggers an event when you enter a command. The Syslog module (the information center) triggers an event when it receives a log message.
Event monitors
EAA creates one event monitor to monitor the system for the event specified in each monitor policy. An event monitor notifies the RTM to run the monitor policy when the monitored event occurs.
RTM
RTM manages the creation, state machine, and execution of monitor policies.
EAA monitor policies
A monitor policy specifies the event to monitor and actions to take when the event occurs.
You can configure EAA monitor policies by using the CLI or Tcl.
A monitor policy contains the following elements:
· One event.
· A minimum of one action.
· A minimum of one user role.
· One running time setting.
For more information about these elements, see "Elements in a monitor policy."
Elements in a monitor policy
Event
Table 13 shows types of events that EAA can monitor.
Event type |
Description |
CLI |
CLI event occurs in response to monitored operations performed at the CLI. For example, a command is entered, a question mark (?) is entered, or the Tab key is pressed to complete a command. |
Syslog |
Syslog event occurs when the information center receives the monitored log within a specific period. NOTE: The log that is generated by the EAA RTM does not trigger the monitor policy to run. |
Process |
Process event occurs in response to a state change of the monitored process (such as an exception, shutdown, start, or restart). Both manual and automatic state changes can cause the event to occur. |
Hotplug |
Card hot-swapping event occurs when a card is inserted in or removed from the monitored slot. |
Interface |
Each interface event is associated with two user-defined thresholds: start and restart. An interface event occurs when the monitored interface traffic statistic crosses the start threshold in the following situations: · The statistic crosses the start threshold for the first time. · The statistic crosses the start threshold each time after it crosses the restart threshold. |
SNMP |
Each SNMP event is associated with two user-defined thresholds: start and restart. SNMP event occurs when the monitored MIB variable's value crosses the start threshold in the following situations: · The monitored variable's value crosses the start threshold for the first time. · The monitored variable's value crosses the start threshold each time after it crosses the restart threshold. |
SNMP-Notification |
SNMP-Notification event occurs when the monitored MIB variable's value in an SNMP notification matches the specified condition. For example, the broadcast traffic rate on an Ethernet interface reaches or exceeds 30%. |
Track |
Track event occurs when the state of the track entry changes from Positive to Negative or Negative to Positive. If you specify multiple track entries for a policy, EAA triggers the policy only when the state of all the track entries changes from Positive to Negative or Negative to Positive. If you set a suppress time for a policy, the timer starts when the policy is triggered. The system does not process the messages that report the track entry Positive-to-Negative (Negative-to-Positive) state change until the timer times out. |
Action
You can create a series of order-dependent actions to take in response to the event specified in the monitor policy.
The following are available actions:
· Executing a command.
· Sending a log.
· Enabling an active/standby switchover.
· Executing a reboot without saving the running configuration.
User role
For EAA to execute an action in a monitor policy, you must assign the policy a user role that has access to the action-specific commands and resources. If EAA lacks access to an action-specific command or resource, EAA performs neither that action nor any subsequent actions.
For example, a monitor policy has four actions numbered from 1 to 4. The policy has user roles that are required for performing actions 1, 3, and 4. However, it does not have the user role required for performing action 2. When the policy is triggered, EAA executes only action 1.
For more information about user roles, see RBAC in Fundamentals Configuration Guide.
Runtime
Policy runtime limits the amount of time that the monitor policy can run from the time it is triggered. This setting prevents system resources from being occupied by incorrectly defined policies.
EAA environment variables
EAA environment variables decouple the configuration of action arguments from the monitor policy so you can modify a policy easily.
An EAA environment variable is defined as a <variable_name variable_value> pair and can be used in different policies. When you define an action, you can enter a variable name with a leading dollar sign ($variable_name). EAA will replace the variable name with the variable value when it performs the action.
To change the value for an action argument, modify the value specified in the variable pair instead of editing each affected monitor policy.
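The $variable_name expansion behaves like shell-style templating. The following is a minimal Python sketch of the idea (not EAA itself); the loopback0IP variable name is a hypothetical example:

```python
from string import Template

# The environment table decouples argument values from the action definitions.
environment = {"loopback0IP": "1.1.1.1"}

# An action defined with a $variable reference ...
action = Template("ip address $loopback0IP 24")

# ... is expanded from the environment table when the action is performed.
command = action.substitute(environment)
print(command)  # ip address 1.1.1.1 24
```

Changing the value in the environment table changes every action that references the variable, without editing the policies themselves.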
EAA environment variables include system-defined variables and user-defined variables.
System-defined variables
System-defined variables are provided by default, and they cannot be created, deleted, or modified by users. System-defined variable names start with an underscore (_) sign. The variable values are set automatically depending on the event setting in the policy that uses the variables.
System-defined variables include the following types:
· Public variable—Available for any events.
· Event-specific variable—Available only for a type of event.
Table 14 shows all system-defined variables.
Table 14 System-defined EAA environment variables by event type
Variable name |
Description |
Any event: |
|
_event_id |
Event ID. |
_event_type |
Event type. |
_event_type_string |
Event type description. |
_event_time |
Time when the event occurs. |
_event_severity |
Severity level of an event. |
CLI: |
|
_cmd |
Commands that are matched. |
Syslog: |
|
_syslog_pattern |
Log message content. |
Hotplug: |
|
_slot |
ID of the slot where card hot-swapping occurs. |
Interface: |
|
_ifname |
Interface name. |
SNMP: |
|
_oid |
OID of the MIB variable where an SNMP operation is performed. |
_oid_value |
Value of the MIB variable. |
SNMP-Notification: |
|
_oid |
OID that is included in the SNMP notification. |
Process: |
|
_process_name |
Process name. |
User-defined variables
You can use user-defined variables for all types of events.
User-defined variable names can contain letters, digits, and the underscore sign (_), except that the underscore sign cannot be the leading character.
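The naming rule can be expressed as a simple validity check. The following Python sketch is only an illustration of the rule, not part of EAA:

```python
import re

# Letters, digits, and underscores are allowed, but the leading character
# must not be an underscore (leading underscores mark system-defined variables).
VALID_NAME = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_]*$")

def is_valid_user_variable(name):
    return bool(VALID_NAME.match(name))

print(is_valid_user_variable("loopback0IP"))  # True
print(is_valid_user_variable("_event_id"))    # False: system-defined prefix
```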
Configuring a user-defined EAA environment variable
Configure a user-defined EAA environment variable before you use it in an action.
To configure a user-defined EAA environment variable:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure a user-defined EAA environment variable. |
rtm environment var-name var-value |
By default, no user-defined environment variables exist. The system provides the system-defined variables in Table 14. |
Configuring a monitor policy
You can configure a monitor policy by using the CLI or Tcl.
Configuration restrictions and guidelines
When you configure monitor policies, follow these restrictions and guidelines:
· Make sure the actions in different policies do not conflict. Policy execution result will be unpredictable if policies that conflict in actions are running concurrently.
· You can assign the same policy name to a CLI-defined policy and a Tcl-defined policy. However, you cannot assign the same name to policies that are the same type.
· The system executes the actions in a policy in ascending order of action IDs. When you add actions to a policy, you must make sure the execution order is correct.
Configuring a monitor policy from the CLI
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Set the size for the EAA-monitored log buffer. |
rtm event syslog buffer-size buffer-size |
The default size of the EAA-monitored log buffer is 50000. |
3. Create a CLI-defined policy and enter its view. |
rtm cli-policy policy-name |
By default, no CLI-defined monitor policies exist. If a CLI-defined policy already exists, this command enters CLI-defined policy view. |
4. Configure an event in the policy. |
· Configure a CLI event.
· (In standalone mode.) Configure a card hot-swapping event.
· (In IRF mode.) Configure a card hot-swapping event.
· Configure an interface event.
· (In standalone mode.) Configure a process event.
· (In IRF mode.) Configure a process event.
· Configure an SNMP event.
· Configure an SNMP-Notification event.
· Configure a Syslog event.
· Configure a track event. |
By default, a monitor policy does not contain an event. You can configure only one event in a monitor policy. If the monitor policy already contains an event, the new event overrides the old event. |
5. Configure the actions to take when the event occurs. |
· Configure the action to execute a command.
· (In standalone mode.) Configure a reboot action.
· (In IRF mode.) Configure a reboot action.
· Configure an active/standby switchover action.
· Configure a logging action. |
By default, a monitor policy does not contain any actions. Repeat this step to add a maximum of 232 actions to the policy. When you define an action, you can specify a value or specify a variable name in $variable_name format for an argument. |
6. (Optional.) Assign a user role to the policy. |
user-role role-name |
By default, a monitor policy contains user roles that its creator had at the time of policy creation. A monitor policy supports a maximum of 64 valid user roles. User roles added after this limit is reached do not take effect. An EAA policy cannot have both the security-audit user role and any other user roles. Any previously assigned user roles are automatically removed when you assign the security-audit user role to the policy. The previously assigned security-audit user role is automatically removed when you assign any other user roles to the policy. |
7. (Optional.) Configure the policy runtime. |
running-time time |
The default runtime is 20 seconds. |
8. Enable the policy. |
commit |
By default, CLI-defined policies are not enabled. A CLI-defined policy can take effect only after you perform this step. |
Configuring a monitor policy by using Tcl
Step |
Command |
Remarks |
1. Edit a Tcl script file (see Table 15). |
N/A |
The supported Tcl version is 8.5.8. |
2. Download the file to the device by using FTP or TFTP. |
N/A |
For more information about using FTP and TFTP, see Fundamentals Configuration Guide. |
3. Enter system view. |
system-view |
N/A |
4. Create a Tcl-defined policy and bind it to the Tcl script file. |
rtm tcl-policy policy-name tcl-filename |
By default, no Tcl policies exist. Make sure the script file is saved on all MPUs. This practice ensures that the policy can run correctly after an active/standby or master/standby switchover occurs or the MPU where the script file resides fails or is removed. This step enables the Tcl-defined policy. To revise the Tcl script of a policy, you must suspend all monitor policies first, and then resume the policies after you finish revising the script. The system cannot execute a Tcl-defined policy if you edit its Tcl script without suspending policies. |
Write a Tcl script in two lines for a monitor policy, as shown in Table 15.
Table 15 Tcl script requirements
Line |
Content |
Requirements |
Line 1 |
Event, user roles, and policy runtime |
This line must use the following format: ::comware::rtm::event_register eventname arg1 arg2 arg3 … user-role role-name1 [ user-role role-name2 | … ] [ running-time running-time ]. The arg1 arg2 arg3 … arguments represent event matching rules. If an argument value contains spaces, use double quotation marks ("") to enclose the value. For example, "a b c". |
Line 2 |
Actions |
When you define an action, you can specify a value or specify a variable name in $variable_name format for an argument. The following actions are available: · Standard Tcl commands. · EAA-specific Tcl commands. · Commands supported by the device. |
Suspending monitor policies
This task suspends all CLI-defined and Tcl-defined monitor policies except for the policies that are running.
To suspend monitor policies:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Suspend monitor policies. |
rtm scheduler suspend |
To resume monitor policies, use the undo rtm scheduler suspend command. |
Displaying and maintaining EAA settings
Execute display commands in any view, except for the display this command, which must be executed in CLI-defined monitor policy view.
Task |
Command |
Display user-defined EAA environment variables. |
display rtm environment [ var-name ] |
Display EAA monitor policies. |
display rtm policy { active | registered [ verbose ] } [ policy-name ] |
Display the running configuration of all CLI-defined monitor policies. |
display current-configuration |
Display the running configuration of a CLI-defined monitor policy in CLI-defined monitor policy view. |
display this |
EAA configuration examples
CLI event monitor policy configuration example
Network requirements
Configure a policy from the CLI to monitor the event that occurs when a question mark (?) is entered at a command line that contains letters and digits.
When the event occurs, the system executes the commands in the policy and sends the log message "hello world" to the information center.
Configuration procedure
# Create CLI-defined policy test and enter its view.
<Sysname> system-view
[Sysname] rtm cli-policy test
# Add a CLI event that occurs when a question mark (?) is entered at any command line that contains letters and digits.
[Sysname-rtm-test] event cli async mode help pattern [a-zA-Z0-9]
# Add an action that sends the message "hello world" with a priority of 4 from the logging facility local3 when the event occurs.
[Sysname-rtm-test] action 0 syslog priority 4 facility local3 msg "hello world"
# Add an action that enters system view when the event occurs.
[Sysname-rtm-test] action 2 cli system-view
# Add an action that creates VLAN 2 when the event occurs.
[Sysname-rtm-test] action 3 cli vlan 2
# Set the policy runtime to 2000 seconds. The system stops executing the policy and displays an execution failure message if it fails to complete policy execution within 2000 seconds.
[Sysname-rtm-test] running-time 2000
# Specify the network-admin user role for executing the policy.
[Sysname-rtm-test] user-role network-admin
# Enable the policy.
[Sysname-rtm-test] commit
Verifying the configuration
# Display information about the policy.
[Sysname-rtm-test] display rtm policy registered
Total number: 1
Type Event TimeRegistered PolicyName
CLI CLI Aug 29 14:56:50 2013 test
# Enable the information center to output log messages to the current monitoring terminal.
[Sysname-rtm-test] return
<Sysname> terminal monitor
# Enter a question mark (?) at a command line that contains a letter d. Verify that the system displays the "hello world" message and a policy successfully executed message on the terminal screen.
<Sysname> d?
debugging
delete
diagnostic-logfile
dir
display
<Sysname>d%May 7 02:10:03:218 2013 Sysname RTM/4/RTM_ACTION: "hello world"
%May 7 02:10:04:176 2013 Sysname RTM/6/RTM_POLICY: CLI policy test is running successfully.
Track event monitor policy configuration example
Network requirements
As shown in Figure 59, Device A has established BGP sessions with Device D and Device E. Traffic from Device D and Device E to the Internet is forwarded through Device A.
Configure a CLI-defined EAA monitor policy on Device A to disconnect the sessions with Device D and Device E when HundredGigE 1/0/1 connected to Device C is down. In this way, traffic from Device D and Device E to the Internet can be forwarded through Device B.
Configuration procedure
# Display BGP peer information for Device A.
<Sysname> display bgp peer ipv4
BGP local router ID: 1.1.1.1
Local AS number: 100
Total number of peers: 3 Peers in established state: 3
* - Dynamically created peer
Peer AS MsgRcvd MsgSent OutQ PrefRcv Up/Down State
10.2.1.2 200 13 16 0 0 00:16:12 Established
10.3.1.2 300 13 16 0 0 00:10:34 Established
10.3.2.2 300 13 16 0 0 00:10:38 Established
# Create track entry 1 and associate it with the link state of HundredGigE 1/0/1.
<Sysname> system-view
[Sysname] track 1 interface hundredgige 1/0/1
# Configure a CLI-defined EAA monitor policy so that the system automatically disables session establishment with Device D and Device E when HundredGigE 1/0/1 is down.
[Sysname] rtm cli-policy test
[Sysname-rtm-test] event track 1 state negative
[Sysname-rtm-test] action 0 cli system-view
[Sysname-rtm-test] action 1 cli bgp 100
[Sysname-rtm-test] action 2 cli peer 10.3.1.2 ignore
[Sysname-rtm-test] action 3 cli peer 10.3.2.2 ignore
[Sysname-rtm-test] user-role network-admin
[Sysname-rtm-test] commit
[Sysname-rtm-test] quit
Verifying the configuration
# Shut down HundredGigE 1/0/1.
[Sysname] interface hundredgige 1/0/1
[Sysname-HundredGigE1/0/1] shutdown
# Display BGP peer information.
<Sysname> display bgp peer ipv4
BGP local router ID: 1.1.1.1
Local AS number: 100
Total number of peers: 0 Peers in established state: 0
* - Dynamically created peer
Peer AS MsgRcvd MsgSent OutQ PrefRcv Up/Down State
The command output shows that Device A does not have any BGP peers.
CLI-defined policy with EAA environment variables configuration example
Network requirements
Define an environment variable to match the IP address 1.1.1.1.
Configure a policy from the CLI to monitor the event that occurs when a command line that contains loopback0 is executed. In the policy, use the environment variable for IP address assignment.
When the event occurs, the system performs the following tasks:
· Creates the Loopback 0 interface.
· Assigns 1.1.1.1/24 to the interface.
· Sends the matching command line to the information center.
Configuration procedure
# Configure an EAA environment variable for IP address assignment. The variable name is loopback0IP, and the variable value is 1.1.1.1.
<Sysname> system-view
[Sysname] rtm environment loopback0IP 1.1.1.1
# Create the CLI-defined policy test and enter its view.
[Sysname] rtm cli-policy test
# Add a CLI event that occurs when a command line that contains loopback0 is executed.
[Sysname-rtm-test] event cli async mode execute pattern loopback0
# Add an action that enters system view when the event occurs.
[Sysname-rtm-test] action 0 cli system-view
# Add an action that creates the interface Loopback 0 and enters loopback interface view.
[Sysname-rtm-test] action 1 cli interface loopback 0
# Add an action that assigns the IP address 1.1.1.1 to Loopback 0. The loopback0IP variable is used in the action for IP address assignment.
[Sysname-rtm-test] action 2 cli ip address $loopback0IP 24
# Add an action that sends the matching loopback0 command with a priority of 0 from the logging facility local7 when the event occurs.
[Sysname-rtm-test] action 3 syslog priority 0 facility local7 msg $_cmd
# Specify the network-admin user role for executing the policy.
[Sysname-rtm-test] user-role network-admin
# Enable the policy.
[Sysname-rtm-test] commit
[Sysname-rtm-test] return
<Sysname>
Verifying the configuration
# Enable the information center to output log messages to the current monitoring terminal.
<Sysname> terminal monitor
# Execute the loopback0 command. Verify that the system displays the loopback0 message and a policy successfully executed message on the terminal screen.
<Sysname> loopback0
<Sysname>
%Jan 3 09:46:10:592 2014 Sysname RTM/0/RTM_ACTION: loopback0
%Jan 3 09:46:10:613 2014 Sysname RTM/6/RTM_POLICY: CLI policy test is running successfully.
# Verify that Loopback 0 has been created and assigned the IP address 1.1.1.1.
<Sysname> display interface loopback brief
Brief information on interfaces in route mode:
Link: ADM - administratively down; Stby - standby
Protocol: (s) - spoofing
Interface Link Protocol Primary IP Description
Loop0 UP UP(s) 1.1.1.1
<Sysname>
Tcl-defined policy configuration example
Network requirements
As shown in Figure 60, use Tcl to create a monitor policy on the device. This policy must meet the following requirements:
· EAA sends the log message "rtm_tcl_test is running" when a command that contains the display this string is entered.
· The system executes the command only after it executes the policy successfully.
Configuration procedure
# Edit a Tcl script file (rtm_tcl_test.tcl, in this example) for EAA to send the message "rtm_tcl_test is running" when a command that contains the display this string is executed.
::comware::rtm::event_register cli sync mode execute pattern display this user-role network-admin
::comware::rtm::action syslog priority 1 facility local4 msg rtm_tcl_test is running
# Download the Tcl script file from the TFTP server at 1.2.1.1.
<Sysname> tftp 1.2.1.1 get rtm_tcl_test.tcl
# Create Tcl-defined policy test and bind it to the Tcl script file.
<Sysname> system-view
[Sysname] rtm tcl-policy test rtm_tcl_test.tcl
[Sysname] quit
Verifying the configuration
# Display information about the policy.
<Sysname> display rtm policy registered
Total number: 1
Type Event TimeRegistered PolicyName
TCL CLI Aug 29 14:54:50 2013 test
# Enable the information center to output log messages to the current monitoring terminal.
<Sysname> terminal monitor
# Execute the display this command. Verify that the system displays the rtm_tcl_test is running message and a message indicating that the policy is executed successfully.
<Sysname> display this
#
return
<Sysname>%Jun 4 15:02:30:354 2013 Sysname RTM/1/RTM_ACTION: rtm_tcl_test is running
%Jun 4 15:02:30:382 2013 Sysname RTM/6/RTM_POLICY: TCL policy test is running successfully.
Configuring samplers
A sampler selects a packet from each group of sequential packets and sends the packet to other service modules for processing. Sampling is useful when you want to limit the volume of traffic to be analyzed. The sampled data is statistically accurate, and sampling decreases the impact on the forwarding capacity of the device.
The sampler supports only the random sampling mode. In random mode, any packet in a group of sequential packets might be selected in each sampling.
A sampler can sample packets for NetStream. Then, only the sampled packets are sent to and processed by the NetStream module. For more information about NetStream, see "Configuring NetStream."
Creating a sampler
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a sampler. |
sampler sampler-name mode random packet-interval n-power rate |
By default, no samplers exist. |
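For example, the following commands create a sampler in random mode that selects one packet out of every 64 (2 to the 6th power) packets. The sampler name samp1 is an example only:
# Create sampler samp1 in random sampling mode, and set the rate to 6.
<Sysname> system-view
[Sysname] sampler samp1 mode random packet-interval n-power 6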
Displaying and maintaining a sampler
Execute display commands in any view.
Task |
Command |
(In standalone mode.) Display configuration information about the sampler. |
display sampler [ sampler-name ] [ slot slot-number ] |
(In IRF mode.) Display configuration information about the sampler. |
display sampler [ sampler-name ] [ chassis chassis-number slot slot-number ] |
Sampler configuration example for IPv4 NetStream
Network requirements
As shown in Figure 61, configure samplers and NetStream as follows:
· Configure IPv4 NetStream on the device to collect statistics on incoming and outgoing traffic.
· Send the NetStream data to port 5000 on the NetStream server.
· Configure random sampling in the inbound direction on HundredGigE 1/0/1 to select one packet from each group of 256 packets.
Configuration procedure
# Create sampler 256 in random sampling mode, and set the rate to 8. One packet out of every 256 (2 to the 8th power) packets is randomly selected.
<Switch> system-view
[Switch] sampler 256 mode random packet-interval n-power 8
# Enable IPv4 NetStream to use sampler 256 to collect statistics about the incoming traffic on HundredGigE 1/0/1.
[Switch] interface hundredgige 1/0/1
[Switch-HundredGigE1/0/1] ip netstream inbound
[Switch-HundredGigE1/0/1] ip netstream inbound sampler 256
[Switch-HundredGigE1/0/1] quit
# Configure the address and port number of the NetStream server as the destination for the NetStream data export. Use the default source interface for the NetStream data export.
[Switch] ip netstream export host 12.110.2.2 5000
Verifying the configuration
# Display configuration information for sampler 256.
[Switch] display sampler 256
Sampler name: 256
Mode: Random; Packet-interval: 8; IsNpower: Y
Configuring port mirroring
Overview
Port mirroring copies the packets passing through a port or CPU to a port that connects to a data monitoring device for packet analysis.
Terminology
The following terms are used in port mirroring configuration.
Mirroring source
The mirroring sources can be one or more monitored ports or CPUs. The monitored ports and CPUs are called source ports and source CPUs, respectively.
Packets passing through mirroring sources are copied to a port connecting to a data monitoring device for packet analysis. The copies are called mirrored packets.
Source device
The device where the mirroring sources reside is called a source device.
Mirroring destination
The mirroring destination connects to a data monitoring device and is the destination port (also known as the monitor port) of mirrored packets. Mirrored packets are sent out of the monitor port to the data monitoring device.
A monitor port might receive multiple copies of a packet when it monitors multiple mirroring sources. For example, two copies of a packet are received on Port 1 when the following conditions exist:
· Port 1 is monitoring bidirectional traffic of Port 2 and Port 3 on the same device.
· The packet travels from Port 2 to Port 3.
Destination device
The device where the monitor port resides is called the destination device.
Mirroring direction
The mirroring direction specifies the direction of the traffic that is copied on a mirroring source.
· Inbound—Copies packets received.
· Outbound—Copies packets sent.
· Bidirectional—Copies packets received and sent.
|
NOTE:
· For inbound traffic mirroring, the VLAN tag in the original packet is copied to the mirrored packet.
· For outbound traffic mirroring, the mirrored packet carries the VLAN tag of the egress interface. |
Mirroring group
Port mirroring is implemented through mirroring groups, which include local, remote source, and remote destination groups. For more information about the mirroring groups, see "Port mirroring classification and implementation."
Reflector port, egress port, and remote probe VLAN
Reflector ports, remote probe VLANs, and egress ports are used for Layer 2 remote port mirroring. The remote probe VLAN is a dedicated VLAN for transmitting mirrored packets to the destination device. Both the reflector port and egress port reside on a source device and send mirrored packets to the remote probe VLAN. For more information about the reflector port, egress port, remote probe VLAN, and Layer 2 remote port mirroring, see "Port mirroring classification and implementation."
|
NOTE: On port mirroring devices, all ports except source, destination, reflector, and egress ports are called common ports. |
Port mirroring classification and implementation
Port mirroring includes local port mirroring and remote port mirroring.
· Local port mirroring—The mirroring sources and the mirroring destination are on the same device.
· Remote port mirroring—The mirroring sources and the mirroring destination are on different devices.
|
NOTE: On an IRF fabric, mirrored packets will be forwarded between IRF member devices if the mirroring-related ports are not on the same IRF member device. In this case, the IRF links might be overloaded. |
Local port mirroring
In local port mirroring, the following conditions exist:
· The source device is directly connected to a data monitoring device.
· The source device acts as the destination device to forward mirrored packets to the data monitoring device.
A local mirroring group is a mirroring group that contains the mirroring sources and the mirroring destination on the same device.
|
NOTE: A local mirroring group supports multicard mirroring. The mirroring sources and destination can reside on different cards. |
Figure 62 Local port mirroring implementation
As shown in Figure 62, the source port (HundredGigE 1/0/1) and the monitor port (HundredGigE 1/0/2) reside on the same device. Packets received on HundredGigE 1/0/1 are copied to HundredGigE 1/0/2. HundredGigE 1/0/2 then forwards the packets to the data monitoring device for analysis.
Remote port mirroring
In remote port mirroring, the following conditions exist:
· The source device is not directly connected to a data monitoring device.
· The source device copies mirrored packets to the destination device, which forwards them to the data monitoring device.
· The mirroring sources and the mirroring destination reside on different devices and are in different mirroring groups.
A remote source group is a mirroring group that contains the mirroring sources. A remote destination group is a mirroring group that contains the mirroring destination. Intermediate devices are the devices between the source device and the destination device.
Remote port mirroring includes Layer 2 and Layer 3 remote port mirroring.
· Layer 2 remote port mirroring—The mirroring sources and the mirroring destination are located on different devices on the same Layer 2 network.
Layer 2 remote port mirroring can be implemented when a reflector port or an egress port is available on the source device. The method to use the reflector port and the method to use the egress port are called reflector port method and egress port method, respectively.
? Reflector port method—Packets are mirrored as follows:
- The source device copies packets received on the mirroring sources to the reflector port.
- The reflector port broadcasts the mirrored packets in the remote probe VLAN.
- The intermediate devices transmit the mirrored packets to the destination device through the remote probe VLAN.
- Upon receiving the mirrored packets, the destination device determines whether the ID of the mirrored packets is the same as the remote probe VLAN ID. If the two VLAN IDs match, the destination device forwards the mirrored packets to the data monitoring device through the monitor port.
Figure 63 Layer 2 remote port mirroring implementation through the reflector port method
? Egress port method—Packets are mirrored as follows:
- The source device copies packets received on the mirroring sources to the egress port.
- The egress port forwards the mirrored packets to the intermediate devices.
- The intermediate devices flood the mirrored packets in the remote probe VLAN and transmit the mirrored packets to the destination device.
- Upon receiving the mirrored packets, the destination device determines whether the ID of the mirrored packets is the same as the remote probe VLAN ID. If the two VLAN IDs match, the destination device forwards the mirrored packets to the data monitoring device through the monitor port.
Figure 64 Layer 2 remote port mirroring implementation through the egress port method
In the reflector port method, the reflector port broadcasts mirrored packets in the remote probe VLAN. By assigning a non-source port on the source device to the remote probe VLAN, you can use the reflector port method to implement local port mirroring. The egress port method cannot implement local port mirroring in this way.
To ensure Layer 2 forwarding of the mirrored packets, assign the ports that connect intermediate devices to the source and destination devices to the remote probe VLAN.
· Layer 3 remote port mirroring—The mirroring sources and destination are separated by IP networks.
Layer 3 remote port mirroring is implemented through creating a local mirroring group on both the source device and the destination device. For example, in a network as shown in Figure 65, Layer 3 remote port mirroring works in the following flow:
a. The source device sends one copy of a packet received on the source port (HundredGigE 1/0/1) to the tunnel interface.
The tunnel interface acts as the monitor port in the local mirroring group created on the source device.
b. The tunnel interface on the source device forwards the mirrored packet to the tunnel interface on the destination device through the GRE tunnel.
c. The destination device receives the mirrored packet from the physical interface of the tunnel interface.
The tunnel interface acts as the source port in the local mirroring group created on the destination device.
d. The physical interface of the tunnel interface sends one copy of the packet to the monitor port (HundredGigE 1/0/2).
e. HundredGigE 1/0/2 forwards the packet to the data monitoring device.
For more information about GRE tunnels and tunnel interfaces, see Layer 3—IP Services Configuration Guide.
Figure 65 Layer 3 remote port mirroring implementation
Configuring local port mirroring
A local mirroring group takes effect only when you configure the monitor port and the source ports or source CPUs for the local mirroring group.
For mirroring to take effect on F series cards of an IRF fabric, the mirroring source ports and monitor ports must reside on the same IRF member device.
Local port mirroring configuration task list
Tasks at a glance |
1. (Required.) Creating a local mirroring group |
2. (Required.) Perform either or both of the following tasks:
? Configuring source ports for the local mirroring group
? Configuring source CPUs for the local mirroring group |
3. (Required.) Configuring the monitor port for the local mirroring group |
Creating a local mirroring group
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a local mirroring group. |
mirroring-group group-id local [ sampler sampler-name ] |
By default, no local mirroring groups exist. |
Configuring source ports for the local mirroring group
To configure source ports for a local mirroring group, use one of the following methods:
· Assign a list of source ports to the mirroring group in system view.
· Assign a port to the mirroring group as a source port in interface view.
To assign multiple ports to the mirroring group as source ports in interface view, repeat the operation.
Configuration restrictions and guidelines
When you configure source ports for a local mirroring group, follow these restrictions and guidelines:
· A mirroring group can contain multiple source ports.
· A Layer 2 or Layer 3 aggregate interface cannot be configured as a source port for a local mirroring group.
· A source port cannot be configured as a reflector port, egress port, or monitor port.
· A source port can belong to only one mirroring group.
Configuring source ports in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure source ports for a local mirroring group. |
mirroring-group group-id mirroring-port interface-list { both | inbound | outbound } |
By default, no source port is configured for a local mirroring group. |
Configuring source ports in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as a source port for a local mirroring group. |
mirroring-group group-id mirroring-port { both | inbound | outbound } |
By default, a port does not act as a source port for any local mirroring groups. |
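The two methods can be sketched as follows. The group ID and interface names are examples only, and the local mirroring group must already exist:
# Assign HundredGigE 1/0/1 to local mirroring group 1 as a source port in system view, monitoring bidirectional traffic.
<Sysname> system-view
[Sysname] mirroring-group 1 mirroring-port hundredgige 1/0/1 both
# Alternatively, assign HundredGigE 1/0/2 to local mirroring group 1 as a source port in interface view, monitoring inbound traffic only.
[Sysname] interface hundredgige 1/0/2
[Sysname-HundredGigE1/0/2] mirroring-group 1 mirroring-port inbound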
Configuring source CPUs for the local mirroring group
A mirroring group can contain multiple source CPUs.
The device supports mirroring only inbound traffic of a source CPU.
To configure source CPUs for a local mirroring group:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure source CPUs for a local mirroring group. |
In standalone mode:
In IRF mode: |
By default, no source CPU is configured for a local mirroring group. |
Configuring the monitor port for the local mirroring group
To configure the monitor port for a mirroring group, use one of the following methods:
· Configure the monitor port for the mirroring group in system view.
· Assign a port to the mirroring group as the monitor port in interface view.
Configuration restrictions and guidelines
When you configure the monitor port for a local mirroring group, follow these restrictions and guidelines:
· Do not enable the spanning tree feature on the monitor port.
· Multiple monitor ports can be configured for one local mirroring group.
· For a Layer 2 aggregate interface configured as the monitor port of a mirroring group, do not configure its member ports as source ports of the mirroring group.
· Use a monitor port only for port mirroring, so the data monitoring device receives only the mirrored traffic.
Configuring the monitor port in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the monitor port for a local mirroring group. |
mirroring-group group-id monitor-port interface-list |
By default, no monitor port is configured for a local mirroring group. |
Configuring the monitor port in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as the monitor port for a mirroring group. |
mirroring-group group-id monitor-port |
By default, a port does not act as the monitor port for any local mirroring groups. |
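Putting the tasks together, a minimal local port mirroring configuration might look like the following sketch. The group ID and interface names are assumptions:
# Create local mirroring group 1.
<Sysname> system-view
[Sysname] mirroring-group 1 local
# Configure HundredGigE 1/0/1 as a source port of the group to monitor its bidirectional traffic.
[Sysname] mirroring-group 1 mirroring-port hundredgige 1/0/1 both
# Configure HundredGigE 1/0/2, which connects to the data monitoring device, as the monitor port.
[Sysname] mirroring-group 1 monitor-port hundredgige 1/0/2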
Configuring Layer 2 remote port mirroring
To configure Layer 2 remote port mirroring, perform the following tasks:
· Configure a remote source group on the source device.
· Configure a cooperating remote destination group on the destination device.
· If intermediate devices exist, configure the following devices and ports to allow the remote probe VLAN to pass through.
? Intermediate devices.
? Ports connecting the source and destination devices to the intermediate devices.
When you configure Layer 2 remote port mirroring, follow these restrictions and guidelines:
· The egress port must be assigned to the remote probe VLAN. The reflector port is not necessarily assigned to the remote probe VLAN.
· For a mirrored packet to successfully arrive at the remote destination device, make sure its VLAN ID is not removed or changed.
· You can mirror the bidirectional traffic of a source port only in the following situations:
? The source and destination devices are directly connected and they are both of the current device series.
? The source and destination devices are both of the current device series and MAC address learning is disabled for the remote probe VLAN on the intermediate devices.
? Only the source device is of the current device series and MAC address learning is disabled for the remote probe VLAN on the intermediate devices and the destination device.
If MAC address learning cannot be disabled for a VLAN on a device, disable MAC address learning on the ingress port of the mirrored traffic. A port can only forward mirrored traffic after you disable MAC address learning on the port.
· Do not configure both MVRP and Layer 2 remote port mirroring. Otherwise, MVRP might register the remote probe VLAN with incorrect ports, which would cause the monitor port to receive undesired copies. For more information about MVRP, see Layer 2—LAN Switching Configuration Guide.
· As a best practice, configure devices in the order of the destination device, the intermediate devices, and the source device.
For mirroring to take effect on F series cards of an IRF fabric, the following ports must reside on the same IRF member device:
· For Layer 2 remote port mirroring in reflector port mode, the mirroring source ports and the reflector port on the source device.
· For Layer 2 remote port mirroring in egress port mode, the mirroring source ports and the egress port on the source device.
Layer 2 remote port mirroring with reflector port configuration task list
Layer 2 remote port mirroring with egress port configuration task list
Configuring a remote destination group on the destination device
Creating a remote destination group
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a remote destination group. |
mirroring-group group-id remote-destination |
By default, no remote destination groups exist. |
Configuring the monitor port for a remote destination group
To configure the monitor port for a mirroring group, use one of the following methods:
· Configure the monitor port for the mirroring group in system view.
· Assign a port to the mirroring group as the monitor port in interface view.
When you configure the monitor port for a remote destination group, follow these restrictions and guidelines:
· Do not enable the spanning tree feature on the monitor port.
· Only one monitor port can be configured for a remote destination group.
· For a Layer 2 aggregate interface configured as the monitor port of a mirroring group, do not configure its member ports as source ports of the mirroring group.
· Use a monitor port only for port mirroring, so the data monitoring device receives only the mirrored traffic.
· A monitor port can belong to only one mirroring group.
Configuring the monitor port for a remote destination group in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the monitor port for a remote destination group. |
mirroring-group group-id monitor-port interface-list |
By default, no monitor port is configured for a remote destination group. |
Configuring the monitor port for a remote destination group in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as the monitor port for a remote destination group. |
mirroring-group group-id monitor-port |
By default, a port does not act as the monitor port for any remote destination groups. |
Configuring the remote probe VLAN for a remote destination group
When you configure the remote probe VLAN for a remote destination group, follow these restrictions and guidelines:
· Only an existing static VLAN can be configured as a remote probe VLAN.
· When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring exclusively.
· Configure the same remote probe VLAN for the remote groups on the source and destination devices.
To configure the remote probe VLAN for a remote destination group:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the remote probe VLAN for a remote destination group. |
mirroring-group group-id remote-probe vlan vlan-id |
By default, no remote probe VLAN is configured for a remote destination group. |
Assigning the monitor port to the remote probe VLAN
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter the interface view of the monitor port. |
interface interface-type interface-number |
N/A |
3. Assign the port to the remote probe VLAN. |
· For an access port: port access vlan
· For a trunk port: port trunk permit vlan
· For a hybrid port: port hybrid vlan |
For more information about the port access vlan, port trunk permit vlan, and port hybrid vlan commands, see Layer 2—LAN Switching Command Reference. |
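The destination-device tasks above can be sketched as follows. The group ID, VLAN ID, and interface name are examples only; VLAN 10 must be an existing static VLAN dedicated to port mirroring:
# Create VLAN 10 to be used as the remote probe VLAN.
<Sysname> system-view
[Sysname] vlan 10
[Sysname-vlan10] quit
# Create remote destination group 2 and configure VLAN 10 as its remote probe VLAN.
[Sysname] mirroring-group 2 remote-destination
[Sysname] mirroring-group 2 remote-probe vlan 10
# Configure HundredGigE 1/0/2 as the monitor port, and assign it to the remote probe VLAN as an access port.
[Sysname] mirroring-group 2 monitor-port hundredgige 1/0/2
[Sysname] interface hundredgige 1/0/2
[Sysname-HundredGigE1/0/2] port access vlan 10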
Configuring a remote source group on the source device
Creating a remote source group
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a remote source group. |
mirroring-group group-id remote-source |
By default, no remote source groups exist. |
Configuring source ports for a remote source group
To configure source ports for a mirroring group, use one of the following methods:
· Assign a list of source ports to the mirroring group in system view.
· Assign a port to the mirroring group as a source port in interface view.
To assign multiple ports to the mirroring group as source ports in interface view, repeat the operation.
When you configure source ports for a remote source group, follow these restrictions and guidelines:
· Do not assign a source port of a mirroring group to the remote probe VLAN of the mirroring group.
· A mirroring group can contain multiple source ports.
· A Layer 2 or Layer 3 aggregate interface cannot be configured as a source port for a remote source group.
· A source port cannot be configured as a reflector port, monitor port, or egress port.
· A source port can belong to only one mirroring group.
Configuring source ports for a remote source group in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure source ports for a remote source group. |
mirroring-group group-id mirroring-port interface-list { both | inbound | outbound } |
By default, no source port is configured for a remote source group. |
Configuring a source port for a remote source group in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as a source port for a remote source group. |
mirroring-group group-id mirroring-port { both | inbound | outbound } |
By default, a port does not act as a source port for any remote source groups. |
Configuring source CPUs for a remote source group
A mirroring group can contain multiple source CPUs.
The device supports mirroring only inbound traffic of a source CPU.
To configure source CPUs for a remote source group:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure source CPUs for a remote source group. |
In standalone mode:
In IRF mode: |
By default, no source CPU is configured for a remote source group. |
Configuring the reflector port for a remote source group
To configure the reflector port for a remote source group, use one of the following methods:
· Configure the reflector port for the remote source group in system view.
· Assign a port to the remote source group as the reflector port in interface view.
When you configure the reflector port for a remote source group, follow these restrictions and guidelines:
· The port to be configured as a reflector port must be a port not in use. Do not connect a network cable to a reflector port.
· When a port is configured as a reflector port, all existing configurations of the port are cleared. You cannot configure other features on the reflector port.
· If an IRF port is bound to only one physical interface, do not configure the physical interface as a reflector port. Otherwise, the IRF might split.
· A mirroring group contains only one reflector port.
· You cannot change the duplex mode or speed for a reflector port.
Configuring the reflector port for a remote source group in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the reflector port for a remote source group. |
mirroring-group group-id reflector-port interface-type interface-number |
By default, no reflector port is configured for a remote source group. |
Configuring the reflector port for a remote source group in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as the reflector port for a remote source group. |
mirroring-group group-id reflector-port |
By default, a port does not act as the reflector port for any remote source groups. |
Configuring the egress port for a remote source group
To configure the egress port for a remote source group, use one of the following methods:
· Configure the egress port for the remote source group in system view.
· Assign a port to the remote source group as the egress port in interface view.
When you configure the egress port for a remote source group, follow these restrictions and guidelines:
· Disable the following features on the egress port:
? Spanning tree.
? IGMP snooping.
? Static ARP.
? MAC address learning.
· A mirroring group contains only one egress port.
· A port of an existing mirroring group cannot be configured as an egress port.
Configuring the egress port for a remote source group in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the egress port for a remote source group. |
mirroring-group group-id monitor-egress interface-type interface-number |
By default, no egress port is configured for a remote source group. |
Configuring the egress port for a remote source group in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as the egress port for a remote source group. |
mirroring-group group-id monitor-egress |
By default, a port does not act as the egress port for any remote source groups. |
Configuring the remote probe VLAN for a remote source group
When you configure the remote probe VLAN for a remote source group, follow these restrictions and guidelines:
· Only an existing static VLAN can be configured as a remote probe VLAN.
· When a VLAN is configured as a remote probe VLAN, use the remote probe VLAN for port mirroring exclusively.
· The remote mirroring groups on the source device and destination device must use the same remote probe VLAN.
To configure the remote probe VLAN for a remote source group:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the remote probe VLAN for a remote source group. |
mirroring-group group-id remote-probe vlan vlan-id |
By default, no remote probe VLAN is configured for a remote source group. |
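The source-device tasks above can be sketched as follows, using the reflector port method. The group ID, VLAN ID, and interface names are examples only; the reflector port must be an unused port, and the remote probe VLAN must match the one configured on the destination device:
# Create VLAN 10 to be used as the remote probe VLAN.
<Sysname> system-view
[Sysname] vlan 10
[Sysname-vlan10] quit
# Create remote source group 2 and configure HundredGigE 1/0/1 as a source port to monitor its bidirectional traffic.
[Sysname] mirroring-group 2 remote-source
[Sysname] mirroring-group 2 mirroring-port hundredgige 1/0/1 both
# Configure HundredGigE 1/0/3 as the reflector port and VLAN 10 as the remote probe VLAN.
[Sysname] mirroring-group 2 reflector-port hundredgige 1/0/3
[Sysname] mirroring-group 2 remote-probe vlan 10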
Configuring Layer 3 remote port mirroring
To configure Layer 3 remote port mirroring, perform the following tasks:
· Create a local mirroring group on both the source device and the destination device.
· Configure the monitor port and source ports or source CPUs for each mirroring group.
The source and destination devices are connected by a tunnel. If intermediate devices exist, configure a unicast routing protocol on the intermediate devices to ensure Layer 3 reachability between the source and destination devices.
On the source device, perform the following tasks:
· Configure source ports or source CPUs you want to monitor.
· Configure the tunnel interface as the monitor port.
On the destination device, perform the following tasks:
· Configure the physical interface corresponding to the tunnel interface as the source port.
· Configure the port that connects to the data monitoring device as the monitor port.
On an IRF fabric, the following rules apply:
· For mirroring to take effect on F series cards of an IRF fabric, the mirroring source ports and the service loopback group member ports must reside on the same IRF member device.
· On H series cards of an IRF fabric, the mirrored packets are rate-limited when the following conditions exist:
? The mirroring source ports and monitor ports are not on the same IRF member device.
? The monitor ports are unreachable or the destination of the original packets is the local IRF fabric.
Layer 3 remote port mirroring configuration task list
Tasks at a glance |
(Required.) Configuring the source device:
1. Configuring local mirroring groups
2. Perform either or both of the following tasks:
? Configuring source ports for a local mirroring group
? Configuring source CPUs for a local mirroring group
3. Configuring the monitor port for a local mirroring group |
(Required.) Configuring the destination device:
1. Configuring local mirroring groups
2. Configuring source ports for a local mirroring group
3. Configuring the monitor port for a local mirroring group |
Configuration prerequisites
Before configuring Layer 3 remote mirroring, complete the following tasks:
· Create a tunnel interface and a GRE tunnel.
· Configure the source and destination addresses of the tunnel interface as the IP addresses of the physical interfaces on the source and destination devices, respectively.
For more information about tunnel interfaces, see Layer 3—IP Services Configuration Guide.
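For example, the following sketch creates a GRE tunnel interface on the source device. The tunnel interface number and IP addresses are assumptions; the tunnel source and destination addresses are the IP addresses of the physical interfaces on the source and destination devices, respectively:
# Create tunnel interface Tunnel 0 in GRE mode and assign it an IP address.
<Sysname> system-view
[Sysname] interface tunnel 0 mode gre
[Sysname-Tunnel0] ip address 50.1.1.1 24
# Specify the tunnel source and destination addresses.
[Sysname-Tunnel0] source 20.1.1.1
[Sysname-Tunnel0] destination 30.1.1.2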
Configuring local mirroring groups
Configure a local mirroring group on both the source device and the destination device.
To create a local mirroring group:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a local mirroring group. |
mirroring-group group-id local [ sampler sampler-name ] |
By default, no local mirroring groups exist. |
Configuring source ports for a local mirroring group
On the source device, configure the ports you want to monitor as the source ports. On the destination device, configure the physical interface corresponding to the tunnel interface as the source port.
To configure source ports for a mirroring group, use one of the following methods:
· Assign a list of source ports to the mirroring group in system view.
· Assign a port to the mirroring group as a source port in interface view.
To assign multiple ports to the mirroring group as source ports in interface view, repeat the operation.
Configuration restrictions and guidelines
When you configure source ports for a local mirroring group, follow these restrictions and guidelines:
· A mirroring group can contain multiple source ports.
· A Layer 2 or Layer 3 aggregate interface cannot be configured as a source port for a local mirroring group.
· A source port cannot be configured as a reflector port, egress port, or monitor port.
· A source port can belong to only one mirroring group.
Configuring source ports in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure source ports for a local mirroring group. |
mirroring-group group-id mirroring-port interface-list { both | inbound | outbound } |
By default, no source port is configured for a local mirroring group. |
Configuring source ports in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as a source port for a local mirroring group. |
mirroring-group group-id mirroring-port { both | inbound | outbound } |
By default, a port does not act as a source port for any local mirroring groups. |
Configuring source CPUs for a local mirroring group
When you configure source CPUs for a local mirroring group, follow these restrictions and guidelines:
· The destination device does not support source CPU configuration.
· A mirroring group can contain multiple source CPUs.
· The device supports mirroring only inbound traffic of a source CPU.
To configure source CPUs for a local mirroring group:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure source CPUs for a local mirroring group. |
In standalone mode: mirroring-group group-id mirroring-cpu slot slot-number inbound In IRF mode: mirroring-group group-id mirroring-cpu chassis chassis-number slot slot-number inbound |
By default, no source CPU is configured for a local mirroring group. |
Configuring the monitor port for a local mirroring group
On the source device, configure the tunnel interface as the monitor port. On the destination device, configure the port that connects to a data monitoring device as the monitor port.
To configure the monitor port for a mirroring group, use one of the following methods:
· Configure the monitor port for the mirroring group in system view.
· Assign a port to a mirroring group as the monitor port in interface view.
Configuration restrictions and guidelines
When you configure the monitor port for a local mirroring group, follow these restrictions and guidelines:
· Do not enable the spanning tree feature on the monitor port.
· On the source device, only one monitor port (tunnel interface) can be configured for a local mirroring group.
· On the destination device, multiple monitor ports can be configured for a local mirroring group.
· Use a monitor port only for port mirroring, so the data monitoring device receives only the mirrored traffic.
Configuring the monitor port in system view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the monitor port for a local mirroring group. |
mirroring-group group-id monitor-port interface-list |
By default, no monitor port is configured for a local mirroring group. |
Configuring the monitor port in interface view
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure the port as the monitor port for a local mirroring group. |
mirroring-group group-id monitor-port |
By default, a port does not act as the monitor port for any local mirroring groups. |
Displaying and maintaining port mirroring
Execute display commands in any view.
Task |
Command |
Display mirroring group information. |
display mirroring-group { group-id | all | local | remote-destination | remote-source } |
Port mirroring configuration examples
Local port mirroring configuration example (in source port mode)
Network requirements
As shown in Figure 66, configure local port mirroring in source port mode to enable the server to monitor the bidirectional traffic of the two departments.
Configuration procedure
# Create local mirroring group 1.
<Device> system-view
[Device] mirroring-group 1 local
# Configure HundredGigE 1/0/1 and HundredGigE 1/0/2 as source ports for local mirroring group 1.
[Device] mirroring-group 1 mirroring-port hundredgige 1/0/1 hundredgige 1/0/2 both
# Configure HundredGigE 1/0/3 as the monitor port for local mirroring group 1.
[Device] mirroring-group 1 monitor-port hundredgige 1/0/3
# Disable the spanning tree feature on the monitor port (HundredGigE 1/0/3).
[Device] interface hundredgige 1/0/3
[Device-HundredGigE1/0/3] undo stp enable
[Device-HundredGigE1/0/3] quit
Verifying the configuration
# Verify the mirroring group configuration.
[Device] display mirroring-group all
Mirroring group 1:
Type: Local
Status: Active
Mirroring port:
HundredGigE1/0/1 Both
HundredGigE1/0/2 Both
Monitor port: HundredGigE1/0/3
Local port mirroring configuration example (in source CPU mode)
Network requirements
As shown in Figure 67, HundredGigE 1/0/1 and HundredGigE 1/0/2 are located on slot 1.
Configure local port mirroring in source CPU mode to enable the server to monitor all packets matching the following criteria:
· Sent by the Marketing Department and the Technical Department.
· Processed by the CPU of the device.
Configuration procedure
# Create local mirroring group 1.
<Device> system-view
[Device] mirroring-group 1 local
# Configure the CPU in slot 1 of the device as a source CPU for local mirroring group 1.
[Device] mirroring-group 1 mirroring-cpu slot 1 inbound
# Configure HundredGigE 1/0/3 as the monitor port for local mirroring group 1.
[Device] mirroring-group 1 monitor-port hundredgige 1/0/3
# Disable the spanning tree feature on the monitor port (HundredGigE 1/0/3).
[Device] interface hundredgige 1/0/3
[Device-HundredGigE1/0/3] undo stp enable
[Device-HundredGigE1/0/3] quit
Verifying the configuration
# Verify the mirroring group configuration.
[Device] display mirroring-group all
Mirroring group 1:
Type: Local
Status: Active
Mirroring CPU:
Slot 1 Inbound
Monitor port: HundredGigE1/0/3
Layer 2 remote port mirroring configuration example (reflector port)
Network requirements
As shown in Figure 68, configure Layer 2 remote port mirroring to enable the server to monitor the outbound traffic of the Marketing Department.
Configuration procedure
1. Configure Device C (the destination device):
# Configure HundredGigE 1/0/1 as a trunk port, and assign the port to VLAN 2.
<DeviceC> system-view
[DeviceC] interface hundredgige 1/0/1
[DeviceC-HundredGigE1/0/1] port link-type trunk
[DeviceC-HundredGigE1/0/1] port trunk permit vlan 2
[DeviceC-HundredGigE1/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 2 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceC] mirroring-group 2 remote-probe vlan 2
# Configure HundredGigE 1/0/2 as the monitor port for the mirroring group.
[DeviceC] interface hundredgige 1/0/2
[DeviceC-HundredGigE1/0/2] mirroring-group 2 monitor-port
# Disable the spanning tree feature on HundredGigE 1/0/2.
[DeviceC-HundredGigE1/0/2] undo stp enable
# Assign HundredGigE 1/0/2 to VLAN 2.
[DeviceC-HundredGigE1/0/2] port access vlan 2
[DeviceC-HundredGigE1/0/2] quit
2. Configure Device B (the intermediate device):
# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
[DeviceB-vlan2] quit
# Configure HundredGigE 1/0/1 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface hundredgige 1/0/1
[DeviceB-HundredGigE1/0/1] port link-type trunk
[DeviceB-HundredGigE1/0/1] port trunk permit vlan 2
[DeviceB-HundredGigE1/0/1] quit
# Configure HundredGigE 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface hundredgige 1/0/2
[DeviceB-HundredGigE1/0/2] port link-type trunk
[DeviceB-HundredGigE1/0/2] port trunk permit vlan 2
[DeviceB-HundredGigE1/0/2] quit
3. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
# Configure HundredGigE 1/0/1 as a source port for the mirroring group.
[DeviceA] mirroring-group 1 mirroring-port hundredgige 1/0/1 inbound
# Configure HundredGigE 1/0/3 as the reflector port for the mirroring group.
[DeviceA] mirroring-group 1 reflector-port hundredgige 1/0/3
This operation may delete all settings made on the interface. Continue? [Y/N]: y
# Configure HundredGigE 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceA] interface hundredgige 1/0/2
[DeviceA-HundredGigE1/0/2] port link-type trunk
[DeviceA-HundredGigE1/0/2] port trunk permit vlan 2
[DeviceA-HundredGigE1/0/2] quit
Verifying the configuration
# Verify the mirroring group configuration on Device C.
[DeviceC] display mirroring-group all
Mirroring group 2:
Type: Remote destination
Status: Active
Monitor port: HundredGigE1/0/2
Remote probe VLAN: 2
# Verify the mirroring group configuration on Device A.
[DeviceA] display mirroring-group all
Mirroring group 1:
Type: Remote source
Status: Active
Mirroring port:
HundredGigE1/0/1 Inbound
Reflector port: HundredGigE1/0/3
Remote probe VLAN: 2
Layer 2 remote port mirroring configuration example (egress port)
Network requirements
On the Layer 2 network shown in Figure 69, configure Layer 2 remote port mirroring to enable the server to monitor the outbound traffic of the Marketing Department.
Configuration procedure
1. Configure Device C (the destination device):
# Configure HundredGigE 1/0/1 as a trunk port, and assign the port to VLAN 2.
<DeviceC> system-view
[DeviceC] interface hundredgige 1/0/1
[DeviceC-HundredGigE1/0/1] port link-type trunk
[DeviceC-HundredGigE1/0/1] port trunk permit vlan 2
[DeviceC-HundredGigE1/0/1] quit
# Create a remote destination group.
[DeviceC] mirroring-group 2 remote-destination
# Create VLAN 2.
[DeviceC] vlan 2
[DeviceC-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN for the mirroring group.
[DeviceC] mirroring-group 2 remote-probe vlan 2
# Configure HundredGigE 1/0/2 as the monitor port for the mirroring group.
[DeviceC] interface hundredgige 1/0/2
[DeviceC-HundredGigE1/0/2] mirroring-group 2 monitor-port
# Disable the spanning tree feature on HundredGigE 1/0/2.
[DeviceC-HundredGigE1/0/2] undo stp enable
# Assign HundredGigE 1/0/2 to VLAN 2 as an access port.
[DeviceC-HundredGigE1/0/2] port access vlan 2
[DeviceC-HundredGigE1/0/2] quit
2. Configure Device B (the intermediate device):
# Create VLAN 2.
<DeviceB> system-view
[DeviceB] vlan 2
[DeviceB-vlan2] quit
# Configure HundredGigE 1/0/1 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface hundredgige 1/0/1
[DeviceB-HundredGigE1/0/1] port link-type trunk
[DeviceB-HundredGigE1/0/1] port trunk permit vlan 2
[DeviceB-HundredGigE1/0/1] quit
# Configure HundredGigE 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceB] interface hundredgige 1/0/2
[DeviceB-HundredGigE1/0/2] port link-type trunk
[DeviceB-HundredGigE1/0/2] port trunk permit vlan 2
[DeviceB-HundredGigE1/0/2] quit
3. Configure Device A (the source device):
# Create a remote source group.
<DeviceA> system-view
[DeviceA] mirroring-group 1 remote-source
# Create VLAN 2.
[DeviceA] vlan 2
[DeviceA-vlan2] quit
# Configure VLAN 2 as the remote probe VLAN of the mirroring group.
[DeviceA] mirroring-group 1 remote-probe vlan 2
# Configure HundredGigE 1/0/1 as a source port for the mirroring group.
[DeviceA] mirroring-group 1 mirroring-port hundredgige 1/0/1 inbound
# Configure HundredGigE 1/0/2 as the egress port for the mirroring group.
[DeviceA] mirroring-group 1 monitor-egress hundredgige 1/0/2
# Configure HundredGigE 1/0/2 as a trunk port, and assign the port to VLAN 2.
[DeviceA] interface hundredgige 1/0/2
[DeviceA-HundredGigE1/0/2] port link-type trunk
[DeviceA-HundredGigE1/0/2] port trunk permit vlan 2
# Disable the spanning tree feature on the port.
[DeviceA-HundredGigE1/0/2] undo stp enable
[DeviceA-HundredGigE1/0/2] quit
Verifying the configuration
# Verify the mirroring group configuration on Device C.
[DeviceC] display mirroring-group all
Mirroring group 2:
Type: Remote destination
Status: Active
Monitor port: HundredGigE1/0/2
Remote probe VLAN: 2
# Verify the mirroring group configuration on Device A.
[DeviceA] display mirroring-group all
Mirroring group 1:
Type: Remote source
Status: Active
Mirroring port:
HundredGigE1/0/1 Inbound
Monitor egress port: HundredGigE1/0/2
Remote probe VLAN: 2
Layer 3 remote port mirroring configuration example
Network requirements
On the Layer 3 network shown in Figure 70, configure Layer 3 remote port mirroring to enable the server to monitor the bidirectional traffic of the Marketing Department.
Configuration procedure
1. Configure IP addresses for the tunnel interfaces and related ports on the devices. (Details not shown.)
2. Configure Device A (the source device):
# Create service loopback group 1 and specify the unicast tunnel service for the group.
<DeviceA> system-view
[DeviceA] service-loopback group 1 type tunnel
# Assign HundredGigE 1/0/3 to service loopback group 1.
[DeviceA] interface hundredgige 1/0/3
[DeviceA-HundredGigE1/0/3] port service-loopback group 1
All configurations on the interface will be lost. Continue?[Y/N]:y
[DeviceA-HundredGigE1/0/3] quit
# Create tunnel interface Tunnel 0 that operates in GRE mode, and configure an IP address and subnet mask for the interface.
[DeviceA] interface tunnel 0 mode gre
[DeviceA-Tunnel0] ip address 50.1.1.1 24
# Configure source and destination IP addresses for Tunnel 0.
[DeviceA-Tunnel0] source 20.1.1.1
[DeviceA-Tunnel0] destination 30.1.1.2
[DeviceA-Tunnel0] quit
# Enable the OSPF protocol.
[DeviceA] ospf 1
[DeviceA-ospf-1] area 0
[DeviceA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[DeviceA-ospf-1-area-0.0.0.0] network 20.1.1.0 0.0.0.255
[DeviceA-ospf-1-area-0.0.0.0] quit
[DeviceA-ospf-1] quit
# Create local mirroring group 1.
[DeviceA] mirroring-group 1 local
# Configure HundredGigE 1/0/1 as a source port and Tunnel 0 as the monitor port of local mirroring group 1.
[DeviceA] mirroring-group 1 mirroring-port hundredgige 1/0/1 both
[DeviceA] mirroring-group 1 monitor-port tunnel 0
3. Enable the OSPF protocol on Device B (the intermediate device).
<DeviceB> system-view
[DeviceB] ospf 1
[DeviceB-ospf-1] area 0
[DeviceB-ospf-1-area-0.0.0.0] network 20.1.1.0 0.0.0.255
[DeviceB-ospf-1-area-0.0.0.0] network 30.1.1.0 0.0.0.255
[DeviceB-ospf-1-area-0.0.0.0] quit
[DeviceB-ospf-1] quit
4. Configure Device C (the destination device):
# Create service loopback group 1 and specify the unicast tunnel service for the group.
<DeviceC> system-view
[DeviceC] service-loopback group 1 type tunnel
# Assign HundredGigE 1/0/3 to service loopback group 1.
[DeviceC] interface hundredgige 1/0/3
[DeviceC-HundredGigE1/0/3] port service-loopback group 1
All configurations on the interface will be lost. Continue?[Y/N]:y
[DeviceC-HundredGigE1/0/3] quit
# Create tunnel interface Tunnel 0 that operates in GRE mode, and configure an IP address and subnet mask for the interface.
[DeviceC] interface tunnel 0 mode gre
[DeviceC-Tunnel0] ip address 50.1.1.2 24
# Configure source and destination IP addresses for Tunnel 0.
[DeviceC-Tunnel0] source 30.1.1.2
[DeviceC-Tunnel0] destination 20.1.1.1
[DeviceC-Tunnel0] quit
# Enable the OSPF protocol.
[DeviceC] ospf 1
[DeviceC-ospf-1] area 0
[DeviceC-ospf-1-area-0.0.0.0] network 30.1.1.0 0.0.0.255
[DeviceC-ospf-1-area-0.0.0.0] network 40.1.1.0 0.0.0.255
[DeviceC-ospf-1-area-0.0.0.0] quit
[DeviceC-ospf-1] quit
# Create local mirroring group 1.
[DeviceC] mirroring-group 1 local
# Configure HundredGigE 1/0/1 as a source port for local mirroring group 1.
[DeviceC] mirroring-group 1 mirroring-port hundredgige 1/0/1 inbound
# Configure HundredGigE 1/0/2 as the monitor port for local mirroring group 1.
[DeviceC] mirroring-group 1 monitor-port hundredgige 1/0/2
Verifying the configuration
# Verify the mirroring group configuration on Device A.
[DeviceA] display mirroring-group all
Mirroring group 1:
Type: Local
Status: Active
Mirroring port:
HundredGigE1/0/1 Both
Monitor port: Tunnel0
# Verify the mirroring group configuration on Device C.
[DeviceC] display mirroring-group all
Mirroring group 1:
Type: Local
Status: Active
Mirroring port:
HundredGigE1/0/1 Inbound
Monitor port: HundredGigE1/0/2
Configuring flow mirroring
Flow mirroring copies packets matching a class to a destination for packet analysis and monitoring. It is implemented through QoS policies.
To configure flow mirroring, perform the following tasks:
· Define traffic classes and configure match criteria to classify packets to be mirrored. Flow mirroring allows you to flexibly classify packets to be analyzed by defining match criteria.
· Configure traffic behaviors to mirror the matching packets to the specified destination.
You can configure an action to mirror the matching packets to one of the following destinations:
· Interface—The matching packets are copied to an interface and then forwarded to a data monitoring device for analysis.
· CPU—The matching packets are copied to the CPU of the card where they are received. The CPU analyzes the packets or delivers them to upper layers.
For more information about QoS policies, traffic classes, and traffic behaviors, see ACL and QoS Configuration Guide.
On an IRF fabric, broadcast or unknown unicast packets cannot be mirrored when the mirroring sources and the mirroring destination are not on the same IRF member device.
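Conceptually, the tasks above amount to classifying packets against match criteria and copying the matches to a destination while forwarding the originals unchanged. The following Python sketch (illustrative only, not device code; all names are invented) shows that idea in miniature:

```python
# Sketch of flow mirroring: mirroring copies matching packets, it never
# drops or alters the forwarded traffic. Names here are hypothetical.
def flow_mirror(packets, match, destination):
    """Return (forwarded, mirrored): all packets are forwarded; copies of
    packets matching the class criteria are sent to the destination."""
    mirrored = [p for p in packets if match(p)]
    return packets, [(destination, p) for p in mirrored]

pkts = [{"dst_port": 80}, {"dst_port": 22}]
# "Class" = packets to TCP port 80; "behavior" = mirror to HGE1/0/3.
forwarded, mirrored = flow_mirror(pkts, lambda p: p["dst_port"] == 80, "HGE1/0/3")
```

Note that the original traffic is untouched: only copies reach the monitor destination.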
Flow mirroring configuration task list
Tasks at a glance |
(Required.) Configuring match criteria |
(Required.) Configuring a traffic behavior |
(Required.) Configuring a QoS policy |
(Required.) Applying a QoS policy: · Applying a QoS policy to an interface · Applying a QoS policy to a VLAN |
For more information about the commands in this section (except the mirror-to command), see ACL and QoS Command Reference.
Configuring match criteria
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a class and enter class view. |
traffic classifier classifier-name [ operator { and | or } ] |
By default, no traffic classes exist. |
3. Configure match criteria. |
if-match match-criteria |
By default, no match criterion is configured in a traffic class. |
Configuring a traffic behavior
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a traffic behavior and enter traffic behavior view. |
traffic behavior behavior-name |
By default, no traffic behaviors exist. |
3. Configure a mirroring action for the traffic behavior. |
· Mirror traffic to an interface: mirror-to interface interface-type interface-number · Mirror traffic to the CPU: mirror-to cpu |
By default, no mirroring actions exist in a traffic behavior. You can mirror traffic to only one Ethernet interface. The mirroring source and destination must reside on the same card. |
4. (Optional.) Display traffic behavior configuration. |
display traffic behavior |
Available in any view. |
Configuring a QoS policy
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a QoS policy and enter QoS policy view. |
qos policy policy-name |
By default, no QoS policies exist. |
3. Associate a class with a traffic behavior in the QoS policy. |
classifier classifier-name behavior behavior-name |
By default, no traffic behavior is associated with a class. |
4. (Optional.) Display QoS policy configuration. |
display qos policy |
Available in any view. |
Applying a QoS policy
Applying a QoS policy to an interface
By applying a QoS policy to an interface, you can mirror the traffic in the specified direction of the interface. A policy can be applied to multiple interfaces. In one direction (inbound or outbound) of an interface, only one policy can be applied.
To apply a QoS policy to the outbound traffic of an interface, make sure the traffic mirroring action does not coexist with any other action in a traffic behavior. If non-traffic-mirroring actions are configured, all actions configured in the traffic behavior become invalid.
Flow mirroring does not support mirroring the outbound traffic of aggregate interfaces on the device.
To apply a QoS policy to an interface:
Step |
Command |
1. Enter system view. |
system-view |
2. Enter interface view. |
interface interface-type interface-number |
3. Apply a policy to the interface. |
qos apply policy policy-name { inbound | outbound } |
Applying a QoS policy to a VLAN
You can apply a QoS policy to a VLAN to mirror the traffic in the specified direction on all ports in the VLAN.
To apply the QoS policy to a VLAN:
Step |
Command |
1. Enter system view. |
system-view |
2. Apply a QoS policy to a VLAN. |
qos vlan-policy policy-name vlan vlan-id-list { inbound | outbound } |
Applying a QoS policy globally
You can apply a QoS policy globally to mirror the traffic in the specified direction on all ports.
To apply a QoS policy globally:
Step |
Command |
1. Enter system view. |
system-view |
2. Apply a QoS policy globally. |
qos apply policy policy-name global { inbound | outbound } |
Applying a QoS policy to the control plane
You can apply a QoS policy to the control plane to mirror the traffic in the specified direction of all ports on the control plane.
To apply a QoS policy to the control plane:
Step |
Command |
1. Enter system view. |
system-view |
2. Enter control plane view. |
In standalone mode: control-plane slot slot-number In IRF mode: control-plane chassis chassis-number slot slot-number |
3. Apply a QoS policy to the control plane. |
qos apply policy policy-name inbound |
Flow mirroring configuration example
Network requirements
As shown in Figure 71, configure flow mirroring so that the server can monitor the following traffic:
· All traffic that the Technical Department sends to access the Internet.
· IP traffic that the Technical Department sends to the Marketing Department during working hours (8:00 to 18:00) on weekdays.
Configuration procedure
# Create working hour range work, in which working hours are from 8:00 to 18:00 on weekdays.
<DeviceA> system-view
[DeviceA] time-range work 8:00 to 18:00 working-day
# Create IPv4 advanced ACL 3000 to allow packets from the Technical Department to access the Internet and the Marketing Department during working hours.
[DeviceA] acl advanced 3000
[DeviceA-acl-ipv4-adv-3000] rule permit tcp source 192.168.2.0 0.0.0.255 destination-port eq www
[DeviceA-acl-ipv4-adv-3000] rule permit ip source 192.168.2.0 0.0.0.255 destination 192.168.1.0 0.0.0.255 time-range work
[DeviceA-acl-ipv4-adv-3000] quit
# Create traffic class tech_c, and configure the match criterion as ACL 3000.
[DeviceA] traffic classifier tech_c
[DeviceA-classifier-tech_c] if-match acl 3000
[DeviceA-classifier-tech_c] quit
# Create traffic behavior tech_b, and configure the action of mirroring traffic to HundredGigE 1/0/3.
[DeviceA] traffic behavior tech_b
[DeviceA-behavior-tech_b] mirror-to interface hundredgige 1/0/3
[DeviceA-behavior-tech_b] quit
# Create QoS policy tech_p, and associate traffic class tech_c with traffic behavior tech_b in the QoS policy.
[DeviceA] qos policy tech_p
[DeviceA-qospolicy-tech_p] classifier tech_c behavior tech_b
[DeviceA-qospolicy-tech_p] quit
# Apply QoS policy tech_p to the incoming packets of HundredGigE 1/0/4.
[DeviceA] interface hundredgige 1/0/4
[DeviceA-HundredGigE1/0/4] qos apply policy tech_p inbound
[DeviceA-HundredGigE1/0/4] quit
Verifying the configuration
# Verify that the server can monitor the following traffic:
· All traffic sent by the Technical Department to access the Internet.
· IP traffic that the Technical Department sends to the Marketing Department during working hours on weekdays.
(Details not shown.)
Configuring NetStream
Overview
NetStream is an accounting technology that provides statistics on a per-flow basis. An IPv4 flow is defined by the following 7-tuple elements:
· Destination IP address.
· Source IP address.
· Destination port number.
· Source port number.
· Protocol number.
· ToS.
· Inbound or outbound interface.
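The 7-tuple can be pictured as a dictionary key: packets sharing all seven values belong to the same flow and accumulate into one statistics entry. A minimal Python sketch (illustrative only; field names are invented for illustration):

```python
# Sketch of NetStream flow classification by the 7-tuple. Packets with
# identical 7-tuple values are counted under a single flow entry.
from collections import Counter, namedtuple

FlowKey = namedtuple(
    "FlowKey",
    ["dst_ip", "src_ip", "dst_port", "src_port", "protocol", "tos", "interface"],
)

def flow_stats(packets):
    """Count packets per flow; each packet is a dict of the 7-tuple fields."""
    stats = Counter()
    for pkt in packets:
        stats[FlowKey(**pkt)] += 1
    return stats

packets = [
    {"dst_ip": "10.0.0.1", "src_ip": "10.0.0.2", "dst_port": 80,
     "src_port": 40000, "protocol": 6, "tos": 0, "interface": "HGE1/0/1"},
    {"dst_ip": "10.0.0.1", "src_ip": "10.0.0.2", "dst_port": 80,
     "src_port": 40000, "protocol": 6, "tos": 0, "interface": "HGE1/0/1"},
    {"dst_ip": "10.0.0.1", "src_ip": "10.0.0.3", "dst_port": 80,
     "src_port": 40000, "protocol": 6, "tos": 0, "interface": "HGE1/0/1"},
]
stats = flow_stats(packets)
# The first two packets share every field, so they fall into one flow;
# the third differs in source IP and forms a second flow.
```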
NetStream architecture
A typical NetStream system includes the following elements:
· NetStream data exporter (NDE)—A device configured with NetStream. The NDE provides the following functions:
? Classifies traffic flows by using the 7-tuple elements.
? Collects data from the classified flows.
? Aggregates and exports the data to the NSC.
· NetStream collector (NSC)—A program running in an operating system. The NSC parses the packets received from the NDEs, and saves the data to its database.
· NetStream data analyzer (NDA)—A network traffic analysis tool. Based on the data in the NSC, the NDA generates reports for traffic billing, network planning, and attack detection and monitoring. The NDA can collect data from multiple NSCs. Typically, the NDA provides a Web-based interface for easy operation.
NSC and NDA are typically integrated into a NetStream server.
H3C network devices act as NDEs in the NetStream system. This document focuses on NDE configuration.
Figure 72 NetStream system
Flow aging
NetStream uses flow aging to enable the NDE to export NetStream data to NetStream servers. NetStream creates an entry in the cache for each flow to store the flow statistics.
When a flow is aged out, the NDE performs the following operations:
· Exports the summarized data to NetStream servers in a NetStream export format.
· Clears NetStream entry information in the cache.
For more information about flow aging types and configurations, see "Configuring NetStream flow aging."
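The export-on-aging behavior described above can be sketched in a few lines of Python (illustrative only, not device code; a simple inactive-timeout is assumed here as the aging condition):

```python
# Sketch of flow aging: when a cache entry ages out, its summarized
# statistics are exported and the entry is cleared from the cache.
def age_flows(cache, now, timeout):
    """cache maps flow key -> {'packets': n, 'last_seen': t}.
    Returns the records exported to the NetStream server."""
    exported = []
    for key in list(cache):
        if now - cache[key]["last_seen"] >= timeout:
            exported.append((key, cache[key]["packets"]))  # export summary
            del cache[key]                                 # clear cache entry
    return exported

cache = {
    "flow-a": {"packets": 10, "last_seen": 100},
    "flow-b": {"packets": 3, "last_seen": 170},
}
# At time 200 with a 60-unit timeout, only flow-a has aged out.
exported = age_flows(cache, now=200, timeout=60)
```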
NetStream data export
Traditional data export
Traditional NetStream collects the statistics of each flow and exports the statistics to NetStream servers.
This method consumes more bandwidth and CPU than the aggregation method, and it requires a large cache size.
Aggregation data export
NetStream aggregation merges the flow statistics according to the aggregation criteria of an aggregation mode, and it sends the summarized data to NetStream servers. The NetStream aggregation data export uses less bandwidth than the traditional data export.
Table 16 lists the available aggregation modes. In each mode, the system merges statistics for multiple flows into statistics for one aggregate flow if the flows have identical values for every aggregation criterion. The system records the statistics for the aggregate flow. These aggregation modes work independently and can take effect concurrently.
For example, when the aggregation mode configured on the NDE is protocol-port, NetStream aggregates the statistics of flow entries by protocol number, source port, and destination port. Four NetStream entries record four TCP flows with the same destination address, source port, and destination port, but with different source addresses. In the aggregation mode, only one NetStream aggregation entry is created and sent to NetStream servers.
Table 16 NetStream aggregation modes
Aggregation mode |
Aggregation criteria |
Protocol-port aggregation |
· Protocol number · Source port · Destination port |
Source-prefix aggregation |
· Source AS number · Source address mask length · Source prefix (source network address) · Inbound interface index |
Destination-prefix aggregation |
· Destination AS number · Destination address mask length · Destination prefix (destination network address) · Outbound interface index |
Prefix aggregation |
· Source AS number · Destination AS number · Source address mask length · Destination address mask length · Source prefix · Destination prefix · Inbound interface index · Outbound interface index |
Prefix-port aggregation |
· Source prefix · Destination prefix · Source address mask length · Destination address mask length · ToS · Protocol number · Source port · Destination port · Inbound interface index · Outbound interface index |
ToS-source-prefix aggregation |
· ToS · Source AS number · Source prefix · Source address mask length · Inbound interface index |
ToS-destination-prefix aggregation |
· ToS · Destination AS number · Destination address mask length · Destination prefix · Outbound interface index |
ToS-prefix aggregation |
· ToS · Source AS number · Source prefix · Source address mask length · Destination AS number · Destination address mask length · Destination prefix · Inbound interface index · Outbound interface index |
ToS-protocol-port aggregation |
· ToS · Protocol type · Source port · Destination port · Inbound interface index · Outbound interface index |
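The protocol-port example above (four TCP flows that differ only in source address collapsing into one aggregate entry) can be reproduced with a short Python sketch (illustrative only, not device code):

```python
# Sketch of protocol-port aggregation: flow entries sharing protocol
# number, source port, and destination port merge into one aggregate.
from collections import Counter

def protocol_port_aggregate(entries):
    """entries: (src_ip, dst_ip, protocol, src_port, dst_port, packets)."""
    agg = Counter()
    for src_ip, dst_ip, proto, sport, dport, packets in entries:
        agg[(proto, sport, dport)] += packets  # src/dst IPs are discarded
    return agg

# Four TCP flows with the same ports but different source addresses:
entries = [("10.0.0.%d" % i, "20.0.0.1", 6, 40000, 80, 5) for i in range(1, 5)]
agg = protocol_port_aggregate(entries)
# One aggregate entry (totaling all packets) is exported instead of four.
```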
NetStream export formats
NetStream exports data in UDP datagrams in one of the following formats:
· Version 5—Exports original statistics collected based on the 7-tuple elements and does not support the NetStream aggregation data export. The packet format is fixed and cannot be extended.
· Version 8—Supports the NetStream aggregation data export. The packet format is fixed and cannot be extended.
· Version 9—Template-based. Templates can be defined according to the template formats in RFCs, so the export format is extensible. Version 9 supports exporting the NetStream aggregation data and collecting statistics about BGP next hops and MPLS packets.
· Version 10—Similar to version 9, except that the version 10 export format is compliant with the IPFIX standard.
NetStream filtering
NetStream filtering uses an ACL to identify packets. Whether NetStream collects data for identified packets depends on the action in the matching rule.
· NetStream collects data for packets that match permit rules in the ACL.
· NetStream does not collect data for packets that match deny rules in the ACL.
For more information about ACL, see ACL and QoS Configuration Guide.
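The permit/deny decision can be sketched as a first-match rule lookup (illustrative Python only, not device code; the no-match behavior shown here is an assumption):

```python
# Sketch of NetStream filtering: collect statistics only for packets
# matching a permit rule; skip packets matching a deny rule. Packets
# matching no rule are assumed not to be collected in this sketch.
def netstream_collects(acl_rules, packet_src):
    """acl_rules: ordered (action, source_prefix) pairs; first match wins."""
    for action, prefix in acl_rules:
        if packet_src.startswith(prefix):
            return action == "permit"
    return False

# Deny one subnet, permit the rest of 192.168.0.0/16:
acl = [("deny", "192.168.2."), ("permit", "192.168.")]
```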
NetStream sampling
NetStream sampling collects statistics on fewer packets, which is useful when the network carries a large amount of traffic. Running NetStream on sampled traffic lessens the impact on the device's performance. For more information about sampling, see "Configuring samplers."
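With a random-mode sampler at rate n, one packet is selected at random from each block of 2^n packets. A Python sketch of that selection (illustrative only, not device code):

```python
# Sketch of random-mode sampling: pick one random packet out of every
# block of 2**rate packets, so a rate of 8 samples 1 in 256 on average.
import random

def sample_indices(total_packets, rate, rng=random):
    """Return the indices of the sampled packets."""
    block = 2 ** rate
    picks = []
    for start in range(0, total_packets, block):
        end = min(start + block, total_packets)
        picks.append(rng.randrange(start, end))  # one pick per block
    return picks

# 1024 packets at rate 8 (block size 256) yield exactly 4 samples.
picks = sample_indices(total_packets=1024, rate=8)
```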
Protocols and standards
RFC 5101, Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information
Feature and hardware compatibility
NetStream is supported on the following interface modules:
· LSXM1TGS48C2HB1.
· LSXM1QGS48HB1.
· LSXM1CGQ18QGHB1.
· LSXM1CGQ36HB1.
NetStream configuration task list
Tasks at a glance |
(Required.) Enabling NetStream |
(Optional.) Configuring NetStream filtering |
(Optional.) Configuring NetStream sampling |
(Optional.) Configuring attributes of the NetStream data export |
(Optional.) Configuring NetStream flow aging |
(Required.) Perform at least one of the following tasks to configure NetStream data export: |
Enabling NetStream
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Enable NetStream on the interface. |
ip netstream { inbound | outbound } |
By default, NetStream is disabled on an interface. |
Configuring NetStream filtering
When you configure NetStream filtering, follow these restrictions and guidelines:
· When NetStream filtering and sampling are both configured, packets are filtered first, and then the permitted packets are sampled.
· The NetStream filtering feature does not take effect on MPLS packets.
To configure NetStream filtering:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Enable NetStream filtering on the interface. |
ip netstream { inbound | outbound } filter acl ipv4-acl-number |
By default, NetStream filtering is disabled. NetStream collects statistics of all IPv4 packets passing through the interface. |
Configuring NetStream sampling
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a sampler. |
sampler sampler-name mode random packet-interval n-power rate |
For more information about a sampler, see "Configuring samplers." |
3. Enter interface view. |
interface interface-type interface-number |
N/A |
4. Enable NetStream sampling. |
ip netstream { inbound | outbound } sampler sampler-name |
By default, NetStream sampling is disabled. |
Configuring attributes of the NetStream data export
Configuring the NetStream data export format
You can configure NetStream data export version, and expand the export data to include the following additional information:
· Statistics about source AS, destination AS, and peer ASs.
· Statistics about BGP next hop (available only in version 9 and version 10 formats).
A NetStream entry records the source IP address and destination IP address, and two AS numbers for each IP address. The origin-as and peer-as keywords in the ip netstream export version command specify the AS numbers to be exported.
· origin-as—Specifies the source AS of the source address and the destination AS of the destination address.
· peer-as—Specifies the ASs before and after the AS where the NetStream device resides as the source AS and the destination AS, respectively.
For example, as shown in Figure 73, a flow starts at AS 20, passes AS 21 through AS 23, and then reaches AS 24. NetStream is enabled on the device in AS 22.
· The origin-as keyword defines AS 20 as the source AS and AS 24 as the destination AS.
· The peer-as keyword defines AS 21 as the source AS and AS 23 as the destination AS.
Figure 73 Recorded AS information varies by different keyword configurations
To configure the NetStream data export format:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the NetStream data export format, and specify whether to record AS and BGP next hop information. |
· Configure the version 5 format:
· Configure the version 9 format:
· Configure the version 10 format: |
By default: · The NetStream data export uses the version 9 format. · The peer AS numbers are exported for the source and destination. · The BGP next hop information is not exported. |
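For example, to export data in the version 9 format with origin AS numbers and BGP next hop information recorded, the configuration might look like the following sketch. The exact keyword combination depends on your software release; verify it against the command reference before use.

```
<Sysname> system-view
[Sysname] ip netstream export version 9 origin-as bgp-nexthop
```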
Configuring the refresh rate for NetStream version 9 or version 10 template
Version 9 and version 10 are template-based and support user-defined formats. A NetStream device must send the template to NetStream servers regularly, because the servers do not permanently save the templates.
For a NetStream server to use the correct version 9 or version 10 template, configure the time-based or packet count-based refresh rate. If both settings are configured, the template is sent when either condition is met.
To configure the refresh rate for NetStream version 9 or version 10 template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the refresh rate for NetStream version 9 or version 10 template. |
ip netstream export template refresh-rate { packet packets | time minutes } |
By default, the packet count-based refresh rate is 20 packets, and the time-based refresh interval is 30 minutes. |
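For example, to resend the template every 10 NetStream packets or every 10 minutes, whichever condition is met first (both values are arbitrary examples):

```
<Sysname> system-view
[Sysname] ip netstream export template refresh-rate packet 10
[Sysname] ip netstream export template refresh-rate time 10
```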
Configuring MPLS-aware NetStream
An MPLS flow is identified by the same labels in the same position and the same 7-tuple elements. MPLS-aware NetStream collects statistics on a maximum of three labels in the label stack, with or without IP fields.
To configure MPLS-aware NetStream:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Collect statistics on MPLS packets. |
ip netstream mpls [ label-positions label-position1 [ label-position2 [ label-position3 ] ] ] [ no-ip-fields ] |
By default, statistics about MPLS packets are not collected. |
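As a sketch with assumed label positions, the following configuration collects statistics on the first and second labels in the label stack and omits the IP fields:

```
<Sysname> system-view
[Sysname] ip netstream mpls label-positions 1 2 no-ip-fields
```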
Configuring VXLAN-aware NetStream
A VXLAN flow is identified by the same destination UDP port number. VXLAN-aware NetStream collects statistics on the VNI information in VXLAN packets.
To configure VXLAN-aware NetStream:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Collect statistics on VXLAN packets. |
ip netstream vxlan udp-port port-number |
By default, statistics about VXLAN packets are not collected. |
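For example, the following sketch uses the IANA-assigned VXLAN port 4789; specify whatever destination UDP port your VXLAN deployment actually uses:

```
<Sysname> system-view
[Sysname] ip netstream vxlan udp-port 4789
```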
Configuring NetStream flow aging
Flow aging methods
Periodical aging
Periodical aging uses the following methods:
· Inactive flow aging—A flow is inactive if no packet arrives for this NetStream entry within the period specified by using the ip netstream timeout inactive command. When the inactive flow aging timer expires, the following events occur:
? The inactive flow entry is aged out.
? The statistics of the flow are sent to NetStream servers and are cleared in the cache. The statistics can no longer be displayed by using the display ip netstream cache command.
The inactive flow aging method ensures that the cache has sufficient space for new flow entries.
· Active flow aging—A flow is active if packets arrive for the NetStream entry within the period specified by using the ip netstream timeout active command. When the active flow aging timer expires, the statistics of the active flow are exported to NetStream servers. The device continues to collect active flow statistics, which can be displayed by using the display ip netstream cache command. The active flow aging method periodically exports the statistics of active flows to NetStream servers.
Forced aging
To implement forced aging, use one of the following commands:
· Use the reset ip netstream statistics command. This command ages out all NetStream entries, and exports and clears the statistics.
· Use the ip netstream max-entry command to set the maximum number of NetStream entries that can be cached.
Configuration procedure
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure periodical aging. |
· Set the aging timer for active flows:
· Set the aging timer for inactive flows: |
By default: · The aging timer for active flows is 5 minutes. · The aging timer for inactive flows is 300 seconds. |
3. Configure forced aging. |
· Set the entry upper limit:
· Manually age out NetStream entries:
a. Return to user view:
b. Age out NetStream entries: |
N/A |
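The aging settings above can be sketched as follows. The timer values and entry limit are arbitrary examples (the active timer is in minutes and the inactive timer in seconds, matching the defaults listed above); verify the exact max-entry syntax against the command reference.

```
<Sysname> system-view
[Sysname] ip netstream timeout active 10
[Sysname] ip netstream timeout inactive 600
[Sysname] ip netstream max-entry 2000
[Sysname] quit
<Sysname> reset ip netstream statistics
```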
Configuring the NetStream data export
Configuring the NetStream traditional data export
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Specify a destination host for NetStream traditional data export. |
ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ] |
By default, no destination host is specified. No NetStream traditional data is exported. |
3. (Optional.) Specify the source interface for NetStream data packets sent to NetStream servers. |
ip netstream export source interface interface-type interface-number |
By default, no source interface is specified for NetStream data packets. The packets take the IP address of the output interface as the source IP address. |
4. (Optional.) Limit the data export rate. |
ip netstream export rate rate |
By default, the data export rate is not limited. |
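Putting the steps together, a minimal sketch might look like the following; the server address, LoopBack interface, and rate value are placeholders, not values from this document:

```
<Sysname> system-view
[Sysname] ip netstream export host 192.168.1.100 5000
[Sysname] ip netstream export source interface loopback 0
[Sysname] ip netstream export rate 100
```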
Configuring the NetStream aggregation data export
NetStream aggregation is implemented by software. It merges the flow statistics according to the aggregation mode criteria, and stores the data in the cache.
Configuration restrictions and guidelines
When you configure the NetStream aggregation data export, follow these restrictions and guidelines:
· Configurations in NetStream aggregation mode view apply only to the NetStream aggregation data export, and those in system view apply to the NetStream traditional data export. If configurations in NetStream aggregation mode view are not provided, the configurations in system view apply to the NetStream aggregation data export.
· If the version 5 format is configured to export NetStream data, NetStream aggregation data export uses the version 8 format.
Configuration procedure
To configure the NetStream aggregation data export:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Specify a NetStream aggregation mode and enter its view. |
ip netstream aggregation { destination-prefix | prefix | prefix-port | protocol-port | source-prefix | tos-destination-prefix | tos-prefix | tos-protocol-port | tos-source-prefix } |
By default, no NetStream aggregation mode is specified. |
3. Specify a destination host for NetStream aggregation data export. |
ip netstream export host ip-address udp-port [ vpn-instance vpn-instance-name ] |
By default, no destination host is specified. If you expect only NetStream aggregation data, specify the destination host only in the related NetStream aggregation mode view. |
4. (Optional.) Specify the source interface for NetStream data packets sent to NetStream servers. |
ip netstream export source interface interface-type interface-number |
By default, no source interface is specified for NetStream data packets. The packets take the IP address of the output interface as the source IP address. Source interfaces in different NetStream aggregation mode views can be different. If no source interface is configured in NetStream aggregation mode view, the source interface configured in system view applies. |
5. Enable the NetStream aggregation mode. |
enable |
By default, NetStream aggregation is disabled. |
Displaying and maintaining NetStream
Execute display commands in any view and reset commands in user view.
Task |
Command |
(In standalone mode.) Display NetStream entry information. |
display ip netstream cache [ verbose ] [ type { ip | ipl2 | l2 | mpls [ label-position1 label-value1 [ label-position2 label-value2 [ label-position3 label-value3 ] ] ] } ] [ destination destination-ip | interface interface-type interface-number | source source-ip ] * [ slot slot-number ] |
(In IRF mode.) Display NetStream entry information. |
display ip netstream cache [ verbose ] [ type { ip | ipl2 | l2 | mpls [ label-position1 label-value1 [ label-position2 label-value2 [ label-position3 label-value3 ] ] ] } ] [ destination destination-ip | interface interface-type interface-number | source source-ip ] * [ chassis chassis-number slot slot-number ] |
Display information about the NetStream data export. |
display ip netstream export |
(In standalone mode.) Display NetStream template information. |
display ip netstream template [ slot slot-number ] |
(In IRF mode.) Display NetStream template information. |
display ip netstream template [ chassis chassis-number slot slot-number ] |
Age out and export all NetStream data, and clear the cache. |
reset ip netstream statistics |
NetStream configuration examples
NetStream traditional data export configuration example
Network requirements
As shown in Figure 74, configure NetStream on Switch A to collect statistics on packets passing through Switch A.
· Enable NetStream for incoming and outgoing traffic on HundredGigE 1/0/1.
· Configure the switch to export NetStream traditional data to UDP port 5000 of the NetStream server.
Configuration procedure
# Assign an IP address to each interface, as shown in Figure 74. (Details not shown.)
# Enable NetStream for incoming and outgoing traffic on HundredGigE 1/0/1.
<SwitchA> system-view
[SwitchA] interface hundredgige 1/0/1
[SwitchA-HundredGigE1/0/1] ip netstream inbound
[SwitchA-HundredGigE1/0/1] ip netstream outbound
[SwitchA-HundredGigE1/0/1] quit
# Specify 12.110.2.2 as the IP address of the destination host and UDP port 5000 as the export destination port number.
[SwitchA] ip netstream export host 12.110.2.2 5000
Verifying the configuration
# Display NetStream entry information.
[SwitchA] display ip netstream cache
IP NetStream cache information:
Active flow timeout : 5 min
Inactive flow timeout : 300 sec
Max number of entries : 1024
IP active flow entries : 2
MPLS active flow entries : 0
L2 active flow entries : 0
IPL2 active flow entries : 0
IP flow entries counted : 0
MPLS flow entries counted : 0
L2 flow entries counted : 0
IPL2 flow entries counted : 0
Last statistics resetting time : Never
IP packet size distribution (11 packets in total):
1-32 64 96 128 160 192 224 256 288 320 352 384 416 448 480
.000 .000 .909 .000 .000 .090 .000 .000 .000 .000 .000 .000 .000 .000 .000
512 544 576 1024 1536 2048 2560 3072 3584 4096 4608 >4608
.000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
Protocol Total Packets Flows Packets Active(sec) Idle(sec)
Flows /sec /sec /flow /flow /flow
---------------------------------------------------------------------------
Type DstIP(Port) SrcIP(Port) Pro ToS If(Direct) Pkts
DstMAC(VLAN) SrcMAC(VLAN)
TopLblType(IP/MASK) Lbl-Exp-S-List
---------------------------------------------------------------------------
IP 10.1.1.1 (21) 100.1.1.2 (1024) 1 0 HGE1/0/1(I) 5
IP 100.1.1.2 (1024) 10.1.1.1 (21) 1 0 HGE1/0/1(O) 5
# Display information about the NetStream data export.
[SwitchA] display ip netstream export
IP export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 12.110.2.2 (5000)
Version 5 exported flows number : 0
Version 5 exported UDP datagram number (failed) : 0 (0)
Version 9 exported flows number : 10
Version 9 exported UDP datagram number (failed) : 10 (0)
NetStream aggregation data export configuration example
Network requirements
As shown in Figure 75, all devices in the network are running EBGP. Configure NetStream on Switch A to meet the following requirements:
· Use version 5 format to export NetStream traditional data to port 5000 of the NetStream server at 4.1.1.1/16.
· Perform NetStream aggregation in the modes of protocol-port, source-prefix, destination-prefix, and prefix.
· Export the aggregation data of different modes to 4.1.1.1, with UDP ports 3000, 4000, 6000, and 7000.
Configuration procedure
# Assign an IP address to each interface, as shown in Figure 75. (Details not shown.)
# Specify version 5 format to export NetStream traditional data.
<SwitchA> system-view
[SwitchA] ip netstream export version 5
# Enable NetStream for incoming and outgoing traffic on HundredGigE 1/0/1.
[SwitchA] interface hundredgige 1/0/1
[SwitchA-HundredGigE1/0/1] ip netstream inbound
[SwitchA-HundredGigE1/0/1] ip netstream outbound
[SwitchA-HundredGigE1/0/1] quit
# Specify 4.1.1.1 as the IP address of the destination host and UDP port 5000 as the export destination port number.
[SwitchA] ip netstream export host 4.1.1.1 5000
# Set the aggregation mode to protocol-port, and specify the destination host for the aggregation data export.
[SwitchA] ip netstream aggregation protocol-port
[SwitchA-ns-aggregation-protport] enable
[SwitchA-ns-aggregation-protport] ip netstream export host 4.1.1.1 3000
[SwitchA-ns-aggregation-protport] quit
# Set the aggregation mode to source-prefix, and specify the destination host for the aggregation data export.
[SwitchA] ip netstream aggregation source-prefix
[SwitchA-ns-aggregation-srcpre] enable
[SwitchA-ns-aggregation-srcpre] ip netstream export host 4.1.1.1 4000
[SwitchA-ns-aggregation-srcpre] quit
# Set the aggregation mode to destination-prefix, and specify the destination host for the aggregation data export.
[SwitchA] ip netstream aggregation destination-prefix
[SwitchA-ns-aggregation-dstpre] enable
[SwitchA-ns-aggregation-dstpre] ip netstream export host 4.1.1.1 6000
[SwitchA-ns-aggregation-dstpre] quit
# Set the aggregation mode to prefix, and specify the destination host for the aggregation data export.
[SwitchA] ip netstream aggregation prefix
[SwitchA-ns-aggregation-prefix] enable
[SwitchA-ns-aggregation-prefix] ip netstream export host 4.1.1.1 7000
[SwitchA-ns-aggregation-prefix] quit
Verifying the configuration
# Display information about the NetStream data export.
[SwitchA] display ip netstream export
protocol-port aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 4.1.1.1 (3000)
Version 8 exported flows number : 2
Version 8 exported UDP datagram number (failed) : 2 (0)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
source-prefix aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 4.1.1.1 (4000)
Version 8 exported flows number : 2
Version 8 exported UDP datagram number (failed) : 2 (0)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
destination-prefix aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 4.1.1.1 (6000)
Version 8 exported flows number : 2
Version 8 exported UDP datagram number (failed) : 2 (0)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
prefix aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 4.1.1.1 (7000)
Version 8 exported flows number : 2
Version 8 exported UDP datagram number (failed) : 2 (0)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
IP export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 4.1.1.1 (5000)
Version 5 exported flows number : 10
Version 5 exported UDP datagram number (failed) : 10 (0)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
Configuring IPv6 NetStream
Overview
IPv6 NetStream is an accounting technology that provides statistics on a per-flow basis. An IPv6 flow is defined by the following 8-tuple elements:
· Destination IPv6 address.
· Source IPv6 address.
· Destination port number.
· Source port number.
· Protocol number.
· Traffic class.
· Flow label.
· Input or output interface.
IPv6 NetStream architecture
A typical IPv6 NetStream system includes the following elements:
· IPv6 NetStream data exporter (NDE)—A device configured with IPv6 NetStream. The NDE provides the following functions:
? Classifies traffic flows by using the 8-tuple elements.
? Collects data from the classified flows.
? Aggregates and exports the data to the NSC.
· IPv6 NetStream collector (NSC)—A program running in a Unix or Windows operating system. The NSC parses the packets received from the NDEs, and saves the data to its database.
· IPv6 NetStream data analyzer (NDA)—A network traffic analyzing tool. Based on the data in the NSC, the NDA generates reports for traffic billing, network planning, and attack detection and monitoring. The NDA can collect data from multiple NSCs. Typically, the NDA features a Web-based system for easy operation.
NSC and NDA are typically integrated into an IPv6 NetStream server.
H3C network devices act as NDEs in the IPv6 NetStream system. This document focuses on NDE configuration.
Figure 76 IPv6 NetStream system
Flow aging
IPv6 NetStream uses flow aging to enable the NDE to export IPv6 NetStream data to IPv6 NetStream servers. IPv6 NetStream creates an IPv6 NetStream entry for each flow for storing the flow statistics in the cache.
When a flow is aged out, the NDE does the following operations:
· Exports the summarized data to IPv6 NetStream servers in an IPv6 NetStream data export format.
· Clears IPv6 NetStream entry information in the cache.
For more information about flow aging types and configurations, see "Configuring IPv6 NetStream flow aging."
IPv6 NetStream data export
Traditional data export
IPv6 NetStream collects the statistics of each flow and exports the statistics to IPv6 NetStream servers.
This method consumes a large amount of bandwidth and CPU resources, and requires a large cache. In addition, most applications do not need all of the data.
Aggregation data export
An IPv6 NetStream aggregation mode merges the flow statistics according to the aggregation criteria of the aggregation mode, and it sends the summarized data to IPv6 NetStream servers. The IPv6 NetStream aggregation data export uses less bandwidth than the traditional data export.
Table 17 lists the available IPv6 NetStream aggregation modes. In each mode, the system merges multiple flows with the same values for all aggregation criteria into one aggregate flow. The system records the statistics for the aggregate flow. These aggregation modes work independently and can take effect concurrently.
Table 17 IPv6 NetStream aggregation modes
Aggregation mode |
Aggregation criteria |
Protocol-port aggregation |
· Protocol number · Source port · Destination port |
Source-prefix aggregation |
· Source AS number · Source mask · Source prefix (source network address) · Input interface index |
Destination-prefix aggregation |
· Destination AS number · Destination mask · Destination prefix (destination network address) · Output interface index |
Source-prefix and destination-prefix aggregation |
· Source AS number · Source mask · Source prefix (source network address) · Input interface index · Destination AS number · Destination mask · Destination prefix (destination network address) · Output interface index |
IPv6 NetStream data export format
IPv6 NetStream exports data in the version 9 or version 10 format.
Both formats are template-based and support exporting the IPv6 NetStream aggregation data and collecting statistics about BGP next hop and MPLS packets.
The version 10 export format is compliant with the IPFIX standard.
IPv6 NetStream filtering
IPv6 NetStream filtering uses an ACL to identify packets. Whether IPv6 NetStream collects data for identified packets depends on the action in the matching rule.
· IPv6 NetStream collects data for packets that match permit rules in the ACL.
· IPv6 NetStream does not collect data for packets that match deny rules in the ACL.
For more information about ACLs, see ACL and QoS Configuration Guide.
IPv6 NetStream sampling
IPv6 NetStream sampling collects statistics on fewer packets and is useful when the network carries a large amount of traffic. Running IPv6 NetStream on sampled traffic lessens the impact on device performance. For more information about sampling, see "Configuring samplers."
Protocols and standards
RFC 5101, Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information
Feature and hardware compatibility
IPv6 NetStream is supported on the following interface modules:
· LSXM1TGS48C2HB1.
· LSXM1QGS48HB1.
· LSXM1CGQ18QGHB1.
· LSXM1CGQ36HB1.
IPv6 NetStream configuration task list
Tasks at a glance |
(Required.) Enabling IPv6 NetStream |
(Optional.) Configuring IPv6 NetStream filtering |
(Optional.) Configuring IPv6 NetStream sampling |
(Optional.) Configuring attributes of the IPv6 NetStream data export |
(Optional.) Configuring IPv6 NetStream flow aging |
(Required.) Perform at least one of the following tasks to configure the IPv6 NetStream data export: |
Enabling IPv6 NetStream
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Enable IPv6 NetStream on the interface. |
ipv6 netstream { inbound | outbound } |
By default, IPv6 NetStream is disabled on an interface. |
Configuring IPv6 NetStream filtering
When you configure IPv6 NetStream filtering, follow these restrictions and guidelines:
· The IPv6 NetStream filtering feature does not take effect on MPLS packets.
· If IPv6 NetStream filtering and sampling are both configured, IPv6 packets are filtered first, and then the permitted packets are sampled.
To configure IPv6 NetStream filtering:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Configure IPv6 NetStream filtering on the interface. |
ipv6 netstream { inbound | outbound } filter acl ipv6-acl-number |
By default, IPv6 NetStream filtering is disabled. IPv6 NetStream collects statistics of all IPv6 packets passing through the interface. |
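A hypothetical IPv6 filtering configuration follows. The IPv6 ACL number, rule, and interface are placeholders, and the IPv6 ACL creation syntax may vary by software release; see ACL and QoS Configuration Guide.

```
<Sysname> system-view
[Sysname] acl ipv6 number 2000
[Sysname-acl6-basic-2000] rule permit source 2001:db8:1::/64
[Sysname-acl6-basic-2000] quit
[Sysname] interface hundredgige 1/0/2
[Sysname-HundredGigE1/0/2] ipv6 netstream inbound filter acl 2000
```

With this configuration, IPv6 NetStream collects statistics only for incoming IPv6 packets permitted by the ACL.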
Configuring IPv6 NetStream sampling
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Create a sampler. |
sampler sampler-name mode random packet-interval n-power rate |
For more information about samplers, see "Configuring samplers." |
3. Enter interface view. |
interface interface-type interface-number |
N/A |
4. Configure IPv6 NetStream sampling. |
ipv6 netstream { inbound | outbound } sampler sampler-name |
By default, IPv6 NetStream sampling is disabled. |
Configuring attributes of the IPv6 NetStream data export
Configuring the IPv6 NetStream data export format
An IPv6 NetStream entry for a flow records the source IPv6 address, destination IPv6 address, and their respective AS numbers. The origin-as and peer-as keywords in the ipv6 netstream export version command specify the AS numbers to be exported.
· origin-as—Specifies the source AS of the source address and the destination AS of the destination address.
· peer-as—Specifies the ASs before and after the AS where the NetStream device resides as the source AS and the destination AS, respectively.
For example, as shown in Figure 77, a flow starts at AS 20, passes AS 21 through AS 23, and then reaches AS 24. IPv6 NetStream is enabled on the device in AS 22.
· The origin-as keyword defines AS 20 as the source AS and AS 24 as the destination AS.
· The peer-as keyword defines AS 21 as the source AS and AS 23 as the destination AS.
Figure 77 Recorded AS information varies by different keyword configurations
To configure the IPv6 NetStream data export format:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the IPv6 NetStream data export format, and specify whether to record AS and BGP next hop information. |
· Configure the version 9 format:
· Configure the version 10 format: |
By default: · The version 9 format is used to export IPv6 NetStream data. · The peer AS numbers are recorded. · The BGP next hop information is not recorded. |
Configuring the refresh rate for IPv6 NetStream version 9 or version 10 template
Version 9 and version 10 are template-based and support user-defined formats. An IPv6 NetStream device must send the updated templates to IPv6 NetStream servers regularly, because the servers do not permanently save the templates.
For an IPv6 NetStream server to use the correct version 9 or version 10 template, configure the time-based or packet count-based refresh rate. If both settings are configured, the template is sent when either condition is met.
To configure the refresh rate for IPv6 NetStream version 9 or version 10 template:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the refresh rate for IPv6 NetStream version 9 or version 10 template. |
ipv6 netstream export template refresh-rate { packet packets | time minutes } |
By default, the packet count-based refresh rate is 20 packets, and the time-based refresh interval is 30 minutes. |
Configuring MPLS-aware IPv6 NetStream
An MPLS flow is identified by the same labels in the same position and the same 8-tuple elements. MPLS-aware IPv6 NetStream collects and exports statistics on a maximum of three labels in the label stack, with or without IP fields.
To configure MPLS-aware IPv6 NetStream:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Collect and export statistics on MPLS packets. |
ip netstream mpls [ label-positions label-position1 [ label-position2 [ label-position3 ] ] ] [ no-ip-fields ] |
By default, statistics of MPLS packets are not collected or exported. For more information about the ip netstream mpls command, see Network Management and Monitoring Command Reference. |
Configuring IPv6 NetStream flow aging
Flow aging methods
Periodical aging
Periodical aging has the following methods:
· Inactive flow aging—A flow is inactive if no packet arrives for the IPv6 NetStream entry within the period specified by using the ipv6 netstream timeout inactive command. When the inactive flow aging timer expires, the following events occur:
? The inactive flow entry is aged out.
? The statistics of the flow are sent to IPv6 NetStream servers and are cleared in the cache. The statistics can no longer be displayed by using the display ipv6 netstream cache command.
The inactive flow aging method ensures that the cache has sufficient space for new flow entries.
· Active flow aging—A flow is active if packets arrive for the IPv6 NetStream entry within the period specified by using the ipv6 netstream timeout active command. When the active flow aging timer expires, the statistics of the active flow are exported to IPv6 NetStream servers. The device continues to collect its statistics, which can be displayed by using the display ipv6 netstream cache command. The active flow aging method periodically exports the statistics of active flows to IPv6 NetStream servers.
Forced aging
To implement forced aging, use one of the following commands:
· Use the reset ipv6 netstream statistics command. This command ages out all IPv6 NetStream entries, and exports and clears the statistics.
· Use the ipv6 netstream max-entry command to set the maximum number of IPv6 NetStream entries that can be cached.
Configuration procedure
To configure IPv6 NetStream flow aging:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure periodical aging. |
· Set the active flow aging timer:
· Set the inactive flow aging timer: |
By default: · The active flow aging timer is 5 minutes. · The inactive flow aging timer is 300 seconds. |
3. Configure forced aging. |
· Set the entry upper limit:
· Manually age out IPv6 NetStream entries:
a. Return to user view:
b. Age out IPv6 NetStream entries: |
N/A |
Configuring the IPv6 NetStream data export
Configuring the IPv6 NetStream traditional data export
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Specify a destination host for IPv6 NetStream traditional data export. |
ipv6 netstream export host { ipv4-address | ipv6-address } udp-port [ vpn-instance vpn-instance-name ] |
By default, no destination host is specified. |
3. (Optional.) Specify the source interface for IPv6 NetStream data packets sent to the IPv6 NetStream servers. |
ipv6 netstream export source interface interface-type interface-number |
By default, no source interface is specified for IPv6 NetStream data packets. The packets take the IPv6 address of the output interface as the source IPv6 address. As a best practice, connect the management Ethernet interface to an IPv6 NetStream server, and configure the interface as the source interface. |
4. (Optional.) Limit the IPv6 NetStream data export rate. |
ipv6 netstream export rate rate |
By default, the data export rate is not limited. |
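A minimal sketch of the steps above, with placeholder values (the server address, UDP port, and source interface are assumptions; as the remark above notes, a management Ethernet interface connected to the server is the best-practice source interface):

```
<Sysname> system-view
[Sysname] ipv6 netstream export host 2001:db8::100 5000
[Sysname] ipv6 netstream export source interface loopback 0
```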
Configuring the IPv6 NetStream aggregation data export
IPv6 NetStream aggregation is implemented by software. It merges the flow statistics according to the aggregation mode criteria, and stores the data in the cache.
Configurations in IPv6 NetStream aggregation mode view apply only to the IPv6 NetStream aggregation data export. Configurations in system view apply to the IPv6 NetStream traditional data export. When no configuration in IPv6 NetStream aggregation mode view is provided, the configurations in system view apply to the IPv6 NetStream aggregation data export.
Configuration procedure
To configure the IPv6 NetStream aggregation data export:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Specify an IPv6 NetStream aggregation mode and enter its view. |
ipv6 netstream aggregation { destination-prefix | prefix | protocol-port | source-prefix } |
By default, no IPv6 NetStream aggregation mode is specified. |
3. Specify a destination host for IPv6 NetStream aggregation data export. |
ipv6 netstream export host { ipv4-address | ipv6-address } udp-port [ vpn-instance vpn-instance-name ] |
By default, no destination host is specified. If you expect only IPv6 NetStream aggregation data, specify the destination host only in the related IPv6 NetStream aggregation mode view. |
4. (Optional.) Specify the source interface for IPv6 NetStream data packets sent to IPv6 NetStream servers. |
ipv6 netstream export source interface interface-type interface-number |
By default, no source interface is specified for IPv6 NetStream data packets. The packets take the IPv6 address of the output interface as the source IPv6 address. You can configure different source interfaces in different IPv6 NetStream aggregation mode views. If no source interface is configured in IPv6 NetStream aggregation mode view, the source interface configured in system view applies. |
5. Enable the IPv6 NetStream aggregation mode. |
enable |
By default, IPv6 NetStream aggregation is disabled. |
Displaying and maintaining IPv6 NetStream
Execute display commands in any view and reset commands in user view.
Task |
Command |
(In standalone mode.) Display IPv6 NetStream entry information. |
display ipv6 netstream cache [ verbose ] [ type { ip | ipl2 | l2 | mpls [ label-position1 label-value1 [ label-position2 label-value2 [ label-position3 label-value3 ] ] ] } ] [ destination destination-ipv6 | interface interface-type interface-number | source source-ipv6 ] * [ slot slot-number ] |
(In IRF mode.) Display IPv6 NetStream entry information. |
display ipv6 netstream cache [ verbose ] [ type { ip | ipl2 | l2 | mpls [ label-position1 label-value1 [ label-position2 label-value2 [ label-position3 label-value3 ] ] ] } ] [ destination destination-ipv6 | interface interface-type interface-number | source source-ipv6 ] * [ chassis chassis-number slot slot-number ] |
Display information about the IPv6 NetStream data export. |
display ipv6 netstream export |
(In standalone mode.) Display IPv6 NetStream template information. |
display ipv6 netstream template [ slot slot-number ] |
(In IRF mode.) Display IPv6 NetStream template information. |
display ipv6 netstream template [ chassis chassis-number slot slot-number ] |
Age out and export all IPv6 NetStream data, and clear the cache. |
reset ipv6 netstream statistics |
IPv6 NetStream configuration examples
IPv6 NetStream traditional data export configuration example
Network requirements
As shown in Figure 78, configure IPv6 NetStream on Switch A to collect statistics on packets passing through Switch A.
· Enable IPv6 NetStream for incoming and outgoing traffic on HundredGigE 1/0/1.
· Configure Switch A to export the IPv6 NetStream traditional data to UDP port 5000 of the IPv6 NetStream server.
Configuration procedure
# Assign an IP address to each interface, as shown in Figure 78. (Details not shown.)
# Enable IPv6 NetStream for incoming and outgoing traffic on HundredGigE 1/0/1.
<SwitchA> system-view
[SwitchA] interface hundredgige 1/0/1
[SwitchA-HundredGigE1/0/1] ipv6 netstream inbound
[SwitchA-HundredGigE1/0/1] ipv6 netstream outbound
[SwitchA-HundredGigE1/0/1] quit
# Specify 40::1 as the IP address of the destination host and UDP port 5000 as the export destination port number.
[SwitchA] ipv6 netstream export host 40::1 5000
Verifying the configuration
# Display information about IPv6 NetStream entries.
<SwitchA> display ipv6 netstream cache
IPv6 NetStream cache information:
Active flow timeout : 5 min
Inactive flow timeout : 300 sec
Max number of entries : 1000
IPv6 active flow entries : 2
MPLS active flow entries : 0
IPL2 active flow entries : 0
IPv6 flow entries counted : 10
MPLS flow entries counted : 0
IPL2 flow entries counted : 0
Last statistics resetting time : 01/01/2000 at 00:01:02
IPv6 packet size distribution (1103746 packets in total):
1-32 64 96 128 160 192 224 256 288 320 352 384 416 448 480
.249 .694 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
512 544 576 1024 1536 2048 2560 3072 3584 4096 4608 >4608
.000 .000 .027 .000 .027 .000 .000 .000 .000 .000 .000 .000
Protocol Total Packets Flows Packets Active(sec) Idle(sec)
Flows /sec /sec /flow /flow /flow
--------------------------------------------------------------------------
TCP-Telnet 2656855 372 4 86 49 27
TCP-FTP 5900082 86 9 9 11 33
TCP-FTPD 3200453 1006 5 193 45 33
TCP-WWW 546778274 11170 887 12 8 32
TCP-other 49148540 3752 79 47 30 32
UDP-DNS 117240379 570 190 3 7 34
UDP-other 45502422 2272 73 30 8 37
ICMP 14837957 125 24 5 12 34
IP-other 77406 5 0 47 52 27
Type DstIP(Port) SrcIP(Port) Pro TC FlowLbl If(Direct) Pkts
DstMAC(VLAN) SrcMAC(VLAN)
TopLblType(IP/MASK)Lbl-Exp-S-List
--------------------------------------------------------------------------
IP 2001::1(1024) 2002::1(21) 6 0 0x0 HGE1/0/1(I) 42996
IP 2002::1(21) 2001::1(1024) 6 0 0x0 HGE1/0/1(O) 42996
# Display information about the IPv6 NetStream data export.
[SwitchA] display ipv6 netstream export
IPv6 export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 40::1 (5000)
Version 9 exported flows number : 10
Version 9 exported UDP datagram number (failed) : 10 (0)
IPv6 NetStream aggregation data export configuration example
Network requirements
As shown in Figure 79, all devices in the network are running IPv6 EBGP. Configure IPv6 NetStream on Switch A to meet the following requirements:
· Export the IPv6 NetStream traditional data to port 5000 of the IPv6 NetStream server.
· Perform the IPv6 NetStream aggregation in the modes of protocol-port, source-prefix, destination-prefix, and prefix.
· Export the aggregation data of different modes to the UDP ports 3000, 4000, 6000, and 7000.
Configuration procedure
# Assign an IP address to each interface, as shown in Figure 79. (Details not shown.)
# Enable IPv6 NetStream for incoming and outgoing traffic on HundredGigE 1/0/1.
<SwitchA> system-view
[SwitchA] interface hundredgige 1/0/1
[SwitchA-HundredGigE1/0/1] ipv6 netstream inbound
[SwitchA-HundredGigE1/0/1] ipv6 netstream outbound
[SwitchA-HundredGigE1/0/1] quit
# Specify 40::1 as the IP address of the destination host and UDP port 5000 as the export destination port number.
[SwitchA] ipv6 netstream export host 40::1 5000
# Set the aggregation mode to protocol-port, and specify the destination host for the aggregation data export.
[SwitchA] ipv6 netstream aggregation protocol-port
[SwitchA-ns6-aggregation-protport] enable
[SwitchA-ns6-aggregation-protport] ipv6 netstream export host 40::1 3000
[SwitchA-ns6-aggregation-protport] quit
# Set the aggregation mode to source-prefix, and specify the destination host for the aggregation data export.
[SwitchA] ipv6 netstream aggregation source-prefix
[SwitchA-ns6-aggregation-srcpre] enable
[SwitchA-ns6-aggregation-srcpre] ipv6 netstream export host 40::1 4000
[SwitchA-ns6-aggregation-srcpre] quit
# Set the aggregation mode to destination-prefix, and specify the destination host for the aggregation data export.
[SwitchA] ipv6 netstream aggregation destination-prefix
[SwitchA-ns6-aggregation-dstpre] enable
[SwitchA-ns6-aggregation-dstpre] ipv6 netstream export host 40::1 6000
[SwitchA-ns6-aggregation-dstpre] quit
# Set the aggregation mode to prefix, and specify the destination host for the aggregation data export.
[SwitchA] ipv6 netstream aggregation prefix
[SwitchA-ns6-aggregation-prefix] enable
[SwitchA-ns6-aggregation-prefix] ipv6 netstream export host 40::1 7000
[SwitchA-ns6-aggregation-prefix] quit
Verifying the configuration
# Display information about the IPv6 NetStream data export.
[SwitchA] display ipv6 netstream export
protocol-port aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 40::1 (3000)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
source-prefix aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 40::1 (4000)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
destination-prefix aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 40::1 (6000)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
prefix aggregation export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 40::1 (7000)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
IPv6 export information:
Flow source interface : Not specified
Flow destination VPN instance : Not specified
Flow destination IP address (UDP) : 40::1 (5000)
Version 9 exported flows number : 0
Version 9 exported UDP datagram number (failed) : 0 (0)
Configuring sFlow
sFlow is a traffic monitoring technology.
As shown in Figure 80, the sFlow system involves an sFlow agent embedded in a device and a remote sFlow collector. The sFlow agent collects interface counter information and packet information and encapsulates the sampled information in sFlow packets. When the sFlow packet buffer is full, or the aging timer (fixed to 1 second) expires, the sFlow agent performs the following actions:
· Encapsulates the sFlow packets in the UDP datagrams.
· Sends the UDP datagrams to the specified sFlow collector.
The sFlow collector analyzes the information and displays the results. One sFlow collector can monitor multiple sFlow agents.
sFlow provides the following sampling mechanisms:
· Flow sampling—Obtains packet information.
· Counter sampling—Obtains interface counter information.
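The two mechanisms can be modeled as follows. This is an illustrative sketch only, not device code; the function names and data shapes are assumptions made for the example.

```python
import random

def flow_sample(packets, rate):
    """Flow sampling: randomly keep, on average, 1 out of every `rate` packets."""
    return [p for p in packets if random.randrange(rate) == 0]

def counter_sample(interface_counters):
    """Counter sampling: take a point-in-time snapshot of interface counters."""
    return dict(interface_counters)
```

With a rate of 1, every packet is kept; with a production rate such as 32768, roughly one packet in 32768 is copied into an sFlow datagram, which keeps the monitoring overhead low on high-speed links.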
Figure 80 sFlow system
Protocols and standards
· RFC 3176, InMon Corporation's sFlow: A Method for Monitoring Traffic in Switched and Routed Networks
· sFlow.org, sFlow Version 5
sFlow configuration task list
Tasks at a glance |
(Required.) Configuring the sFlow agent and sFlow collector information |
Perform one or both of the following tasks: · Configuring flow sampling · Configuring counter sampling |
Configuring the sFlow agent and sFlow collector information
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Configure an IP address for the sFlow agent. |
sflow agent { ip ipv4-address | ipv6 ipv6-address } |
By default, no IP address is configured for the sFlow agent. The device periodically checks whether the sFlow agent has an IP address. If the sFlow agent does not have an IP address, the device automatically selects an IPv4 address for the sFlow agent but does not save the IPv4 address in the configuration file. NOTE: · As a best practice, manually configure an IP address for the sFlow agent. · Only one IP address can be configured for the sFlow agent on the device, and a newly configured IP address overwrites the existing one. |
3. Configure the sFlow collector information. |
sflow collector collector-id [ vpn-instance vpn-instance-name ] { ip ipv4-address | ipv6 ipv6-address } [ port port-number | datagram-size size | time-out seconds | description string ] * |
By default, no sFlow collector information is configured. |
4. (Optional.) Specify the source IP address of sFlow packets. |
sflow source { ip ipv4-address | ipv6 ipv6-address } * |
By default, the source IP address is determined by routing. |
Configuring flow sampling
Perform this task to configure flow sampling on an Ethernet interface. The sFlow agent performs the following tasks:
1. Samples packets on that interface according to the configured parameters.
2. Encapsulates the packets into sFlow packets.
3. Encapsulates the sFlow packets in the UDP packets and sends the UDP packets to the specified sFlow collector.
To configure flow sampling:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter Layer 2 Ethernet interface view or Layer 3 Ethernet interface view. |
interface interface-type interface-number |
N/A |
3. (Optional.) Set the flow sampling mode. |
sflow sampling-mode random |
N/A |
4. Enable flow sampling and specify the number of packets out of which flow sampling samples a packet on the interface. |
sflow sampling-rate rate |
By default, flow sampling is disabled. As a best practice, set the sampling interval to a power of 2 (2^n) that is greater than or equal to 8192, for example, 32768. |
5. (Optional.) Set the maximum number of bytes (starting from the packet header) that flow sampling can copy per packet. |
sflow flow max-header length |
The default setting is 128 bytes. As a best practice, use the default setting. |
6. Specify the sFlow collector for flow sampling. |
sflow flow collector collector-id |
By default, no sFlow collector is specified for flow sampling. |
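The sampling-rate best practice in step 4 (a power of 2 no smaller than 8192) can be derived mechanically. The helper below is a sketch for illustration, not part of the device CLI:

```python
def recommended_rate(desired: int) -> int:
    """Round `desired` up to the next power of two, with a floor of 8192."""
    rate = 8192
    while rate < desired:
        rate *= 2
    return rate
```

For example, a desired interval of 20000 rounds up to 32768, the value used in the configuration example later in this chapter.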
Configuring counter sampling
Perform this task to configure counter sampling on an Ethernet interface. The sFlow agent performs the following tasks:
1. Periodically collects the counter information on that interface.
2. Encapsulates the counter information into sFlow packets.
3. Encapsulates the sFlow packets in the UDP packets and sends the UDP packets to the specified sFlow collector.
To configure counter sampling:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter Layer 2 Ethernet interface view or Layer 3 Ethernet interface view. |
interface interface-type interface-number |
N/A |
3. Enable counter sampling and set the counter sampling interval. |
sflow counter interval interval |
By default, counter sampling is disabled. |
4. Specify the sFlow collector for counter sampling. |
sflow counter collector collector-id |
By default, no sFlow collector is specified for counter sampling. |
Displaying and maintaining sFlow
Execute display commands in any view.
Task |
Command |
Display sFlow configuration. |
display sflow |
sFlow configuration example
Network requirements
As shown in Figure 81, perform the following tasks:
· Configure flow sampling in random mode and counter sampling on HundredGigE 1/0/1 of the device to monitor traffic on the port.
· Configure the device to send sampled information in sFlow packets through HundredGigE 1/0/3 to the sFlow collector.
Configuration procedure
1. Configure the IP addresses and subnet masks for interfaces, as shown in Figure 81. (Details not shown.)
2. Configure the sFlow agent and configure information about the sFlow collector:
# Configure the IP address for the sFlow agent.
<Device> system-view
[Device] sflow agent ip 3.3.3.1
# Configure information about the sFlow collector. Specify the sFlow collector ID as 1, IP address as 3.3.3.2, port number as 6343 (default), and description as netserver.
[Device] sflow collector 1 ip 3.3.3.2 description netserver
3. Configure counter sampling:
# Enable counter sampling and set the counter sampling interval to 120 seconds on HundredGigE 1/0/1.
[Device] interface hundredgige 1/0/1
[Device-HundredGigE1/0/1] sflow counter interval 120
# Specify sFlow collector 1 for counter sampling.
[Device-HundredGigE1/0/1] sflow counter collector 1
4. Configure flow sampling:
# Enable flow sampling and set the flow sampling mode to random and sampling interval to 32768.
[Device-HundredGigE1/0/1] sflow sampling-mode random
[Device-HundredGigE1/0/1] sflow sampling-rate 32768
# Specify sFlow collector 1 for flow sampling.
[Device-HundredGigE1/0/1] sflow flow collector 1
Verifying the configuration
# Verify the following items:
· HundredGigE 1/0/1 enabled with sFlow is active.
· The counter sampling interval is 120 seconds.
· The flow sampling interval is 32768 (one packet is sampled from every 32768 packets).
[Device-HundredGigE1/0/1] display sflow
sFlow datagram version: 5
Global information:
Agent IP: 3.3.3.1(CLI)
Source address:
Collector information:
ID IP Port Aging Size VPN-instance Description
1 3.3.3.2 6343 N/A 1400 netserver
Port information:
Interface CID Interval(s) FID MaxHLen Rate Mode Status
HGE1/0/1 1 120 1 128 32768 Random Active
Troubleshooting sFlow configuration
The remote sFlow collector cannot receive sFlow packets
Symptom
The remote sFlow collector cannot receive sFlow packets.
Analysis
The possible reasons include:
· The sFlow collector is not specified.
· sFlow is not configured on the interface.
· The IP address of the sFlow collector specified on the sFlow agent is different from that of the remote sFlow collector.
· No IP address is configured for the Layer 3 interface that sends sFlow packets.
· An IP address is configured for the Layer 3 interface that sends sFlow packets. However, the UDP datagrams with this source IP address cannot reach the sFlow collector.
· The physical link between the device and the sFlow collector fails.
· The sFlow collector is bound to a non-existent VPN.
· The length of an sFlow packet is less than the sum of the following two values:
? The length of the sFlow packet header.
? The number of bytes that flow sampling can copy per packet.
Solution
To resolve the problem:
1. Use the display sflow command to verify that sFlow is correctly configured.
2. Verify that a correct IP address is configured for the device to communicate with the sFlow collector.
3. Verify that the physical link between the device and the sFlow collector is up.
4. Verify that the VPN bound to the sFlow collector already exists.
5. Verify that the length of an sFlow packet is greater than the sum of the following two values:
? The length of the sFlow packet header.
? The number of bytes (as a best practice, use the default setting) that flow sampling can copy per packet.
Configuring the information center
The information center on a device classifies and manages logs for all modules so that network administrators can monitor network performance and troubleshoot network problems.
Overview
The information center receives logs generated by source modules and outputs logs to different destinations according to user-defined output rules. You can classify, filter, and output logs based on source modules. To view the supported source modules, use info-center source ?.
Figure 82 Information center diagram
By default, the information center is enabled. It affects system performance to some degree while processing large amounts of information. If the system resources are insufficient, disable the information center to save resources.
Log types
Logs are classified into the following types:
· Common logs—Record common system information. Unless otherwise specified, the term "logs" in this document refers to common logs.
· Diagnostic logs—Record debug messages.
· Security logs—Record security information, such as authentication and authorization information.
· Hidden logs—Record log information not displayed on the terminal, such as input commands.
· Trace logs—Record system tracing and debug messages, which can be viewed only after the devkit package is installed.
Log levels
Logs are classified into eight severity levels from 0 through 7, where a smaller value indicates a higher severity. The information center outputs logs with a severity at or above the specified level, that is, logs whose severity value is less than or equal to the specified value. For example, if you specify a severity level of 6 (informational), logs with a severity value from 0 to 6 are output.
Severity value |
Level |
Description |
0 |
Emergency |
The system is unusable. For example, the system authorization has expired. |
1 |
Alert |
Action must be taken immediately. For example, traffic on an interface exceeds the upper limit. |
2 |
Critical |
Critical condition. For example, the device temperature exceeds the upper limit, the power module fails, or the fan tray fails. |
3 |
Error |
Error condition. For example, the link state changes. |
4 |
Warning |
Warning condition. For example, an interface is disconnected, or the memory resources are used up. |
5 |
Notification |
Normal but significant condition. For example, a terminal logs in to the device, or the device reboots. |
6 |
Informational |
Informational message. For example, a command or a ping operation is executed. |
7 |
Debugging |
Debug message. |
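Because a smaller severity value means a more severe event, a destination's level threshold admits every log whose value is less than or equal to it. A minimal sketch of that filter (illustrative only, not device code):

```python
def admitted(log_level: int, threshold: int) -> bool:
    """Return True if a log of severity log_level passes a destination
    configured with the given threshold (e.g., 6 for informational)."""
    return log_level <= threshold
```

With a threshold of 6 (informational), a notification (5) is output and a debug message (7) is dropped.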
Log destinations
The system outputs logs to the following destinations: console, monitor terminal, log buffer, log host, and log file. Log output destinations are independent and you can configure them after enabling the information center. One log can be output to multiple destinations.
Default output rules for logs
A log output rule specifies the source modules and severity level of logs that can be output to a destination. Logs matching the output rule are output to the destination. Table 19 shows the default log output rules.
Destination |
Log source modules |
Output switch |
Severity |
Console |
All supported modules |
Enabled |
Debug |
Monitor terminal |
All supported modules |
Disabled |
Debug |
Log host |
All supported modules |
Enabled |
Informational |
Log buffer |
All supported modules |
Enabled |
Informational |
Log file |
All supported modules |
Enabled |
Informational |
Default output rules for diagnostic logs
Diagnostic logs can only be output to the diagnostic log file, and cannot be filtered by source modules and severity levels. Table 20 shows the default output rule for diagnostic logs.
Table 20 Default output rule for diagnostic logs
Destination |
Log source modules |
Output switch |
Severity |
Diagnostic log file |
All supported modules |
Enabled |
Debug |
Default output rules for security logs
Security logs can only be output to the security log file, and cannot be filtered by source modules and severity levels. Table 21 shows the default output rule for security logs.
Table 21 Default output rule for security logs
Destination |
Log source modules |
Output switch |
Severity |
Security log file |
All supported modules |
Disabled |
Debug |
Default output rules for hidden logs
Hidden logs can be output to the log host, the log buffer, and the log file. Table 22 shows the default output rules for hidden logs.
Table 22 Default output rules for hidden logs
Destination |
Log source modules |
Output switch |
Severity |
Log host |
All supported modules |
Enabled |
Informational |
Log buffer |
All supported modules |
Enabled |
Informational |
Log file |
All supported modules |
Enabled |
Informational |
Default output rules for trace logs
Trace logs can only be output to the trace log file, and cannot be filtered by source modules and severity levels. Table 23 shows the default output rules for trace logs.
Table 23 Default output rules for trace logs
Destination |
Log source modules |
Output switch |
Severity |
Trace log file |
All supported modules |
Enabled |
Debugging |
Log formats
The format of logs varies by output destinations. Table 24 shows the original format of log information, which might be different from what you see. The actual format varies by the log resolution tool used.
Output destination |
Format |
Example |
Console, monitor terminal, log buffer, or log file |
Prefix Timestamp Sysname Module/Level/Mnemonic: Content |
%Nov 24 14:21:43:502 2010 Sysname SHELL/5/SHELL_LOGIN: VTY logged in from 192.168.1.26 |
Log host |
Standard format, unicom format, or cmcc format |
See the info-center format command |
Table 25 describes the fields in a log message.
Table 25 Log field description
Field |
Description |
Prefix (information type) |
A log to a destination other than the log host has an identifier in front of the timestamp: · An identifier of percent sign (%) indicates a log with a level equal to or higher than informational. · An identifier of asterisk (*) indicates a debug log or a trace log. · An identifier of caret (^) indicates a diagnostic log. |
PRI (priority) |
A log destined to the log host has a priority identifier in front of the timestamp. The priority is calculated by using this formula: facility*8+level, where: · facility is the facility name. Facility names local0 through local7 correspond to values 16 through 23. The facility name can be configured using the info-center loghost command. It is used to identify log sources on the log host, and to query and filter the logs from specific log sources. · level is in the range of 0 to 7. See Table 18 for more information about severity levels. |
Timestamp |
Records the time when the log was generated. Logs sent to the log host and those sent to the other destinations have different timestamp precisions, and their timestamp formats are configured with different commands. For more information, see Table 26 and Table 27. |
Hostip |
Source IP address of the log. If the info-center loghost source command is configured, this field displays the IP address of the specified source interface. Otherwise, this field displays the sysname. This field exists only in logs in unicom format that are sent to the log host. |
Serial number |
Serial number of the device that generated the log. This field exists only in logs in unicom format that are sent to the log host. |
Sysname (host name or host IP address) |
The sysname is the host name or IP address of the device that generated the log. You can use the sysname command to modify the name of the device. |
%% (vendor ID) |
Identifies the vendor of the device that generated the log. This field exists only in logs sent to the log host. |
vv (version information) |
Identifies the version of the log. Its value is 10. This field exists only in logs that are sent to the log host. |
Module |
Specifies the name of the module that generated the log. You can enter the info-center source ? command in system view to view the module list. |
Level |
Identifies the level of the log. See Table 18 for more information about severity levels. |
Mnemonic |
Describes the content of the log. It contains a string of up to 32 characters. |
Source |
(In standalone mode.) Optional field that identifies the source of the log. The value contains a card slot number and the IP address of the log sender. (In IRF mode.) Optional field that identifies the source of the log. The value contains an IRF member device ID, a card slot number, and the IP address of the log sender. |
Content |
Provides the content of the log. |
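The PRI formula in Table 25 (facility * 8 + level, with local0 through local7 mapping to 16 through 23) can be worked in both directions. The helpers below are an illustrative sketch:

```python
# Map the local0-local7 facility names to their syslog facility values 16-23.
FACILITY_VALUES = {f"local{i}": 16 + i for i in range(8)}

def pri(facility: str, level: int) -> int:
    """Compute the syslog priority: facility * 8 + level."""
    return FACILITY_VALUES[facility] * 8 + level

def decode_pri(value: int):
    """Split a priority value back into (facility name, severity level)."""
    facility_value, level = divmod(value, 8)
    return f"local{facility_value - 16}", level
```

The <189> prefix in the iso timestamp example in Table 27 decodes to local7 (facility value 23) and level 5 (notification): 23 * 8 + 5 = 189.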
Table 26 Timestamp precisions and configuration commands
Item |
Destined to the log host |
Destined to the console, monitor terminal, log buffer, and log file |
Precision |
Seconds |
Milliseconds |
Command used to set the timestamp format |
info-center timestamp loghost |
info-center timestamp |
Table 27 Description of the timestamp parameters
Timestamp parameters |
Description |
Example |
boot |
Time that has elapsed since system startup, in the format of xxx.yyy. xxx represents the higher 32 bits, and yyy represents the lower 32 bits, of milliseconds elapsed. Logs that are sent to all destinations other than a log host support this parameter. |
%0.109391473 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully. 0.109391473 is a timestamp in the boot format. |
date |
Current date and time, in the format of mmm dd hh:mm:ss yyyy for logs that are output to a log host, or MMM DD hh:mm:ss:xxx YYYY for logs that are output to other destinations. All logs support this parameter. |
%May 30 05:36:29:579 2003 Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully. May 30 05:36:29:579 2003 is a timestamp in the date format. |
iso |
Timestamp format stipulated in ISO 8601. Only logs that are sent to a log host support this parameter. |
<189>2003-05-30T06:42:44 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp (192.168.1.23) has logged in successfully. 2003-05-30T06:42:44 is a timestamp in the iso format. |
none |
No timestamp is included. All logs support this parameter. |
% Sysname FTPD/5/FTPD_LOGIN: User ftp (192.168.1.23) has logged in successfully. No timestamp is included. |
no-year-date |
Current date and time without year information, in the format of MMM DD hh:mm:ss:xxx. Only logs that are sent to a log host support this parameter. |
<189>May 30 06:44:22 Sysname %%10FTPD/5/FTPD_LOGIN(l): User ftp (192.168.1.23) has logged in successfully. May 30 06:44:22 is a timestamp in the no-year-date format. |
FIPS compliance
The device supports the FIPS mode that complies with NIST FIPS 140-2 requirements. Support for features, commands, and parameters might differ in FIPS mode and non-FIPS mode. For more information about FIPS mode, see Security Configuration Guide.
Information center configuration task list
Tasks at a glance |
(Required.) Perform at least one of the following tasks: · Outputting logs to the console · Outputting logs to the monitor terminal · Outputting logs to log hosts · Outputting logs to the log buffer · Saving logs to the log file |
(Optional.) Managing security logs |
(Optional.) Saving diagnostic logs to the diagnostic log file |
(Optional.) Configuring the maximum size of the trace log file |
(Optional.) Setting the minimum storage period for log files and logs in the log buffer |
(Optional.) Enabling synchronous information output |
(Optional.) Enabling duplicate log suppression |
(Optional.) Configuring log suppression for a module |
(Optional.) Disabling an interface from generating link up or link down logs |
(Optional.) Enabling SNMP notifications for system logs |
Outputting logs to the console
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Configure an output rule for the console. |
info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity } |
For information about default output rules, see "Default output rules for logs." |
4. (Optional.) Configure the timestamp format. |
info-center timestamp { boot | date | none } |
By default, the timestamp format is date. |
5. Return to user view. |
quit |
N/A |
6. (Optional.) Enable log output to the console. |
terminal monitor |
The default setting is enabled. |
7. Enable the display of debug information on the current terminal. |
terminal debugging |
By default, the display of debug information is disabled on the current terminal. |
8. (Optional.) Set the lowest severity level of logs that can be output to the console. |
terminal logging level severity |
The default setting is 6 (informational). |
Outputting logs to the monitor terminal
Monitor terminals refer to terminals that log in to the device through the AUX, VTY, or TTY line.
To output logs to the monitor terminal:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Configure an output rule for the monitor terminal. |
info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity } |
For information about default output rules, see "Default output rules for logs." |
4. (Optional.) Configure the timestamp format. |
info-center timestamp { boot | date | none } |
The default timestamp format is date. |
5. Return to user view. |
quit |
N/A |
6. Enable log output to the monitor terminal. |
terminal monitor |
By default, log output to the monitor terminal is enabled. |
7. Enable the display of debug information on the current terminal. |
terminal debugging |
By default, the display of debug information is disabled on the current terminal. |
8. (Optional.) Set the lowest level of logs that can be output to the monitor terminal. |
terminal logging level severity |
The default setting is 6 (informational). |
Outputting logs to log hosts
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Configure an output rule for log hosts. |
info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity } |
For information about default output rules, see "Default output rules for logs." |
4. (Optional.) Specify the source IP address for output logs. |
info-center loghost source interface-type interface-number |
By default, the source IP address of output logs is the primary IP address of the outgoing interface. |
5. (Optional.) Specify the format in which logs are output to log hosts. |
info-center format { unicom | cmcc } |
By default, logs are output to log hosts in standard format. |
6. (Optional.) Configure the timestamp format. |
info-center timestamp loghost { date | iso [ with-timezone ] | no-year-date | none } |
The default timestamp format is date. |
7. Specify a log host and configure related parameters. |
info-center loghost [ vpn-instance vpn-instance-name ] { hostname | ipv4-address | ipv6 ipv6-address } [ port port-number ] [ facility local-number ] |
By default, no log hosts or related parameters are specified. The value for the port-number argument must be the same as the value configured on the log host. Otherwise, the log host cannot receive logs. You can specify a maximum of 20 log hosts. |
Outputting logs to the log buffer
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Enable log output to the log buffer. |
info-center logbuffer |
By default, log output to the log buffer is enabled. |
4. (Optional.) Set the maximum number of logs that can be stored in the log buffer. |
info-center logbuffer size buffersize |
By default, the log buffer can store 512 logs. |
5. Configure an output rule for the log buffer. |
info-center source { module-name | default } { console | monitor | logbuffer | logfile | loghost } { deny | level severity } |
For information about default output rules, see "Default output rules for logs." |
6. (Optional.) Configure the timestamp format. |
info-center timestamp { boot | date | none } |
The default timestamp format is date. |
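The steps above can be combined into a brief configuration sketch (the device name, module name, and values are examples only):
<Device> system-view
[Device] info-center enable
[Device] info-center logbuffer
[Device] info-center logbuffer size 1024
[Device] info-center source ftp logbuffer level warning
You can then use the display logbuffer command to view logs in the log buffer.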
Saving logs to the log file
By default, the log file feature saves logs from the log file buffer to the log file every 24 hours. You can adjust the saving interval or manually save logs to the log file. After saving logs to the log file, the system clears the log file buffer.
The device supports multiple log files. Each log file has a maximum capacity. The log files are named logfile1.log, logfile2.log, and so on.
When logfile1.log is full, the system compresses logfile1.log as logfile1.log.gz and creates a new log file named logfile2.log. The process repeats until the last log file is full.
After the last log file is full, the device repeats the following process:
1. The device locates the oldest compressed log file logfileX.log.gz and creates a new file using the same name (logfileX.log).
2. When logfileX.log is full, the device compresses the log file as logfileX.log.gz to replace the existing file logfileX.log.gz.
As a best practice, back up the log files regularly to avoid loss of important logs.
You can enable log file overwrite-protection to stop the device from saving new logs when the last log file is full or the storage device runs out of space.
To save logs to the log file:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Enable the log file feature. |
info-center logfile enable |
By default, the log file feature is enabled. |
4. (Optional.) Enable log file overwrite-protection. |
info-center logfile overwrite-protection [ all-port-powerdown ] |
By default, log file overwrite-protection is disabled. This feature is supported only in FIPS mode. |
5. (Optional.) Set the maximum size for the log file. |
info-center logfile size-quota size |
The default setting is 10 MB. To ensure normal operation, set the size argument to a value between 1 MB and 10 MB. |
6. (Optional.) Specify the directory to save the log file. |
info-center logfile directory dir-name |
The default log file directory is flash:/logfile. (In standalone mode.) This command cannot survive a reboot or an active/standby switchover. (In IRF mode.) This command cannot survive an IRF reboot or a global active/standby switchover in an IRF fabric. |
7. Save the logs in the log file buffer to the log file. |
· Configure the interval to perform the save operation: info-center logfile frequency freq-sec
· Manually save the logs in the log file buffer to the log file: logfile save |
The default log file saving interval is 86400 seconds. The logfile save command is available in any view. |
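For example, the following sketch enables the log file feature, sets the maximum log file size, and manually saves buffered logs (the device name and values are examples only):
<Device> system-view
[Device] info-center logfile enable
[Device] info-center logfile size-quota 10
[Device] info-center logfile directory flash:/logfile
[Device] quit
<Device> logfile save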
Managing security logs
Security logs are important for locating and troubleshooting network problems. Because security logs are typically output together with other logs, it is difficult to identify them among all logs.
To solve this problem, you can save security logs to the security log file without affecting the current log output rules.
Saving security logs to the security log file
After you enable the saving of the security logs to the security log file:
· The system first outputs security logs to the security log file buffer.
· The system saves logs from the security log file buffer to the security log file at the specified interval. (A user with the security-audit user role can also manually save security logs to the security log file.)
· After the security logs are saved, the buffer is cleared immediately.
The device supports only one security log file. To avoid security log loss, you can set an alarm threshold for the security log file usage. When the alarm threshold is reached, the system outputs a message to inform the administrator. The administrator can log in to the device with the security-audit user role and back up the security log file to prevent the loss of important data.
To save security logs to the security log file:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Enable the saving of the security logs to the security log file. |
info-center security-logfile enable |
By default, saving security logs to the security log file is disabled. |
4. Set the interval at which the system saves security logs. |
info-center security-logfile frequency freq-sec |
The default security log file saving interval is 86400 seconds. |
5. (Optional.) Set the maximum size for the security log file. |
info-center security-logfile size-quota size |
The default setting is 10 MB. |
6. (Optional.) Set the alarm threshold of the security log file usage. |
info-center security-logfile alarm-threshold usage |
By default, the alarm threshold of the security log file usage is 80. When the usage of the security log file reaches 80%, the system will inform the user. |
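For example, the following sketch saves security logs every hour and raises the alarm threshold to 90% (the device name and values are examples only):
<Device> system-view
[Device] info-center enable
[Device] info-center security-logfile enable
[Device] info-center security-logfile frequency 3600
[Device] info-center security-logfile size-quota 10
[Device] info-center security-logfile alarm-threshold 90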
Managing the security log file
To use the security log file management commands in this section, a local user must be authorized the security-audit user role. For information about configuring the security-audit user role, see Security Command Reference.
To manage the security log file:
Task |
Command |
Remarks |
Display a summary of the security log file. |
display security-logfile summary |
Available in user view. |
Change the directory of the security log file. |
a. system-view
b. info-center security-logfile directory dir-name |
The default security log file directory is flash:/seclog. (In standalone mode.) This command cannot survive a reboot or an active/standby switchover. (In IRF mode.) This command cannot survive an IRF reboot or a global active/standby switchover in an IRF fabric. |
Manually save all the contents in the security log file buffer to the security log file. |
security-logfile save |
Available in any view. |
Saving diagnostic logs to the diagnostic log file
By default, the device saves diagnostic logs from the diagnostic log file buffer to the diagnostic log file every 24 hours. You can adjust the saving interval or manually save diagnostic logs to the diagnostic log file. After saving diagnostic logs to the diagnostic log file, the system clears the diagnostic log file buffer.
The device supports only one diagnostic log file. The diagnostic log file has a maximum capacity. When the capacity is reached, the system replaces the oldest diagnostic logs with new logs.
To enable saving diagnostic logs to the diagnostic log file:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable the information center. |
info-center enable |
By default, the information center is enabled. |
3. Enable saving diagnostic logs to the diagnostic log file. |
info-center diagnostic-logfile enable |
By default, saving diagnostic logs to the diagnostic log file is enabled. |
4. (Optional.) Set the maximum size for the diagnostic log file. |
info-center diagnostic-logfile quota size |
The default setting is 10 MB. |
5. (Optional.) Specify the directory to save the diagnostic log file. |
info-center diagnostic-logfile directory dir-name |
The default diagnostic log file directory is flash:/diagfile. (In standalone mode.) This command cannot survive a reboot or an active/standby switchover. (In IRF mode.) This command cannot survive an IRF reboot or a global active/standby switchover in an IRF fabric. |
6. Save diagnostic logs in the diagnostic log file buffer to the diagnostic log file. |
· Configure the interval to perform the saving operation: info-center diagnostic-logfile frequency freq-sec
· Manually save diagnostic logs in the buffer to the diagnostic log file: diagnostic-logfile save |
The default saving interval is 86400 seconds. The diagnostic-logfile save command is available in any view. |
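For example (the device name and values are examples only):
<Device> system-view
[Device] info-center diagnostic-logfile enable
[Device] info-center diagnostic-logfile quota 10
[Device] info-center diagnostic-logfile directory flash:/diagfile
[Device] quit
<Device> diagnostic-logfile save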
Configuring the maximum size of the trace log file
The device has only one trace log file. When the trace log file is full, the device overwrites the oldest trace logs with new ones.
To set the maximum size for the trace log file:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Set the maximum size for the trace log file. |
info-center trace-logfile quota size |
By default, the maximum size of the trace log file is 1 MB. |
Setting the minimum storage period for log files and logs in the log buffer
Use this feature to set the minimum storage period for log files and logs in the log buffer.
Logs in the log buffer
By default, when the log buffer is full, new logs automatically overwrite the oldest logs. After the minimum storage period is set, the system uses a log's storage period to determine whether to delete the log. A log's storage period is the system's current time minus the log's generation time.
· If the storage period of a log is shorter than or equal to the minimum storage period, the system does not delete the log. The new log will not be saved.
· If the storage period of a log is longer than the minimum storage period, the system deletes the log to save the new log.
Log files
By default, when the last log file is full, the device locates the oldest compressed log file logfileX.log.gz and creates a new file using the same name (logfileX.log).
After the minimum storage period is set, the system checks the storage period of the compressed log file before creating a new log file with the same name. A log file's storage period is the system's current time minus the log file's last modification time.
· If the storage period of the compressed log file is shorter than or equal to the minimum storage period, the system stops saving new logs.
· If the storage period of the compressed log file is longer than the minimum storage period, the system creates a new file to save new logs.
For more information about log saving, see "Saving logs to the log file."
Configuration procedure
To set the minimum storage period for log files and logs in the log buffer:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Set the log minimum storage period. |
info-center syslog min-age min-age |
By default, the log minimum storage period is not set. |
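For example (the value 72 is an example only; see the command reference for the valid range and unit of the min-age argument):
<Device> system-view
[Device] info-center syslog min-age 72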
Enabling synchronous information output
System log output interrupts ongoing configuration operations, obscuring previously entered commands. Synchronous information output shows the obscured commands. It also provides a command prompt in command editing mode, or a [Y/N] string in interaction mode so you can continue your operation from where you were stopped.
To enable synchronous information output:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable synchronous information output. |
info-center synchronous |
By default, synchronous information output is disabled. |
Enabling duplicate log suppression
The output of consecutive duplicate logs at an interval of less than 30 seconds wastes system and network resources.
With this feature enabled, the system starts a suppression period upon outputting a log:
· During the suppression period, the system does not output logs that have the same module name, level, mnemonic, location, and text as the previous log.
· After the suppression period expires, if the same log continues to appear, the system outputs the suppressed logs and the log number and starts another suppression period. The suppression period is 30 seconds for the first time, 2 minutes for the second time, and 10 minutes for subsequent times.
· If a different log is generated during the suppression period, the system aborts the current suppression period, outputs suppressed logs and the log number and then the different log, and starts another suppression period.
To enable duplicate log suppression:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable duplicate log suppression. |
info-center logging suppress duplicates |
By default, duplicate log suppression is disabled. |
Configuring log suppression for a module
Perform this task to configure a log suppression rule to suppress output of all logs or logs with a specific mnemonic value for a module.
To configure a log suppression rule for a module:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure a log suppression rule for a module. |
info-center logging suppress module module-name mnemonic { all | mnemonic-content } |
By default, the device does not suppress output of any logs from any modules. |
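For example, to suppress output of all logs from the FTP module (the module name is an example; available module names depend on the device):
<Device> system-view
[Device] info-center logging suppress module ftp mnemonic all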
Disabling an interface from generating link up or link down logs
By default, all interfaces generate link up or link down log information when the interface state changes. In some cases, you might want to disable certain interfaces from generating this information. For example:
· You are concerned only about the states of some interfaces. In this case, you can use this function to disable other interfaces from generating link up and link down log information.
· An interface is unstable and continuously outputs log information. In this case, you can disable the interface from generating link up and link down log information.
Use the default setting in normal cases to avoid affecting interface status monitoring.
To disable an interface from generating link up or link down logs:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enter interface view. |
interface interface-type interface-number |
N/A |
3. Disable the interface from generating link up or link down logs. |
undo enable log updown |
By default, all interfaces generate link up and link down logs when the interface state changes. |
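For example (the interface name is an example only):
<Device> system-view
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] undo enable log updown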
Enabling SNMP notifications for system logs
This feature enables the device to send an SNMP notification for each log message it outputs. The device encapsulates the logs in SNMP notifications and then sends them to the SNMP module and the log trap buffer.
You can configure the SNMP module to send received SNMP notifications in SNMP traps or informs to remote hosts. For more information, see "Configuring SNMP."
To view the traps in the log trap buffer, access the MIB corresponding to the log trap buffer.
To enable SNMP notifications for system logs:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable SNMP notifications for system logs. |
snmp-agent trap enable syslog |
N/A |
3. Set the maximum number of traps that can be stored in the log trap buffer. |
info-center syslog trap buffersize buffersize |
By default, the log trap buffer can store a maximum of 1024 traps. |
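For example, to enable SNMP notifications for system logs and enlarge the log trap buffer (the value is an example only):
<Device> system-view
[Device] snmp-agent trap enable syslog
[Device] info-center syslog trap buffersize 2048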
Displaying and maintaining information center
Execute display commands in any view and reset commands in user view.
Task |
Command |
Display the information of each output destination. |
display info-center |
(In standalone mode.) Display the state and the log information of the log buffer. |
display logbuffer [ reverse ] [ level severity | size buffersize | slot slot-number ]* |
(In IRF mode.) Display the state and the log information of the log buffer. |
display logbuffer [ reverse ] [ level severity | size buffersize | chassis chassis-number slot slot-number ] * |
(In standalone mode.) Display a summary of the log buffer. |
display logbuffer summary [ level severity | slot slot-number ] * |
(In IRF mode.) Display a summary of the log buffer. |
display logbuffer summary [ level severity | chassis chassis-number slot slot-number ] * |
Display the log file configuration. |
display logfile summary |
Display the diagnostic log file configuration. |
display diagnostic-logfile summary |
Clear the log buffer. |
reset logbuffer |
Information center configuration examples
Configuration example for outputting logs to the console
Network requirements
Configure the device to output to the console FTP logs that have a severity level of at least warning.
Figure 83 Network diagram
Configuration procedure
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Disable log output to the console.
[Device] info-center source default console deny
To avoid output of unnecessary information, disable all modules from outputting log information to the specified destination (console in this example) before you configure the output rule.
# Configure an output rule to output to the console FTP logs that have a severity level of at least warning.
[Device] info-center source ftp console level warning
[Device] quit
# Enable the display of logs on the console. (This function is enabled by default.)
<Device> terminal logging level 6
<Device> terminal monitor
The current terminal is enabled to display logs.
Now, if the FTP module generates logs, the information center automatically sends the logs to the console, and the console displays the logs.
Configuration example for outputting logs to a UNIX log host
Network requirements
Configure the device to output to the UNIX log host FTP logs that have a severity level of at least informational.
Figure 84 Network diagram
Configuration procedure
Before the configuration, make sure the device and the log host can reach each other. (Details not shown.)
1. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify the log host 1.2.0.1/16 and specify local4 as the logging facility.
[Device] info-center loghost 1.2.0.1 facility local4
# Disable log output to the log host.
[Device] info-center source default loghost deny
To avoid output of unnecessary information, disable all modules from outputting logs to the specified destination (loghost in this example) before you configure an output rule.
# Configure an output rule to output to the log host FTP logs that have a severity level of at least informational.
[Device] info-center source ftp loghost level informational
2. Configure the log host:
The following configurations were performed on Solaris. Other UNIX operating systems have similar configurations.
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in directory /var/log/, and then create file info.log in the Device directory to save logs from Device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit the file syslog.conf in directory /etc/ and add the following contents.
# Device configuration messages
local4.info /var/log/Device/info.log
In this configuration, local4 is the name of the logging facility that the log host uses to receive logs. info is the informational level. The UNIX system records the log information that has a severity level of at least informational to the file /var/log/Device/info.log.
|
NOTE: Follow these guidelines while editing the file /etc/syslog.conf: · Comments must be on a separate line and must begin with a pound sign (#). · No redundant spaces are allowed after the file name. · The logging facility name and the severity level specified in the /etc/syslog.conf file must be identical to those configured on the device by using the info-center loghost and info-center source commands. Otherwise, the log information might not be output to the log host correctly. |
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd using the -r option to make the new configuration take effect.
# ps -ae | grep syslogd
147
# kill -HUP 147
# syslogd -r &
Now, the device can output FTP logs to the log host, which stores the logs to the specified file.
Configuration example for outputting logs to a Linux log host
Network requirements
Configure the device to output to the Linux log host 1.2.0.1/16 FTP logs that have a severity level of at least informational.
Figure 85 Network diagram
Configuration procedure
Before the configuration, make sure the device and the log host can reach each other. (Details not shown.)
1. Configure the device:
# Enable the information center.
<Device> system-view
[Device] info-center enable
# Specify the log host 1.2.0.1/16, and specify local5 as the logging facility.
[Device] info-center loghost 1.2.0.1 facility local5
# Disable log output to the log host.
[Device] info-center source default loghost deny
To avoid outputting unnecessary information, disable all modules from outputting log information to the specified destination (loghost in this example) before you configure an output rule.
# Configure an output rule to enable output to the log host FTP logs that have a severity level of at least informational.
[Device] info-center source ftp loghost level informational
2. Configure the log host:
The following configurations were performed on Linux. Other Linux distributions have similar configurations.
a. Log in to the log host as a root user.
b. Create a subdirectory named Device in the directory /var/log/, and create file info.log in the Device directory to save logs of Device.
# mkdir /var/log/Device
# touch /var/log/Device/info.log
c. Edit the file syslog.conf in directory /etc/ and add the following contents.
# Device configuration messages
local5.info /var/log/Device/info.log
In the above configuration, local5 is the name of the logging facility used by the log host to receive logs. info is the informational level. The Linux system will store the log information with a severity level equal to or higher than informational to the file /var/log/Device/info.log.
|
NOTE: Follow these guidelines while editing the file /etc/syslog.conf: · Comments must be on a separate line and must begin with a pound sign (#). · No redundant spaces are allowed after the file name. · The logging facility name and the severity level specified in the /etc/syslog.conf file must be identical to those configured on the device by using the info-center loghost and info-center source commands. Otherwise, the log information might not be output to the log host correctly. |
d. Display the process ID of syslogd, kill the syslogd process, and then restart syslogd by using the -r option to apply the new configuration.
Make sure the syslogd process is started with the -r option on a Linux log host.
# ps -ae | grep syslogd
147
# kill -9 147
# syslogd -r &
Now, the system can record log information to the specified file.
Configuring GOLD
Generic Online Diagnostics (GOLD) performs the following operations:
· Runs diagnostic tests on a device to inspect device ports, RAM, chip, connectivity, forwarding paths, and control paths for hardware faults.
· Reports the problems to the system.
GOLD diagnostics are divided into the following categories:
· Monitoring diagnostics—Run diagnostic tests periodically when the system is in operation and record test results. Monitoring diagnostics execute only non-disruptive tests.
· On-demand diagnostics—Enable you to manually start or stop diagnostic tests during system operation.
Each kind of diagnostics runs its own set of diagnostic tests. The parameters of a diagnostic test include the test name, type, description, attribute (disruptive or non-disruptive), default status, and execution interval.
Support for the diagnostic tests and the default values for a test's parameters depend on the device model. You can modify some of the parameters by using the commands described in this document.
The diagnostic tests are released with the system software image of the device. All enabled diagnostic tests run in the background. You can use the display commands to view test results and logs to verify hardware faults.
Configuring monitoring diagnostics
The system automatically executes monitoring diagnostic tests that are enabled by default after the device starts. Use the diagnostic monitor enable command to enable monitoring diagnostic tests that are disabled by default.
Use the diagnostic monitor interval command to configure an execution interval for each test. The interval you set must be no smaller than the minimum interval for each test. You can view the minimum interval for a test by using the display diagnostic content command.
(In standalone mode.) To configure monitoring diagnostics:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable monitoring diagnostics. |
diagnostic monitor enable slot slot-number-list [ test test-name ] |
By default, monitoring diagnostics are enabled. |
3. Set the execution interval. |
diagnostic monitor interval slot slot-number-list [ test test-name ] time interval |
By default, the settings for this command vary by test. Use the display diagnostic content command to view the execution interval for a test. |
(In IRF mode.) To configure monitoring diagnostics:
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable monitoring diagnostics. |
diagnostic monitor enable chassis chassis-number slot slot-number-list [ test test-name ] |
By default, monitoring diagnostics are enabled. |
3. Set the execution interval. |
diagnostic monitor interval chassis chassis-number slot slot-number-list [ test test-name ] time interval |
By default, the settings for this command vary by test. Use the display diagnostic content command to view the execution interval for a test. |
Configuring on-demand diagnostics
The diagnostic ondemand commands are effective only during the current system operation. These commands are restored to the default after you restart the device.
Starting a device startup check by using on-demand diagnostics
You can perform this task when the device starts up with the factory defaults and no copper cables or transceiver modules are installed in the service ports. After you perform this task, the system performs a startup check for the device and generates a report.
Follow these guidelines to perform this task:
· This feature is supported only in standalone mode.
· After you perform this task, you need to restart the device.
To start a device startup check by using on-demand diagnostics:
Task |
Command |
Remarks |
Start a device startup check by using on-demand diagnostics. |
diagnostic start test test-name |
Use this command in user view. |
Starting on-demand diagnostics during device operation
You can control on-demand diagnostic tests during device operation by using the following commands:
· Use the diagnostic ondemand stop command to immediately stop the test.
· Use the diagnostic ondemand repeating command to configure the number of executions for the test.
· Use the diagnostic ondemand failure command to configure the maximum number of failed tests before the system stops the test.
(In standalone mode.) To start on-demand diagnostics during device operation:
Task |
Command |
Remarks |
Configure the number of executions. |
diagnostic ondemand repeating repeating-number |
The default value for the repeating-number argument is 1. This command only applies to diagnostic tests to be enabled. |
Configure the number of failed tests. |
diagnostic ondemand failure failure-number |
By default, the maximum number of failed tests is not specified. Configure a number no larger than the configured repeating-number argument. This command only applies to diagnostic tests to be enabled. |
Enable on-demand diagnostics. |
diagnostic ondemand start slot slot-number-list test { test-name | non-disruptive } [ para parameters ] |
The system runs the tests according to the default configuration if you do not perform the first two configurations. |
(Optional.) Stop on-demand diagnostics. |
diagnostic ondemand stop slot slot-number-list test { test-name | non-disruptive } |
You can manually stop all on-demand diagnostic tests. |
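For example, the following sketch runs all non-disruptive tests on slot 1 five times, stopping early if two executions fail (the values are examples only):
<Sysname> diagnostic ondemand repeating 5
<Sysname> diagnostic ondemand failure 2
<Sysname> diagnostic ondemand start slot 1 test non-disruptive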
(In IRF mode.) To start on-demand diagnostics during device operation:
Task |
Command |
Remarks |
Configure the number of executions. |
diagnostic ondemand repeating repeating-number |
The default value for the repeating-number argument is 1. This command only applies to diagnostic tests to be enabled. |
Configure the number of failed tests. |
diagnostic ondemand failure failure-number |
By default, the maximum number of failed tests is not specified. Configure a number no larger than the configured repeating-number argument. This command only applies to diagnostic tests to be enabled. |
Enable on-demand diagnostics. |
diagnostic ondemand start chassis chassis-number slot slot-number-list test { test-name | non-disruptive } [ para parameters ] |
By default, all on-demand diagnostic tests must be manually enabled. |
(Optional.) Stop on-demand diagnostics. |
diagnostic ondemand stop chassis chassis-number slot slot-number-list test { test-name | non-disruptive } |
N/A |
Simulating test results
Test simulation verifies GOLD framework functionality. When you use the diagnostic simulation commands to simulate a diagnostic test, only part of the test code is executed to generate a test result. Test simulation does not trigger hardware corrective actions such as device restart and active/standby switchover.
To simulate a test:
Task |
Command |
Remarks |
Simulate a test. |
In standalone mode: In IRF mode: |
By default, the system runs a test instead of simulating it. |
Configuring the log buffer size
GOLD saves test results in the form of logs. You can use the display diagnostic event-log command to view the logs.
To configure the GOLD log buffer size:
Task |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Configure the maximum number of GOLD logs that can be saved. |
diagnostic event-log size number |
By default, GOLD saves 512 log entries at most. When the number of logs exceeds the configured log buffer size, the system deletes the oldest entries. |
Displaying and maintaining GOLD
(In standalone mode.) Execute display commands in any view and reset commands in user view.
Task |
Command |
Display test content. |
display diagnostic content [ slot slot-number ] [ verbose ] |
Display GOLD logs. |
display diagnostic event-log [ error | info ] |
Display configurations of on-demand diagnostics. |
display diagnostic ondemand configuration |
Display test results. |
display diagnostic result [ slot slot-number [ test test-name ] ] [ verbose ] |
Display statistics for packet-related tests. |
display diagnostic result [ slot slot-number [ test test-name ] ] statistics |
Display configurations for simulated tests. |
display diagnostic simulation [ slot slot-number ] |
Clear GOLD logs. |
reset diagnostic event-log |
Clear test results. |
reset diagnostic result [ slot slot-number [ test test-name ] ] |
(In IRF mode.) Execute display commands in any view and reset commands in user view.
Task |
Command |
Display test content. |
display diagnostic content [ chassis chassis-number [ slot slot-number ] ] [ verbose ] |
Display GOLD logs. |
display diagnostic event-log [ error | info ] |
Display configurations of on-demand diagnostics. |
display diagnostic ondemand configuration |
Display test results. |
display diagnostic result [ chassis chassis-number [ slot slot-number [ test test-name ] ] ] [ verbose ] |
Display statistics for packet-related tests. |
display diagnostic result [ chassis chassis-number [ slot slot-number [ test test-name ] ] ] statistics |
Display configurations for simulated tests. |
display diagnostic simulation [ chassis chassis-number [ slot slot-number ] ] |
Clear GOLD logs. |
reset diagnostic event-log |
Clear test results. |
reset diagnostic result [ chassis chassis-number [ slot slot-number [ test test-name ] ] ] |
GOLD configuration example (in standalone mode)
Network requirements
Enable monitoring diagnostic test PortMonitor on slot 1, and set its execution interval to 1 minute.
Configuration procedure
# View the default status and execution interval of the test on slot 1.
<Sysname> display diagnostic content slot 1 verbose
Diagnostic test suite attributes:
#B/*: Bootup test/NA
#O/*: Ondemand test/NA
#M/*: Monitoring test/NA
#D/*: Disruptive test/Non-disruptive test
#P/*: Per port test/NA
#A/I/*: Monitoring test is active/Monitoring test is inactive/NA
Slot 1 cpu 0:
Test name : PortMonitor
Test attributes : **M*PI
Test interval : 00:00:10
Min interval : 00:00:10
Correct-action : -NA-
Description : A Real-time test, disabled by default that checks link status between ports.
# Enable test PortMonitor on slot 1.
<Sysname> system-view
[Sysname] diagnostic monitor enable slot 1 test PortMonitor
# Set the execution interval to 1 minute.
[Sysname] diagnostic monitor interval slot 1 test PortMonitor time 0:1:0
Verifying the configuration
# View the test configuration.
[Sysname] display diagnostic content slot 1 verbose
Diagnostic test suite attributes:
#B/*: Bootup test/NA
#O/*: Ondemand test/NA
#M/*: Monitoring test/NA
#D/*: Disruptive test/Non-disruptive test
#P/*: Per port test/NA
#A/I/*: Monitoring test is active/Monitoring test is inactive/NA
Slot 1 cpu 0:
Test name : PortMonitor
Test attributes : **M*PA
Test interval : 00:01:00
Min interval : 00:00:10
Correct-action : -NA-
Description : A Real-time test, disabled by default that checks link status between ports.
# View the test result.
[Sysname] display diagnostic result slot 1 verbose
Slot 1 cpu 0:
Test name : PortMonitor
Total run count : 1247
Total failure count : 0
Consecutive failure count: 0
Last execution time : Tue Dec 25 18:09:21 2012
First failure time : -NA-
Last failure time : -NA-
Last pass time : Tue Dec 25 18:09:21 2012
Last execution result : Success
Last failure reason : -NA-
Next execution time : Tue Dec 25 18:10:21 2012
Port link status : Normal
Configuring the packet capture
Overview
The packet capture feature captures incoming packets that are to be processed by the CPU. The feature can display the captured packets in real time and save them to a .pcap file for future analysis.
Packet capture modes
You can configure packet capture for only one user at a time.
Local packet capture
Local packet capture saves the captured packets to a local file or to a remote file on an FTP server, or displays the captured packets at the CLI.
Remote packet capture
Remote packet capture displays the captured packets on a Wireshark client. Before using remote packet capture, you must install the Wireshark software on a PC and connect the PC to the device that will capture packets. The device sends the captured packet data to the Wireshark client for display.
Feature image-based packet capture
|
NOTE: · To use this mode, you must install the packet capture feature image by using the boot-loader, install, or issu command. For more information, see Fundamentals Configuration Guide. · If you uninstall the feature image, the remote and local packet capture are also uninstalled. |
Feature image-based packet capture saves the captured packets to a local .pcap or .pcapng file or displays the captured packets on the terminal.
Filter elements
Packet capture supports capture filters and display filters. You can use expressions to match packets to capture or display.
A capture or display filter contains a keyword string or multiple keyword strings that are connected by operators.
Keywords include the following types:
· Qualifiers—Fixed keyword strings. For example, you must use the ip qualifier to specify the IPv4 protocol.
· Variables—Values supplied by users in the required format. For example, you can set an IP address to 2.2.2.2 or any other valid values.
A variable must be modified by one or multiple qualifiers. For example, to capture any packets sent from the host at 2.2.2.2, use the filter src host 2.2.2.2.
Operators include the following types:
· Logical operators—Perform logical operations, such as the AND operation.
· Arithmetic operators—Perform arithmetic operations, such as the ADD operation.
· Relational operators—Indicate the relation between keyword strings. For example, the = operator indicates equality.
This document provides basic information about these elements. For more information about capture and display filters, go to the following websites:
· http://wiki.wireshark.org/CaptureFilters.
· http://wiki.wireshark.org/DisplayFilters.
Capture filter keywords
Table 28 and Table 29 describe the qualifiers and variables for capture filters, respectively.
Table 28 Qualifiers for capture filters
Category |
Description |
Examples |
Protocol |
Matches a protocol. If you do not specify a protocol qualifier, the filter matches any supported protocols. |
· arp—Matches ARP. · icmp—Matches ICMP. · ip—Matches IPv4. · ip6—Matches IPv6. · tcp—Matches TCP. · udp—Matches UDP. |
Direction |
Matches packets based on their source or destination location (an IP address or port number). If you do not specify a direction qualifier, the src or dst qualifier applies. |
· src—Matches the source IP address field. · dst—Matches the destination IP address field. · src or dst—Matches the source or destination IP address field. NOTE: The src or dst qualifier applies if you do not specify a direction qualifier. For example, port 23 is equivalent to src or dst port 23. |
Type |
Specifies the direction type. |
· host—Matches the IP address of a host. · net—Matches an IP subnet. · port—Matches a service port number. · portrange—Matches a service port range. NOTE: The host qualifier applies if you do not specify any type qualifier. For example, src 2.2.2.2 is equivalent to src host 2.2.2.2. To specify an IPv6 subnet, you must specify the net qualifier. |
Others |
Any other qualifiers than the previously described qualifiers. |
· broadcast—Matches broadcast packets. · multicast—Matches multicast and broadcast packets. · less—Matches packets that are less than or equal to a specific size. · greater—Matches packets that are greater than or equal to a specific size. · len—Matches the packet length. · vlan—Matches VLAN packets. |
|
NOTE: The broadcast, multicast, and all protocol qualifiers cannot modify variables. |
Table 29 Variable types for capture filters
Variable type |
Description |
Examples |
|
Integer |
Represented in binary, octal, decimal, or hexadecimal notation. |
The port 23 expression matches traffic sent to or from port number 23. |
|
Integer range |
Represented by hyphenated integers. |
The portrange 100-200 expression matches traffic sent to or from any ports in the range of 100 to 200. |
|
IPv4 address |
Represented in dotted decimal notation. |
The src 1.1.1.1 expression matches traffic sent from the IPv4 host at 1.1.1.1. |
|
IPv6 address |
Represented in colon hexadecimal notation. |
The dst host 1::1 expression matches traffic sent to the IPv6 host at 1::1. |
|
IPv4 subnet |
Represented by an IPv4 network ID or an IPv4 address with a mask. |
Both of the following expressions match traffic sent to or from the IPv4 subnet 1.1.1.0/24: · src 1.1.1. · src net 1.1.1.0/24. |
|
IPv6 network segment |
Represented by an IPv6 address with a prefix length. |
The dst net 1::/64 expression matches traffic sent to the IPv6 network 1::/64. |
|
Capture filter operators
Capture filters support logical operators (Table 30), arithmetic operators (Table 31), and relational operators (Table 32). Logical operators can use both alphanumeric and nonalphanumeric symbols. The arithmetic and relational operators can use only nonalphanumeric symbols.
Logical operators are left associative. They group from left to right. The not operator has the highest priority. The and and or operators have the same priority.
Table 30 Logical operators for capture filters
Nonalphanumeric symbol |
Alphanumeric symbol |
Description |
! |
not |
Reverses the result of a condition. Use this operator to capture traffic that matches the opposite value of a condition. For example, to capture non-HTTP traffic, use not port 80. |
&& |
and |
Joins two conditions. Use this operator to capture traffic that matches both conditions. For example, to capture non-HTTP traffic that is sent to or from 1.1.1.1, use host 1.1.1.1 and not port 80. |
|| |
or |
Joins two conditions. Use this operator to capture traffic that matches either of the conditions. For example, to capture traffic that is sent to or from 1.1.1.1 or 2.2.2.2, use host 1.1.1.1 or host 2.2.2.2. |
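As an informal check of how these operators combine, the following Python sketch (with hypothetical packet attributes, not device code) mirrors the host 1.1.1.1 and not port 80 example; as in the capture filter syntax, not binds tighter than and:

```python
# Mirrors the capture filter "host 1.1.1.1 and not port 80".
# "not" has the highest priority, so the filter groups as
# (host 1.1.1.1) and (not (port 80)).
def matches(packet):
    return (packet["addr"] == "1.1.1.1") and not (packet["port"] == 80)

telnet_pkt = {"addr": "1.1.1.1", "port": 23}
http_pkt = {"addr": "1.1.1.1", "port": 80}
print(matches(telnet_pkt))  # True: host matches and port is not 80
print(matches(http_pkt))    # False: port 80 traffic is excluded
```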
Table 31 Arithmetic operators for capture filters
Nonalphanumeric symbol |
Description |
+ |
Adds two values. |
- |
Subtracts one value from another. |
* |
Multiplies one value by another. |
/ |
Divides one value by another. |
& |
Returns the result of the bitwise AND operation on two integral values in binary form. |
| |
Returns the result of the bitwise OR operation on two integral values in binary form. |
<< |
Performs the bitwise left shift operation on the operand to the left of the operator. The right-hand operand specifies the number of bits to shift. |
>> |
Performs the bitwise right shift operation on the operand to the left of the operator. The right-hand operand specifies the number of bits to shift. |
[ ] |
Specifies a byte offset relative to a protocol layer. This offset indicates the byte where the matching begins. You must enclose the offset value in the brackets and specify a protocol qualifier. For example, ip[6] matches the seventh byte of the IPv4 header (the byte that is six bytes away from the beginning of the header). |
Table 32 Relational operators for capture filters
Nonalphanumeric symbol |
Description |
= |
Equal to. For example, ip[6]=0x1c matches an IPv4 packet if the seventh byte of its header is equal to 0x1c. |
!= |
Not equal to. For example, len!=60 matches a packet if its length is not equal to 60 bytes. |
> |
Greater than. For example, len>100 matches a packet if its length is greater than 100 bytes. |
< |
Less than. For example, len<100 matches a packet if its length is less than 100 bytes. |
>= |
Greater than or equal to. For example, len>=100 matches a packet if its length is greater than or equal to 100 bytes. |
<= |
Less than or equal to. For example, len<=100 matches a packet if its length is less than or equal to 100 bytes. |
Display filter keywords
Table 33 and Table 34 describe the qualifiers and variables for display filters, respectively.
Table 33 Qualifiers for display filters
Category |
Description |
Examples |
Protocol |
Matches a protocol. |
· eth—Matches Ethernet. · ftp—Matches FTP. · http—Matches HTTP. · icmp—Matches ICMP. · ip—Matches IPv4. · ipv6—Matches IPv6. · tcp—Matches TCP. · telnet—Matches Telnet. · udp—Matches UDP. |
Packet field |
Matches a field in packets by using a dotted string in the protocol.field[.level1-subfield]…[.leveln-subfield] format. |
· tcp.flags.syn—Matches the SYN bit in the flags field of TCP. · tcp.port—Matches the source or destination port field. |
|
NOTE: The protocol qualifiers cannot modify variables. |
Table 34 Variable types for display filters
Variable type |
Description |
Integer |
Represented in binary, octal, decimal, or hexadecimal notation. For example, to display IP packets that are less than or equal to 1500 bytes, use one of the following expressions: · ip.len le 1500. · ip.len le 02734. · ip.len le 0x5dc. |
Boolean |
This variable type has two values: true or false. This variable type applies if you use a packet field string alone to identify the presence of a field in a packet. · If the field is present, the match result is true. The filter displays the packet. · If the field is not present, the match result is false. The filter does not display the packet. For example, to display TCP packets that contain the SYN field, use tcp.flags.syn. |
MAC address (six bytes) |
Uses colons (:), dots (.), or hyphens (-) to break up the MAC address into two or four segments. For example, to display packets that contain a destination MAC address of ffff.ffff.ffff, use one of the following expressions: · eth.dst==ff:ff:ff:ff:ff:ff. · eth.dst==ff-ff-ff-ff-ff-ff. · eth.dst==ffff.ffff.ffff. |
IPv4 address |
Represented in dotted decimal notation. For example: · To display IPv4 packets that are sent to or from 192.168.0.1, use ip.addr==192.168.0.1. · To display IPv4 packets that are sent to or from 129.111.0.0/16, use ip.addr==129.111.0.0/16. |
IPv6 address |
Represented in colon hexadecimal notation. For example: · To display IPv6 packets that are sent to or from 1::1, use ipv6.addr==1::1. · To display IPv6 packets that are sent to or from 1::/64, use ipv6.addr==1::/64. |
String |
Character string. For example, to display HTTP packets that contain the string HTTP/1.1 in the request version field, use http.request.version=="HTTP/1.1". |
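As a quick sanity check of the integer notations above, Python accepts the same octal and hexadecimal forms (octal is written with a 0o prefix rather than a leading 0):

```python
# 1500 bytes expressed in three of the notations from the table above.
decimal_len = 1500
octal_len = 0o2734   # octal 2734
hex_len = 0x5DC      # hexadecimal 5dc
print(decimal_len == octal_len == hex_len)  # True
```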
Display filter operators
Display filters support logical operators (Table 35) and relational operators (Table 36). Both operator types can use alphanumeric and nonalphanumeric symbols.
Logical operators are left associative. They group from left to right. Table 35 displays logical operators by priority, from the highest to the lowest. The and and or operators have the same priority.
Table 35 Logical operators for display filters
Nonalphanumeric symbol |
Alphanumeric symbol |
Description |
[ ] |
No alphanumeric symbol is available. |
Used with protocol qualifiers. For more information, see "The proto[…] expression." |
! |
not |
Displays packets that do not match the condition connected to this operator. |
&& |
and |
Joins two conditions. Use this operator to display traffic that matches both conditions. |
|| |
or |
Joins two conditions. Use this operator to display traffic that matches either of the conditions. |
Table 36 Relational operators for display filters
Nonalphanumeric symbol |
Alphanumeric symbol |
Description |
== |
eq |
Equal to. For example, ip.src==10.0.0.5 displays packets with the source IP address as 10.0.0.5. |
!= |
ne |
Not equal to. For example, ip.src!=10.0.0.5 displays packets whose source IP address is not 10.0.0.5. |
> |
gt |
Greater than. For example, frame.len>100 displays frames with a length greater than 100 bytes. |
< |
lt |
Less than. For example, frame.len<100 displays frames with a length less than 100 bytes. |
>= |
ge |
Greater than or equal to. For example, frame.len ge 0x100 displays frames with a length greater than or equal to 256 bytes. |
<= |
le |
Less than or equal to. For example, frame.len le 0x100 displays frames with a length less than or equal to 256 bytes. |
Building a capture filter
This section provides the most commonly used expression types for capture filters.
Logical expression
Use this type of expression to capture packets that match the result of logical operations.
Logical expressions contain keywords and logical operators. For example:
· not port 23 and not port 22—Captures packets whose port number is neither 23 nor 22.
· port 23 or icmp—Captures packets with a port number 23 or ICMP packets.
In a logical expression, a qualifier can modify more than one variable connected by its nearest logical operator. For example, to capture packets sourced from IPv4 address 192.168.56.1 or IPv4 network 192.168.27, use either of the following expressions:
· src 192.168.56.1 or 192.168.27.
· src 192.168.56.1 or src 192.168.27.
The expr relop expr expression
Use this type of expression to capture packets that match the result of arithmetic operations.
This expression contains keywords, arithmetic operators (expr), and relational operators (relop). For example, len+100>=200 captures packets that are greater than or equal to 100 bytes.
The proto [ expr:size ] expression
Use this type of expression to capture packets that match the result of arithmetic operations on a number of bytes relative to a protocol layer.
This type of expression contains the following elements:
· proto—Specifies a protocol layer.
· []—Performs arithmetic operations on a number of bytes relative to the protocol layer.
· expr—Specifies the byte offset relative to the protocol layer. The arithmetic operation is performed on the bytes at this offset.
· size—Specifies the number of bytes to operate on, starting at the offset: 1, 2, or 4. The size is 1 byte if you do not specify it.
For example, ip[0]&0xf !=5 captures an IP packet if the result of ANDing the first byte with 0x0f is not 5.
To match a field, you can specify a field name for expr:size. For example, icmp[icmptype]=0x08 captures ICMP packets that contain a value of 0x08 in the Type field.
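To make the byte-offset arithmetic concrete, the following Python sketch (an illustration, not device code) applies the same test as ip[0]&0xf !=5 to raw IPv4 header bytes; the low nibble of byte 0 is the IHL field, the header length in 32-bit words:

```python
# ip[0]&0xf !=5 matches IPv4 packets whose header is not the minimal
# 20 bytes (IHL != 5), that is, packets carrying IP options.
def matches_ip0_and_0xf_ne_5(ip_header: bytes) -> bool:
    ihl = ip_header[0] & 0x0F      # header length in 32-bit words
    return ihl != 5                # True when options are present

minimal = bytes([0x45]) + bytes(19)       # version 4, IHL 5 (20-byte header)
with_options = bytes([0x46]) + bytes(23)  # version 4, IHL 6 (24-byte header)
print(matches_ip0_and_0xf_ne_5(minimal))       # False
print(matches_ip0_and_0xf_ne_5(with_options))  # True
```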
The vlan vlan_id expression
Use this type of expression to capture 802.1Q tagged VLAN traffic.
This type of expression contains the vlan vlan_id keywords and logical operators. The vlan_id variable is an integer that specifies a VLAN ID. For example, vlan 1 and ip6 captures IPv6 packets in VLAN 1.
To capture 802.1Q tagged traffic, you must use the vlan vlan_id expression prior to any other expressions. An expression matches untagged packets if it does not follow a vlan vlan_id expression. For example:
· vlan 1 and !tcp—Captures VLAN 1-tagged non-TCP packets.
· icmp and vlan 1—Intended to capture VLAN 1-tagged ICMP packets, but the icmp condition precedes vlan 1 and therefore matches untagged packets. This expression does not capture any packets, because no packet can be both tagged and untagged.
Building a display filter
This section provides the most commonly used expression types for display filters.
Logical expression
Use this type of expression to display packets that match the result of logical operations.
Logical expressions contain keywords and logical operators. For example, ftp or icmp displays all FTP packets and ICMP packets.
Relational expression
Use this type of expression to display packets that match the result of comparison operations.
Relational expressions contain keywords and relational operators. For example, ip.len<=28 displays IP packets that contain a value of 28 or fewer bytes in the length field.
Packet field expression
Use this type of expression to display packets that contain a specific field.
Packet field expressions contain only packet field strings. For example, tcp.flags.syn displays all TCP packets that contain the SYN bit field.
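Modeling a decoded packet as nested dictionaries (a hypothetical representation, not the device's internals), a presence test such as tcp.flags.syn can be sketched as:

```python
# A packet field expression is a presence test: the packet is displayed
# if the dotted field path exists in the decoded packet.
def field_present(packet: dict, path: str) -> bool:
    node = packet
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True

syn_pkt = {"tcp": {"flags": {"syn": 1}}}   # hypothetical decoded TCP SYN
dns_pkt = {"udp": {"srcport": 53}}         # hypothetical decoded UDP packet
print(field_present(syn_pkt, "tcp.flags.syn"))  # True: field exists
print(field_present(dns_pkt, "tcp.flags.syn"))  # False: no TCP layer
```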
The proto[…] expression
Use this type of expression to display packets that contain specific field values.
This type of expression contains the following elements:
· proto—Specifies a protocol layer or packet field.
· […]—Matches a number of bytes relative to a protocol layer or packet field. Values for the bytes to be matched must be a hexadecimal integer string. The expression in brackets can use the following formats:
· [n:m]—Matches a total of m bytes after an offset of n bytes from the beginning of the specified protocol layer or field. To match only one byte, you can use either the [n] or the [n:1] format. For example, eth.src[0:3]==00:00:83 matches an Ethernet frame if the first three bytes of its source MAC address are 0x00, 0x00, and 0x83. The eth.src[2]==83 expression matches an Ethernet frame if the third byte of its source MAC address is 0x83.
· [n-m]—Matches a total of (m-n+1) bytes, starting from the (n+1)th byte relative to the beginning of the specified protocol layer or packet field. For example, eth.src[1-2]==00:83 matches an Ethernet frame if the second and third bytes of its source MAC address are 0x00 and 0x83, respectively.
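The two slice forms can be illustrated in Python, where a MAC address is six raw bytes; [n:m] takes m bytes at offset n, and [n-m] takes bytes n through m inclusive (the address below is hypothetical):

```python
def slice_colon(field: bytes, n: int, m: int) -> bytes:
    """[n:m] form: m bytes starting at offset n."""
    return field[n:n + m]

def slice_range(field: bytes, n: int, m: int) -> bytes:
    """[n-m] form: bytes n through m inclusive (zero-based)."""
    return field[n:m + 1]

src_mac = bytes.fromhex("000083a1b2c3")    # hypothetical source MAC address
print(slice_colon(src_mac, 0, 3).hex(":")) # 00:00:83
print(slice_range(src_mac, 1, 2).hex(":")) # 00:83
```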
Packet capture configuration task list
Tasks at a glance |
Remarks |
Perform one of the tasks: · Configuring local packet capture · Configuring remote packet capture · Configuring feature image-based packet capture |
N/A |
Configuring local packet capture
Perform this task to capture incoming packets and save the captured packets to a local file or to a remote file on an FTP server. To display the captured packets on the local device, use the packet-capture read command. To display the captured packets on the FTP server, use the Wireshark client connected to the FTP server. To stop the capture while it is capturing packets, use the packet-capture stop command.
To configure local packet capture:
Task |
Command |
Remarks |
Configure local packet capture. |
packet-capture local interface interface-type interface-number [ capture-filter capt-expression | limit-frame-size bytes | autostop filesize kilobytes | autostop duration seconds ] * write { filepath | url url [ username username [ password { cipher | simple } string ] ] } |
After this command is executed, you can still execute other commands at the CLI. Doing so does not affect the packet capture. |
Configuring remote packet capture
Task |
Command |
Remarks |
Configure remote packet capture. |
packet-capture remote interface interface-type interface-number [ port port ] |
To stop the capture while it is capturing packets, use the packet-capture stop command. |
To display the captured packets, perform the following tasks:
1. Connect the Wireshark client to the device that captures packets.
2. Start Wireshark and select Capture > Options.
3. Select Remote from the Interface list.
4. Enter the IP address of the remote interface and the RPCAP service port number on the window that appears, and click OK.
Make sure the interface IP address is reachable to the Wireshark client. If you do not specify the RPCAP service port number, the default port number 2002 is used.
5. Click Start.
Figure 86 Configuring Wireshark capture options
Configuring feature image-based packet capture
Saving captured packets to a file
Perform this task to capture incoming packets on an interface and save the captured packets to a file. To display the captured packets, use the packet-capture read command. To stop the capture while it is capturing packets, press Ctrl+C. There might be a delay for the capture to stop because of heavy traffic.
To save the captured packets to a file:
Task |
Command |
Remarks |
Save the captured packets to a file. |
packet-capture interface interface-type interface-number [ capture-filter capt-expression | limit-captured-frames limit | limit-frame-size bytes | autostop filesize kilobytes | autostop duration seconds | autostop files numbers | capture-ring-buffer filesize kilobytes | capture-ring-buffer duration seconds | capture-ring-buffer files numbers ] * write filepath [ raw | { brief | verbose } ] * |
After this command is executed, you cannot configure other commands from the CLI until the capture completes capturing packets or it is stopped. |
Filtering packet data to display
To stop the capture while it is capturing packets, press Ctrl+C. There might be a delay for the capture to stop because of heavy traffic.
To filter packet data to display:
Task |
Command |
Remarks |
Filter packet data to display. |
packet-capture interface interface-type interface-number [ capture-filter capt-expression | display-filter disp-expression | limit-captured-frames limit | limit-frame-size bytes | autostop duration seconds ] * [ raw | { brief | verbose } ] * |
After this command is executed, you cannot configure other commands from the CLI until the capture completes capturing packets or it is stopped. |
Displaying the contents in a packet file
Task |
Command |
Remarks |
Display the contents in a packet file. |
packet-capture read filepath [ display-filter disp-expression ] [ raw | { brief | verbose } ] * |
To stop displaying the contents, press Ctrl+C. |
Displaying and maintaining packet capture
Execute display commands in any view.
Task |
Command |
Display the packet capture status. |
display packet-capture status |
Packet capture configuration examples
Remote packet capture configuration example
Network requirements
As shown in Figure 87, capture packets on HundredGigE 1/0/1 and use Wireshark to display the captured packets.
Configuration procedure
1. Configure remote packet capture on HundredGigE 1/0/1 and specify the RPCAP service port number as 2014.
<Device> packet-capture remote interface hundredgige 1/0/1 port 2014
2. Display captured packets on the PC:
a. Start Wireshark and select Capture > Options.
b. Select Remote from the Interface list.
c. Enter an IP address of 10.1.1.1 and a port number of 2014, and click OK.
d. Click Start.
The captured packets are displayed on the page that appears.
Figure 88 Displaying the captured packets in Wireshark
Filtering packet data to display configuration example
Network requirements
On Switch A, capture the following incoming IP packets on HundredGigE 1/0/1:
· Packets forwarded through the CPU.
· Packets that are sourced from 192.168.56.1 and forwarded through chips.
Configuration procedure
# Create an IPv4 advanced ACL to match packets that are sourced from 192.168.56.1.
<SwitchA> system-view
[SwitchA] acl advanced 3000
[SwitchA-acl-ipv4-adv-3000] rule permit ip source 192.168.56.1 0
[SwitchA-acl-ipv4-adv-3000] quit
# Configure a traffic behavior to mirror traffic to the CPU.
[SwitchA] traffic behavior behavior1
[SwitchA-behavior-behavior1] mirror-to cpu
[SwitchA-behavior-behavior1] quit
# Configure a traffic class to use the ACL to match traffic.
[SwitchA] traffic classifier classifier1
[SwitchA-classifier-class1] if-match acl 3000
[SwitchA-classifier-class1] quit
# Associate the traffic class with the traffic behavior in a QoS policy.
[SwitchA] qos policy user1
[SwitchA-qospolicy-user1] classifier classifier1 behavior behavior1
[SwitchA-qospolicy-user1] quit
# Apply the QoS policy to the incoming traffic of HundredGigE 1/0/1.
[SwitchA] interface hundredgige 1/0/1
[SwitchA-HundredGigE1/0/1] qos apply policy user1 inbound
[SwitchA-HundredGigE1/0/1] quit
[SwitchA] quit
# Capture incoming traffic on HundredGigE 1/0/1.
<SwitchA> packet-capture interface hundredgige 1/0/1
Capturing on 'HundredGigE1/0/1'
1 0.000000 192.168.56.1 -> 192.168.56.2 TCP 62 6325 > telnet [SYN] Seq=0 Win
=65535 Len=0 MSS=1460 SACK_PERM=1
2 0.000061 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=1 Ack
=1 Win=65535 Len=0
3 0.024370 192.168.56.1 -> 192.168.56.2 TELNET 60 Telnet Data ...
4 0.024449 192.168.56.1 -> 192.168.56.2 TELNET 78 Telnet Data ...
5 0.025766 192.168.56.1 -> 192.168.56.2 TELNET 65 Telnet Data ...
6 0.035096 192.168.56.1 -> 192.168.56.2 TELNET 60 Telnet Data ...
7 0.047317 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=434 Win=65102 Len=0
8 0.050994 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=436 Win=65100 Len=0
9 0.052401 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=438 Win=65098 Len=0
10 0.057736 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=440 Win=65096 Len=0
10 packets captured
Saving captured packets to a file configuration example
Network requirements
On Device A, capture 10 incoming packets on HundredGigE 1/0/1, save the packets to a packet file, and display contents in the file.
Configuration procedure
# Capture packets on HundredGigE 1/0/1. Set the maximum number of captured packets to 10. Save the packets to the file flash:/a.pcap.
<DeviceA> packet-capture interface hundredgige 1/0/1 limit-captured-frames 10 write flash:/a.pcap
Capturing on 'HundredGigE1/0/1'
10
# Display the contents in the packet file.
<DeviceA> packet-capture read flash:/a.pcap
1 0.000000 192.168.56.1 -> 192.168.56.2 TCP 62 6325 > telnet [SYN] Seq=0 Win
=65535 Len=0 MSS=1460 SACK_PERM=1
2 0.000061 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=1 Ack
=1 Win=65535 Len=0
3 0.024370 192.168.56.1 -> 192.168.56.2 TELNET 60 Telnet Data ...
4 0.024449 192.168.56.1 -> 192.168.56.2 TELNET 78 Telnet Data ...
5 0.025766 192.168.56.1 -> 192.168.56.2 TELNET 65 Telnet Data ...
6 0.035096 192.168.56.1 -> 192.168.56.2 TELNET 60 Telnet Data ...
7 0.047317 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=434 Win=65102 Len=0
8 0.050994 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=436 Win=65100 Len=0
9 0.052401 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=438 Win=65098 Len=0
10 0.057736 192.168.56.1 -> 192.168.56.2 TCP 60 6325 > telnet [ACK] Seq=42 Ac
k=440 Win=65096 Len=0
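The a.pcap file produced above follows the classic pcap layout: a 24-byte global header followed by per-packet records. The following Python sketch (with hypothetical values; linktype 1 denotes Ethernet) builds such a header and parses it back, as a tool reading capture files might do:

```python
import struct

# Classic little-endian pcap global header:
# magic, version major/minor, timezone, sigfigs, snaplen, linktype.
def pcap_global_header(snaplen=65535, linktype=1):
    return struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, snaplen, linktype)

def parse_pcap_header(data: bytes):
    magic, major, minor, _tz, _sig, snaplen, linktype = struct.unpack_from(
        "<IHHiIII", data)
    assert magic == 0xA1B2C3D4, "not a little-endian classic pcap file"
    return {"version": (major, minor), "snaplen": snaplen,
            "linktype": linktype}

hdr = parse_pcap_header(pcap_global_header())
print(hdr["version"], hdr["snaplen"], hdr["linktype"])  # (2, 4) 65535 1
```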
Configuring VCF fabric
Overview
IT infrastructure, which comprises clouds, networks, and terminal devices, is undergoing a profound transformation. As terminal devices become more intelligent and mobile, the IT infrastructure is migrating to the cloud to implement elastic expansion of computing resources and provide IT services on demand. In this context, H3C developed the Virtual Converged Framework (VCF) technology. VCF breaks the boundaries between the networking, cloud management, and terminal platforms, and converts the IT infrastructure into a converged framework that accommodates all applications. It also implements automated network provisioning and deployment.
VCF fabric topology
In a VCF fabric, a device has one of the following roles:
· Spine node—Connects to leaf nodes.
· Leaf node—As shown in Figure 89, a leaf node connects to a server or host in a typical data center network. As shown in Figure 90, a leaf node connects to an access node in a typical campus network.
· Access node—Connects to an upstream leaf node and downstream terminal devices.
· Border node—Located at the border of a VCF fabric to provide access to the external network.
Spine nodes and leaf nodes form a large Layer 2 network, which can be a VLAN, a VXLAN with a centralized IP gateway, or a VXLAN with distributed IP gateways. For more information about centralized IP gateways and distributed IP gateways, see VXLAN Configuration Guide.
Figure 89 VCF fabric topology for a data center network
Figure 90 VCF fabric topology for a campus network
Neutron concepts and components
Neutron is a component in the OpenStack architecture. It provides networking services for VMs, manages virtual network resources (including networks, subnets, DHCP services, and virtual routers), and creates an isolated virtual network for each tenant. Neutron provides a unified network resource model, based on which the VCF fabric is implemented.
The following are basic concepts in Neutron:
· Network—A virtual object that can be created. It provides an independent network for each tenant in a multitenant environment. A network is equivalent to a switch with virtual ports that can be dynamically created and deleted.
· Subnet—An address pool that contains a group of IP addresses. Two different subnets communicate with each other through a router.
· Port—A connection port. A router or a VM connects to a network through a port.
· Router—A virtual router that can be created and deleted. It performs routing selection and data forwarding.
Neutron has the following components:
· Neutron server—Includes the daemon process neutron-server and multiple plug-ins (neutron-*-plugin). The Neutron server provides APIs and forwards the API calls to the configured plug-in. The plug-in maintains configuration data and the relations between routers, networks, subnets, and ports in the Neutron database.
· Plugin agent (neutron-*-agent)—Processes data packets on virtual networks. The choice of plug-in agents depends on Neutron plug-ins. A plug-in agent interacts with the Neutron server and the configured Neutron plug-in through a message queue.
· DHCP agent (neutron-dhcp-agent)—Provides DHCP services for tenant networks.
· L3 agent (neutron-l3-agent)—Provides Layer 3 forwarding services to enable inter-tenant communication and external network access.
Neutron deployment
Neutron needs to be deployed on servers and network devices.
The following table shows Neutron deployment on a server.
Node |
Neutron components |
Controller node |
· Neutron server · Neutron DB · Message server (such as RabbitMQ server) · H3C ML2 Driver (For more information about H3C ML2 Driver, see H3C Neutron ML2 Driver Installation Guide.) |
Network node |
· neutron-openvswitch-agent · neutron-dhcp-agent |
Compute node |
· neutron-openvswitch-agent · LLDP |
The following table shows Neutron deployments on a network device.
Network type |
Network device |
Neutron components |
Centralized gateway deployment |
Spine |
· neutron-l2-agent · neutron-l3-agent |
Leaf |
neutron-l2-agent |
|
Distributed gateway deployment |
Spine |
N/A |
Leaf |
· neutron-l2-agent · neutron-l3-agent |
Figure 91 Example of Neutron deployment for centralized gateway deployment
Figure 92 Example of Neutron deployment for distributed gateway deployment
Automated VCF fabric provisioning and deployment
VCF provides the following features to ease deployment:
· Automatic topology discovery.
· Automated underlay network provisioning.
· Automated overlay network deployment.
· Dynamic display of topology changes by using the H3C VCF fabric director.
Topology discovery
In a VCF fabric, each device uses LLDP to collect local topology information from directly-connected peer devices. The local topology information includes connection interfaces, roles, MAC addresses, and management interface addresses of the peer devices. The local topology information is formatted and then uploaded to the Neutron database to form a topology for the entire network.
If multiple spine nodes exist in a VCF fabric, a master spine node is specified to collect the topology for the entire network.
Automated underlay network provisioning
An underlay network is a physical Layer 3 network. An overlay network is a virtual network built on top of the underlay network. The mainstream overlay technology is VXLAN. For more information about VXLAN, see VXLAN Configuration Guide.
Automated underlay network provisioning sets up a Layer 3 underlay network for users. It is implemented by automatically executing configurations (such as IRF configuration and Layer 3 reachability configurations) in user-defined template files.
Provisioning prerequisites
Before you start automated underlay network provisioning, complete the following tasks:
1. Finish the underlay network planning (such as IP address assignment, reliability design, and routing deployment) based on user requirements.
2. Configure the DHCP server and the TFTP server.
3. Upload startup image files to the TFTP server. Skip this step if all devices in the VCF fabric use the same startup image files and no software update is required.
4. Create template files for all device roles based on the topology type and upload the template files to the TFTP server. A template file is a file that ends with the .template file extension. The following are different template types:
◦ Template for a leaf node in a VLAN.
◦ Template for a leaf node in a VXLAN with a centralized gateway.
◦ Template for a leaf node in a VXLAN with distributed gateways.
◦ Template for a spine node in a VLAN.
◦ Template for a spine node in a VXLAN with a centralized gateway.
◦ Template for a spine node in a VXLAN with distributed gateways.
Process of automated underlay network provisioning
The device finishes automated underlay network provisioning as follows:
1. Starts up with factory configuration.
2. Obtains an IP address, the IP address of the TFTP server, and a template file name in the networktype.template format from the DHCP server.
3. Downloads a template file (named networktype_role.template) that corresponds with its device role from the TFTP server.
4. Parses the template file and executes the configurations in the template file.
|
NOTE:
· If the template file contains software version information, the device first compares that version with its current software version. If the two versions are inconsistent, the device downloads the new startup image files to perform a software upgrade. After restarting, the device executes the configurations in the template file.
· If the template file does not include IRF configurations, the device does not save the configurations after executing all configurations in the template file. To save the configurations, use the save command. |
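On a running device, the same provisioning can also be triggered manually by downloading a template file and then specifying it with the vcf-fabric underlay autoconfigure command. A minimal sketch, assuming a TFTP server at 10.11.113.19 and a hypothetical template file named vxlan_leaf.template:

```
<Device> tftp 10.11.113.19 get vxlan_leaf.template
<Device> system-view
[Device] vcf-fabric underlay autoconfigure vxlan_leaf.template
```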
Template file
A template file contains the following contents:
· System-predefined variables—The variable names cannot be edited, and the variable values are set by the VCF topology discovery feature.
· User-defined variables—The variable names and values are defined by the user. These variables include the username and password used to establish a connection with the RabbitMQ server, network type, and so on. The following are examples of user-defined variables:
#USERDEF
_underlayIPRange = 10.100.0.0/16
_master_spine_mac = 1122-3344-5566
_backup_spine_mac = aabb-ccdd-eeff
_username = aaa
_password = aaa
_rbacUserRole = network-admin
_neutron_username = openstack
_neutron_password = 12345678
_neutron_ip = 172.16.1.136
_loghost_ip = 172.16.1.136
_network_type = centralized-vxlan
· Static configurations—Static configurations are independent from the VCF fabric topology and can be directly executed. The following are examples of static configurations:
#STATICCFG
#
clock timezone beijing add 08:00:00
#
lldp global enable
#
stp global enable
#
· Dynamic configurations—Dynamic configurations are dependent on the VCF fabric topology. The device first obtains the topology information and then executes dynamic configurations. The following are examples of dynamic configurations:
#
interface $$_underlayIntfDown
port link-mode route
ip address unnumbered interface LoopBack0
ospf 1 area 0.0.0.0
ospf network-type p2p
lldp management-address arp-learning
lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0
#
Automated overlay network deployment
Automated overlay network deployment covers VXLAN deployment and EVPN deployment.
Automated overlay network deployment is mainly implemented through the following features of Neutron:
· Layer 2 agent (L2 agent)—Responds to OpenStack events such as network creation, subnet creation, and port creation. It deploys Layer 2 networking to provide Layer 2 connectivity within a virtual network and Layer 2 isolation between different virtual networks.
· Layer 3 agent (L3 agent)—Responds to OpenStack events such as virtual router creation, interface creation, and gateway configuration. It deploys the IP gateways to provide Layer 3 forwarding services for VMs.
For the device to correctly communicate with the Neutron server through the RabbitMQ server, you must configure the parameters related to the RabbitMQ server, including the IP address of the RabbitMQ server, the username and password used to log in to the RabbitMQ server, and the listening port.
Configuration restrictions and guidelines
Typically, the device completes both automated underlay network provisioning and automated overlay deployment by downloading and executing the template file. You do not need to manually configure the device by using commands. If the device needs to complete only automated overlay network deployment, you can use related commands in "Enabling VCF fabric topology discovery" and "Configuring automated overlay network deployment." No template file is required.
VCF fabric configuration task list
Tasks at a glance |
(Required.) Enabling VCF fabric topology discovery |
(Optional.) Configuring automated underlay network provisioning |
(Optional.) Configuring automated overlay network deployment |
Enabling VCF fabric topology discovery
Configuration restrictions and guidelines
VCF fabric topology discovery can be automatically enabled by executing configurations in the template file or be manually enabled at the CLI. The device uses LLDP to collect topology data of directly-connected devices. Make sure you have enabled LLDP on the device before you manually enable VCF fabric topology discovery.
Configuration procedure
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable LLDP globally. |
lldp global enable |
By default, LLDP is enabled globally. |
3. Enable VCF fabric topology discovery. |
vcf-fabric topology enable |
By default, VCF fabric topology discovery is disabled. |
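The steps above can be sketched as the following CLI session (device name is an example):

```
<DeviceA> system-view
[DeviceA] lldp global enable
[DeviceA] vcf-fabric topology enable
```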
Configuring automated underlay network provisioning
Configuration restrictions and guidelines
When you configure automated underlay network provisioning, follow these restrictions and guidelines:
· Automated underlay network provisioning completes automatically after the device starts up. To change the automated underlay network provisioning on a running device, download the new template file to the device through TFTP. Then, execute the vcf-fabric underlay autoconfigure command to manually specify the template file on the device.
· As a best practice, do not modify the network type or the device role while the device is running. If it is necessary to do so, make sure you understand the impacts on the network and services.
· If you change the role of the device, the new role takes effect after the device restarts.
Configuration procedure
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. (Optional.) Specify the role of the device in the VCF fabric. |
vcf-fabric role { access | leaf | spine } |
By default, the device is a spine node. |
3. Specify the template file for automated underlay network provisioning. |
vcf-fabric underlay autoconfigure template |
By default, no template file is specified for automated underlay network provisioning. |
4. (Optional.) Configure the device as a master spine node. |
vcf-fabric spine-role master |
By default, the device is not a master spine node. |
5. (Optional.) Enable Neutron and enter Neutron view. |
neutron |
By default, Neutron is disabled. |
6. (Optional.) Specify the network type. |
network-type { centralized-vxlan | distributed-vxlan | vlan } |
By default, the network type is VLAN. |
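As an example, the procedure above might look as follows on a leaf node in a centralized VXLAN network (the template file name vxlan_leaf.template is an assumption for illustration; the role change takes effect after a restart):

```
<DeviceB> system-view
[DeviceB] vcf-fabric role leaf
[DeviceB] vcf-fabric underlay autoconfigure vxlan_leaf.template
[DeviceB] neutron
[DeviceB-neutron] network-type centralized-vxlan
```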
Configuring automated overlay network deployment
Configuration restrictions and guidelines
When you configure automated overlay network deployment, follow these restrictions and guidelines:
· If the network type is VLAN or VXLAN with a centralized IP gateway, perform this task on both the spine node and the leaf nodes.
· If the network type is VXLAN with distributed IP gateways, perform this task on all leaf nodes.
· Make sure the RabbitMQ server settings on the device are the same as those on the controller node.
· Multiple virtual hosts might exist on the RabbitMQ server. Each virtual host can independently provide RabbitMQ services for the device. For the device to correctly communicate with the Neutron server, specify the same virtual host on the device and the Neutron server.
Configuration procedure
Step |
Command |
Remarks |
1. Enter system view. |
system-view |
N/A |
2. Enable Neutron and enter Neutron view. |
neutron |
By default, Neutron is disabled. |
3. Specify the IPv4 address of the RabbitMQ server. |
rabbit host ip ipv4-address [ vpn-instance vpn-instance-name ] |
By default, the IPv4 address of the RabbitMQ server is not specified. |
4. Configure the username used by the device to establish a connection with the RabbitMQ server. |
rabbit user username |
By default, the device uses username guest to establish a connection with the RabbitMQ server. |
5. Configure the password used by the device to establish a connection with the RabbitMQ server. |
rabbit password { cipher | plain } string |
By default, the device uses plaintext password guest to establish a connection with the RabbitMQ server. |
6. Configure the port number used by the device to communicate with the RabbitMQ server. |
rabbit port port-number |
By default, the device uses port number 5672 to communicate with the RabbitMQ server. |
7. Specify a virtual host to provide RabbitMQ services. |
rabbit virtual-host hostname |
By default, the virtual host / provides RabbitMQ services for the device. |
8. Specify the username and password used by the device to deploy configurations through RESTful. |
restful user username password { cipher | plain } password |
By default, no username or password is configured for the device to deploy configurations through RESTful. |
9. (Optional.) Enable the Layer 2 agent. |
l2agent enable |
By default, the Layer 2 agent is disabled. |
10. (Optional.) Enable the Layer 3 agent. |
l3agent enable |
By default, the Layer 3 agent is disabled. |
11. (Optional.) Configure export targets for a tenant VPN instance. |
vpn-target target export-extcommunity |
By default, no export targets are configured for a tenant VPN instance. |
12. (Optional.) Configure import route targets for a tenant VPN instance. |
vpn-target target import-extcommunity |
By default, no import route targets are configured for a tenant VPN instance. |
13. (Optional.) Specify the IPv4 address of the border gateway. |
gateway ip ipv4-address |
By default, the IPv4 address of the border gateway is not specified. |
14. (Optional.) Configure the device as a border node. |
border enable |
By default, the device is not a border node. |
15. (Optional.) Enable local proxy ARP. |
proxy-arp enable |
By default, local proxy ARP is disabled. |
16. (Optional.) Configure the MAC address of VSI interfaces. |
vsi-mac mac-address |
By default, no MAC address is configured for VSI interfaces. |
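The core of the procedure above can be sketched as the following CLI session. The RabbitMQ address, username, and password reuse the example values from the template-file variables shown earlier; substitute the settings of your controller node:

```
<DeviceA> system-view
[DeviceA] neutron
[DeviceA-neutron] rabbit host ip 172.16.1.136
[DeviceA-neutron] rabbit user openstack
[DeviceA-neutron] rabbit password plain 12345678
[DeviceA-neutron] restful user aaa password plain 12345678
[DeviceA-neutron] l2agent enable
[DeviceA-neutron] l3agent enable
```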
Displaying and maintaining VCF fabric
Execute display commands in any view.
Task |
Command |
Display VCF fabric topology information. |
display vcf-fabric topology |
Display information about automated underlay network provisioning. |
display vcf-fabric underlay autoconfigure |
Automated VCF fabric configuration example
Network requirements
As shown in Figure 93, Devices A, B, and C all connect to the TFTP server and the DHCP server through management Ethernet interfaces. VM 1 resides on the controller node. VM 2 resides on the compute node. The controller node runs OpenStack Kilo on the Ubuntu 14.04 LTS operating system.
Configure a VCF fabric to meet the following requirements:
· The VCF fabric is a VXLAN network deployed on spine node Device A and leaf nodes Device B and Device C to provide connectivity between VM 1 and VM 2. Device A acts as a centralized VXLAN IP gateway.
· Devices A, B, and C complete automated underlay network provisioning by using template files after they start up.
· Devices A, B, and C complete automated overlay network deployment after the controller is configured.
· The DHCP server dynamically assigns IP addresses on subnet 10.11.113.0/24.
Configuration procedure
Configuring the DHCP server
Perform the following tasks on the DHCP server:
1. Configure a DHCP address pool to dynamically assign IP addresses on subnet 10.11.113.0/24 to the devices.
2. Specify the IP address of the TFTP server as 10.11.113.19/24.
3. Specify a template file as the boot file. A template file is named in the networktype.template format, for example, "vxlan.template".
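If the DHCP server runs ISC dhcpd (an assumption for illustration; any DHCP server that can supply a TFTP server address and a boot file name will do), the tasks above might map to a configuration like the following:

```
subnet 10.11.113.0 netmask 255.255.255.0 {
  range 10.11.113.100 10.11.113.200;   # addresses assigned to the devices
  next-server 10.11.113.19;            # TFTP server address
  filename "vxlan.template";           # boot file in the networktype.template format
}
```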
Creating template files
Create template files and upload them to the TFTP server.
Typically, a template file includes the following contents:
· System-predefined variables—Internally used by the system. User-defined variables cannot be the same as system-predefined variables.
· User-defined variables—Defined by the user. User-defined variables include the following:
◦ Basic settings: Local username and password, user role, and so on.
◦ Neutron server settings: IP address of the Neutron server, the username and password for establishing a connection with the Neutron server, and so on.
· Software images for upgrade and the URL to download the software images.
· Configuration commands—Include commands independent from the topology (such as LLDP, NTP, and SNMP) and commands dependent on the topology (such as interfaces and Neutron settings).
Configuring the TFTP server
Place the template files on the TFTP server. In this example, both spine node and leaf node exist on the VXLAN network, so two template files (vxlan_spine.template and vxlan_leaf.template) are required.
Powering up Device A, Device B, and Device C
After starting up with factory configuration, Device A, Device B, and Device C each automatically downloads a template file to finish automated underlay network provisioning.
Configuring the controller node
1. Install OpenStack Neutron related components:
a. Install Neutron, Image, Dashboard, Networking, and RabbitMQ.
b. Install H3C ML2 Driver. For more information, see H3C Neutron ML2 Driver Installation Guide.
c. Configure LLDP.
2. Configure the network as a VXLAN network:
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file as follows:
a. Add the h3c_vxlan type driver to the type driver list.
type_drivers = h3c_vxlan
b. Add h3c to the mechanism driver list.
mechanism_drivers = openvswitch,h3c
c. Specify h3c_vxlan as the default tenant network type.
tenant_network_types = h3c_vxlan
d. Add the [ml2_type_h3c_vxlan] section, and specify a VXLAN ID range in the format of vxlan-id1:vxlan-id2. The value range for VXLAN IDs is 0 to 16777215.
[ml2_type_h3c_vxlan]
vni_ranges = 10000:60000
3. Configure the database:
Before you configure the database, make sure you have configured the Neutron server.
[openstack@localhost ~]$ sudo h3c_config db_sync
4. Restart the Neutron server:
[root@localhost ~]# service neutron-server restart
5. Create a network named Network.
6. Create subnets:
# Create a subnet named subnet-1 and assign network address range 10.10.1.0/24 to the subnet. (Details not shown.)
# Create a subnet named subnet-2, and assign network address range 10.1.1.0/24 to the subnet. (Details not shown.)
In this example, VM 1 and VM 2 obtain IP addresses from the DHCP server. You must enable DHCP for the subnets.
7. Create a router named router. Bind a port on the router with subnet subnet-1 and then bind another port with subnet subnet-2. (Details not shown.)
8. Create VMs:
# Create VM 1 on the controller node. (Details not shown.)
# Create VM 2 on the compute node. (Details not shown.)
In this example, VM 1 and VM 2 obtain IP addresses 10.10.1.3/24 and 10.1.1.3/24 from the DHCP server, respectively.
Verifying the configuration
Verifying the collected topology of the underlay network
# Display VCF fabric topology information on Device A.
[DeviceA] display vcf-fabric topology
Topology Information
----------------------------------------------------------------------------------
* indicates the master spine role among all spines
SpineIP Interface Link LeafIP Status
*10.11.113.51 HundredGigE1/0/1 Up 10.11.113.52 Running
HundredGigE1/0/2 Down -- --
HundredGigE1/0/3 Down -- --
HundredGigE1/0/4 Down -- --
HundredGigE1/0/5 Up 10.11.113.53 Running
HundredGigE1/0/6 Down -- --
HundredGigE1/0/7 Down -- --
HundredGigE1/0/8 Down -- --
Verifying the automated configuration for the underlay network
# Display information about automated underlay network provisioning on Device A.
[DeviceA] display vcf-fabric underlay autoconfigure
success command:
#
system
clock timezone beijing add 08:00:00
#
system
lldp global enable
#
system
stp global enable
#
system
ospf 1
graceful-restart ietf
area 0.0.0.0
#
system
interface LoopBack0
#
system
ip vpn-instance global
route-distinguisher 1:1
vpn-target 1:1 import-extcommunity
#
system
l2vpn enable
#
system
vxlan tunnel mac-learning disable
vxlan tunnel arp-learning disable
#
system
ntp-service enable
ntp-service unicast-peer 10.11.113.136
#
system
netconf soap http enable
netconf soap https enable
restful http enable
restful https enable
#
system
ip http enable
ip https enable
#
system
telnet server enable
#
system
info-center loghost 10.11.113.136
#
system
local-user aaa
password ******
service-type telnet http https
service-type ssh
authorization-attribute user-role network-admin
#
system
line vty 0 63
authentication-mode scheme
user-role network-admin
#
system
bgp 100
graceful-restart
address-family l2vpn evpn
undo policy vpn-target
#
system
vcf-fabric topology enable
#
system
neutron
rabbit user openstack
rabbit password ******
rabbit host ip 10.11.113.136
restful user aaa password ******
network-type centralized-vxlan
vpn-target 1:1 export-extcommunity
l2agent enable
l3agent enable
#
system
snmp-agent
snmp-agent community read public
snmp-agent community write private
snmp-agent sys-info version all
#interface up-down:
HundredGigE1/0/1
HundredGigE1/0/5
Loopback0 IP Allocation:
DEV_MAC LOOPBACK_IP MANAGE_IP STATE
a43c-adae-0400 10.100.16.17 10.11.113.53 up
a43c-9aa7-0100 10.100.16.15 10.11.113.51 up
a43c-a469-0300 10.100.16.16 10.11.113.52 up
IRF Allocation:
Self Bridge Mac: a43c-9aa7-0100
IRF Status: No
Member List: [1]
bgp configure peer:
10.100.16.17
10.100.16.16
Verifying the automated deployment for the overlay network
# Display the running configuration for the current VSI on Device A.
[DeviceA] display current-configuration configuration vsi
#
vsi vxlan10071
gateway vsi-interface 8190
vxlan 10071
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
#
return
[DeviceA] display current-configuration interface Vsi-interface
#
interface Vsi-interface8190
ip binding vpn-instance neutron-1024
ip address 11.1.1.1 255.255.255.0 sub
ip address 10.10.1.1 255.255.255.0 sub
#
return
[DeviceA] display ip vpn-instance
Total VPN-Instances configured : 1
VPN-Instance Name RD Create time
neutron-1024 1024:1024 2016/03/12 00:25:59
Verifying the connectivity between VM 1 and VM 2
# Ping VM 2 on the compute node from VM 1 on the controller node.
$ ping 10.1.1.3
Ping 10.1.1.3 (10.1.1.3): 56 data bytes, press CTRL_C to break
56 bytes from 10.1.1.3: icmp_seq=0 ttl=254 time=10.000 ms
56 bytes from 10.1.1.3: icmp_seq=1 ttl=254 time=4.000 ms
56 bytes from 10.1.1.3: icmp_seq=2 ttl=254 time=4.000 ms
56 bytes from 10.1.1.3: icmp_seq=3 ttl=254 time=3.000 ms
56 bytes from 10.1.1.3: icmp_seq=4 ttl=254 time=3.000 ms
--- Ping statistics for 10.1.1.3 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 3.000/4.800/10.000/2.638 ms
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP dynamic associations max, 89
NTP local clock as reference source, 90
NTP multicast association mode, 80
NTP multicast association mode (on switch), 97
NTP multicast client, 80
NTP multicast mode authentication, 86
NTP multicast server, 80
NTP optional parameters, 88
NTP symmetric active/passive association mode, 79, 93
NTP symmetric active/passive mode authentication, 83
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
packet capture, 307, 314, 317
packet capture (remote), 317
packet capture file save, 320
packet capture filtered data display, 318
port mirroring, 233
port mirroring remote destination group monitor port, 226
port mirroring remote destination group remote probe VLAN, 226
remote packet capture, 315
remote port mirroring destination group, 226
remote port mirroring source group, 227
remote port mirroring source group egress port, 229
remote port mirroring source group reflector port, 228
remote port mirroring source group remote probe VLAN, 230
remote port mirroring source group source CPU, 228
remote port mirroring source group source ports, 227
RMON, 131
RMON alarm, 134, 137
RMON Ethernet statistics group, 135
RMON history group, 136
RMON statistics, 133
sampler, 216
sampler (IPv4 NetStream), 216
sFlow, 277, 280
sFlow agent+collector information, 278
sFlow counter sampling, 279
sFlow flow sampling, 278
SNMP, 115
SNMP basics, 117
SNMP logging, 123
SNMP notification, 124
SNMPv1, 126
SNMPv1 basics, 117
SNMPv1 host notification send, 124
SNMPv2c, 126
SNMPv2c basics, 117
SNMPv2c host notification send, 124
SNMPv3 basics, 119
SNMPv3 host notification send, 124
SNTP, 77, 111, 113
SNTP authentication, 112
VCF fabric, 327
VXLAN-aware NetStream, 255
console
information center log output, 288
information center log output configuration, 297
content
packet file content display, 317
control plane
flow mirroring QoS policy application, 245
controlling
NTP access control rights, 81
RMON history control entry, 133
CPU
flow mirroring configuration, 243, 245
Layer 3 remote port mirroring local group source CPU, 232
local port mirroring (source CPU mode), 234
creating
Event MIB event, 142
local port mirroring group, 222
port mirroring remote destination group, 226
port mirroring remote source group, 227
RMON Ethernet statistics entry, 133
RMON history control entry, 133
sampler, 216
D
data
feature image-based packet capture data display filter, 317
IPv6 NetStream analyzer (NDA), 263
IPv6 NetStream data export, 270
IPv6 NetStream data export (aggregation), 264, 271, 274
IPv6 NetStream data export (traditional), 264, 270, 272
IPv6 NetStream data export attribute configuration, 267
IPv6 NetStream export format, 265
IPv6 NetStream exporter (NDE), 263
NETCONF configuration data retrieval (all modules), 168
NETCONF configuration data retrieval (Syslog module), 170
NETCONF data entry retrieval (interface table), 171
NETCONF filtering (column-based), 180
NETCONF filtering (column-based) (conditional match), 182
NETCONF filtering (column-based) (full match), 180
NETCONF filtering (column-based) (regex match), 181
NETCONF filtering (conditional match), 184
NETCONF filtering (regex match), 183
NETCONF filtering (table-based), 180
NetStream data export, 249, 256
NetStream data export (aggregation), 249, 257
NetStream data export (traditional), 249, 256
NetStream data export attribute configuration, 253
NetStream data export configuration (aggregation), 260
NetStream data export configuration (traditional), 258
NetStream data export format, 253
debugging
feature module, 6
system, 5
system maintenance, 1
default
information center log default output rules, 284
system information default output rules (diagnostic log), 284
system information default output rules (hidden log), 285
system information default output rules (security log), 284
system information default output rules (trace log), 285
destination
information center system logs, 284
port mirroring, 218
port mirroring destination device, 218
determining
ping address reachability, 1
device
feature image-based packet capture configuration, 316
feature image-based packet capture file save, 316
GOLD configuration, 301, 305
GOLD diagnostics (monitoring), 301
GOLD diagnostics (on-demand), 302
information center configuration, 283, 288, 297
information center log output configuration (console), 297
information center log output configuration (Linux log host), 299
information center log output configuration (UNIX log host), 298
information center system log types, 283
IPv6 NetStream data export (aggregation), 274
IPv6 NetStream data export (traditional), 272
IPv6 NTP multicast association mode (on switch), 100
Layer 2 remote port mirroring (egress port), 225, 238
Layer 2 remote port mirroring (reflector port), 225, 235
Layer 2 remote port mirroring configuration, 224
Layer 3 remote port mirroring configuration, 230, 240
Layer 3 remote port mirroring local group, 231
Layer 3 remote port mirroring local group monitor port, 232
Layer 3 remote port mirroring local group source CPU, 232
Layer 3 remote port mirroring local group source port, 231
local packet capture configuration, 315
local port mirroring (source CPU mode), 234
local port mirroring (source port mode), 233
local port mirroring configuration, 222
local port mirroring group monitor port, 223
local port mirroring group source CPU, 223
NETCONF capability exchange, 160
NETCONF CLI operations, 185, 186
NETCONF configuration, 155, 158
NETCONF configuration lock/unlock, 163, 164
NETCONF device management, 157
NETCONF edit-config operation, 167
NETCONF get/get-bulk operation, 165
NETCONF get-config/get-bulk-config operation, 167
NETCONF information retrieval, 187
NETCONF save-point/begin operation, 174
NETCONF save-point/commit operation, 175
NETCONF save-point/end operation, 176
NETCONF save-point/get-commit-information operation, 177
NETCONF save-point/get-commits operation, 176
NETCONF save-point/rollback operation, 175
NETCONF service operations, 165
NETCONF session information retrieval, 188
NETCONF session termination, 190
NETCONF YANG file content retrieval, 188
NetStream data export configuration (aggregation), 260
NetStream data export configuration (traditional), 258
NQA client operation, 10
NQA collaboration configuration, 62
NQA operation configuration (DHCP), 44
NQA operation configuration (DNS), 45
NQA server, 9
NTP architecture, 72
NTP broadcast association mode (on switch), 95
NTP broadcast mode+authentication (on switch), 105
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP MPLS L3VPN instance support, 76
NTP multicast association mode (on switch), 97
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
packet capture configuration (remote), 317
packet capture filtered data display, 318
port mirroring configuration, 218, 233
port mirroring remote destination group, 226
port mirroring remote source group, 227
port mirroring remote source group egress port, 229
port mirroring remote source group reflector port, 228
port mirroring remote source group remote probe VLAN, 230
port mirroring remote source group source CPU, 228
port mirroring remote source group source ports, 227
port mirroring source device, 218
remote packet capture configuration, 315
SNMP basics configuration, 117
SNMP configuration, 115
SNMP MIB, 115
SNMP notification, 124
SNMP view-based MIB access control, 115
SNMPv1 basics configuration, 117
SNMPv1 configuration, 126
SNMPv2c basics configuration, 117
SNMPv2c configuration, 126
SNMPv3 basics configuration, 119
device startup check
on-demand diagnostics, 302
DHCP
NQA client operation, 13
NQA operation configuration, 44
diagnosing
device startup check, 302
GOLD configuration, 301, 305
GOLD diagnostics (on-demand), 302
information center diagnostic log, 283
information center diagnostic log save (log file), 293
on-demand diagnostics during device operation, 302
direction
port mirroring (bidirectional), 218
port mirroring (inbound), 218
port mirroring (outbound), 218
disabling
information center interface link up/link down log generation, 296
NTP message receiving, 89
displaying
EAA settings, 210
Event MIB, 147
feature image-based packet capture data display filter, 317
GOLD, 304
information center, 297
IPv6 NetStream, 271
NetStream, 258
NQA, 40
NTP, 90
packet capture, 317
packet capture display filter configuration, 314
packet file content, 317
port mirroring, 233
RMON settings, 135
sampler, 216
sFlow, 279
SNMP settings, 126
SNTP, 113
VCF fabric, 329
DLSw
NQA client operation, 23
NQA operation configuration, 59
DNS
NQA client operation, 13
NQA client template, 32
NQA operation configuration, 45
NQA template configuration, 65
DSCP
NTP packet value setting, 90
duplicate log suppression, 295
dynamic
NTP dynamic associations max, 89
E
EAA
configuration, 202, 210
environment variable configuration (user-defined), 205
event monitor, 202
event monitor policy action, 204
event monitor policy configuration (CLI), 210
event monitor policy configuration (Track), 211
event monitor policy element, 203
event monitor policy environment variable, 204
event monitor policy runtime, 204
event monitor policy user role, 204
event source, 202
how it works, 202
monitor policy, 203
monitor policy configuration, 206
monitor policy configuration (CLI), 206
monitor policy configuration (CLI-defined+environment variables), 213
monitor policy configuration (Tcl), 208
monitor policy configuration (Tcl-defined), 214
monitor policy configuration restrictions, 206
monitor policy suspension, 209
RTM, 202
settings display, 210
echo
NQA client operation (ICMP echo), 11
NQA operation configuration (ICMP echo), 40
egress port
Layer 2 remote port mirroring, 219, 225
Layer 2 remote port mirroring (egress port), 238
port mirroring remote source group egress port, 229
Embedded Automation Architecture. Use EAA
enabling
Event MIB SNMP notification, 146
information center duplicate log suppression, 295
information center synchronous output, 295
information center system log SNMP notification, 296
NETCONF logging, 159
NETCONF over SSH, 159
NQA client, 10
NTP, 78
preprovisioning, 179
SNMP notification, 124
SNTP, 111
VCF fabric topology discovery, 327
entering
NETCONF XML view, 160
environment
EAA environment variable configuration (user-defined), 205
EAA event monitor policy environment variable, 204
establishing
NETCONF session, 160
Ethernet
Layer 2 remote port mirroring configuration, 224
Layer 3 remote port mirroring configuration, 230
port mirroring configuration, 218, 233
RMON Ethernet statistics entry, 133
RMON Ethernet statistics group configuration, 135
RMON statistics configuration, 133
RMON statistics group, 131
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sFlow configuration, 277, 280
event
EAA configuration, 202, 210
EAA environment variable configuration (user-defined), 205
EAA event monitor, 202
EAA event monitor policy element, 203
EAA event monitor policy environment variable, 204
EAA event source, 202
EAA monitor policy, 203
NETCONF event notification subscription, 161, 162
RMON event group, 131
Event Management Information Base. See Event MIB
Event MIB
configuration, 139, 141
display, 147
event actions, 140
event creation, 142
monitored object, 139
notification action configuration, 143
object list, 141
object owner, 139
sampling configuration, 141
set action configuration, 142
SNMP notification enable, 146
trigger test, 139
trigger test configuration, 143
trigger test configuration (Boolean), 144, 149
trigger test configuration (existence), 145, 147
trigger test configuration (threshold), 145, 152
exchanging
NETCONF capabilities, 160
existence
Event MIB trigger test, 139
Event MIB trigger test configuration, 145, 147
export
IPv6 NetStream data export, 270
IPv6 NetStream data export (aggregation), 271, 274
IPv6 NetStream data export (traditional), 270, 272
IPv6 NetStream data export format, 267
exporting
IPv6 NetStream data export (aggregation), 264
IPv6 NetStream data export (traditional), 264
IPv6 NetStream data export attribute configuration, 267
NetStream data export, 249, 256
NetStream data export (aggregation), 249, 257
NetStream data export (traditional), 249, 256
NetStream data export attribute configuration, 253
NetStream data export configuration (aggregation), 260
NetStream data export configuration (traditional), 258
NetStream data export format, 253
NetStream format, 251
F
feature and hardware compatibility
IPv6 NetStream, 266
NetStream, 252
field
packet capture display filter keyword, 310
file
information center diagnostic log output destination, 293
information center log save (log file), 291
information center log storage period (log buffer), 294
information center security log file management, 292
information center security log save (log file), 292
NETCONF YANG file content retrieval, 188
packet capture file save, 320
packet file content display, 317
filtering
column-based filtering, 179
feature image-based packet capture data display, 317
IPv6 NetStream, 265
IPv6 NetStream configuration, 266
IPv6 NetStream filtering, 265
IPv6 NetStream filtering configuration, 266
NETCONF data (conditional match), 184
NETCONF data (regex match), 183
NETCONF data filtering (column-based), 180
NETCONF data filtering (table-based), 180
NetStream configuration, 248, 252, 258
NetStream filtering, 251
NetStream filtering configuration, 252
packet capture display filter configuration, 314
packet capture filter configuration, 313
table-based filtering, 179
FIPS compliance
information center, 288
NETCONF, 157
SNMP, 117
flow
IPv6 NetStream configuration, 263, 272
IPv6 NetStream flow aging, 264, 269
IPv6 NetStream flow aging methods, 269
mirroring. See flow mirroring
NetStream flow aging, 249, 255
NetStream flow aging methods, 255
Sampled Flow. Use sFlow
sFlow configuration, 277
flow mirroring
configuration, 243, 245
match criteria configuration, 243
QoS policy application, 244
QoS policy application (control plane), 245
QoS policy application (global), 245
QoS policy application (interface), 244
QoS policy application (VLAN), 245
QoS policy configuration, 244
traffic behavior configuration, 244
format
information center system logs, 285
IPv6 NetStream data export, 265
IPv6 NetStream data export format, 267
IPv6 NetStream v9/v10 template refresh rate, 268
NETCONF message, 156
NetStream data export format, 253
NetStream export, 251
NetStream v9/v10 template refresh rate, 254
FTP
NQA client operation, 14
NQA client template, 37
NQA operation configuration, 47
NQA template configuration, 69
full match NETCONF data filtering (column-based), 180
G
generating
information center interface link up/link down log generation, 296
Generic Online Diagnostics. Use GOLD
get operation
NETCONF get/get-bulk, 165
NETCONF get-config/get-bulk-config, 167
SNMP, 116
SNMP logging, 123
GOLD
configuration, 301, 305
diagnostic test simulation, 303
diagnostics configuration (monitoring), 301
diagnostics configuration (on-demand), 302
display, 304
log buffer size configuration, 304
maintain, 304
starting a device startup check by using on-demand diagnostics, 302
starting on-demand diagnostics during device operation, 302
group
Layer 3 remote port mirroring local group, 231
Layer 3 remote port mirroring local group monitor port, 232
Layer 3 remote port mirroring local group source port, 231
local port mirroring group monitor port, 223
local port mirroring group source CPU, 223
local port mirroring group source port, 222
port mirroring group, 218
RMON, 131
RMON alarm, 132
RMON Ethernet statistics, 131
RMON event, 131
RMON history, 131
RMON private alarm, 132
guidelines
VCF fabric configuration, 326
VCF fabric topology enabling, 327
H
hardware
GOLD configuration, 301, 305
GOLD diagnostic test simulation, 303
GOLD diagnostics (monitoring), 301
GOLD diagnostics (on-demand), 302
hidden log (information center), 283
history
NQA client history record save, 29
RMON group, 131
RMON history control entry, 133
RMON history group configuration, 136
host
information center log output (log host), 290
HTTP
NETCONF over SOAP (HTTP-based), 158
NQA client operation, 15
NQA client template, 36
NQA operation configuration, 48
NQA template configuration, 68
HTTPS
NETCONF over SOAP (HTTPS-based), 158
I
ICMP
NQA client operation (ICMP echo), 11
NQA client operation (ICMP jitter), 12
NQA client template, 31
NQA collaboration configuration, 62
NQA operation configuration (ICMP echo), 40
NQA operation configuration (ICMP jitter), 42
NQA template configuration, 64
ping command, 1
identifying
tracert node failure, 4
image
packet capture feature image-based configuration, 316
packet capture feature image-based mode, 307
implementing
local port mirroring, 219
port mirroring, 219
remote port mirroring, 220
inbound
port mirroring, 218
information center
configuration, 283, 288, 297
default output rules (diagnostic log), 284
default output rules (hidden log), 285
default output rules (security log), 284
default output rules (trace log), 285
diagnostic log save (log file), 293
display, 297
duplicate log suppression, 295
FIPS compliance, 288
interface link up/link down log generation, 296
log default output rules, 284
log output (console), 288
log output (log buffer), 290
log output (log host), 290
log output (monitor terminal), 289
log output configuration (console), 297
log output configuration (Linux log host), 299
log output configuration (UNIX log host), 298
log save (log file), 291
log storage period (log buffer), 294
log suppression, 296
maintain, 297
security log file management, 292
security log management, 292
security log save (log file), 292
synchronous log output, 295
system information log types, 283
system log destinations, 284
system log formats, 285
system log levels, 283
system log SNMP notification, 296
trace log file max size, 294
Internet
NQA configuration, 7, 9, 40
SNMP basics configuration, 117
SNMP configuration, 115
SNMP MIB, 115
SNMPv1 basics configuration, 117
SNMPv2c basics configuration, 117
SNMPv3 basics configuration, 119
interval
sampler creation, 216
IP addressing
tracert, 3
tracert node failure identification, 4
IP services
NQA client history record save, 29
NQA client operation (DHCP), 13
NQA client operation (DLSw), 23
NQA client operation (DNS), 13
NQA client operation (FTP), 14
NQA client operation (HTTP), 15
NQA client operation (ICMP echo), 11
NQA client operation (ICMP jitter), 12
NQA client operation (path jitter), 23
NQA client operation (SNMP), 17
NQA client operation (TCP), 18
NQA client operation (UDP echo), 19
NQA client operation (UDP jitter), 16
NQA client operation (UDP tracert), 20
NQA client operation (voice), 21
NQA client operation optional parameters, 24
NQA client operation scheduling, 30
NQA client statistics collection, 29
NQA client template (DNS), 32
NQA client template (FTP), 37
NQA client template (HTTP), 36
NQA client template (ICMP), 31
NQA client template (RADIUS), 38
NQA client template (TCP half open), 34
NQA client template (TCP), 33
NQA client template (UDP), 35
NQA client template optional parameters, 39
NQA client threshold monitoring, 26
NQA client+Track collaboration, 26
NQA collaboration configuration, 62
NQA configuration, 7, 9, 40
NQA operation configuration (DHCP), 44
NQA operation configuration (DLSw), 59
NQA operation configuration (DNS), 45
NQA operation configuration (FTP), 47
NQA operation configuration (HTTP), 48
NQA operation configuration (ICMP echo), 40
NQA operation configuration (ICMP jitter), 42
NQA operation configuration (path jitter), 60
NQA operation configuration (SNMP), 51
NQA operation configuration (TCP), 53
NQA operation configuration (UDP echo), 54
NQA operation configuration (UDP jitter), 49
NQA operation configuration (UDP tracert), 55
NQA operation configuration (voice), 57
NQA template configuration (DNS), 65
NQA template configuration (FTP), 69
NQA template configuration (HTTP), 68
NQA template configuration (ICMP), 64
NQA template configuration (RADIUS), 69
NQA template configuration (TCP half open), 67
NQA template configuration (TCP), 66
NQA template configuration (UDP), 67
IPv6
NTP client/server association mode, 92
NTP multicast association mode (on switch), 100
NTP symmetric active/passive association mode, 94
IPv6 NetStream
architecture, 263
configuration, 263, 266, 272
data export (aggregation), 264
data export (traditional), 264
data export attribute configuration, 267
data export configuration, 270
data export configuration (aggregation), 271, 274
data export configuration (traditional), 270, 272
data export format, 267
display, 271
enable, 266
export format, 265
feature and hardware compatibility, 266
filtering, 265
filtering configuration, 266
flow aging, 264
flow aging configuration, 269
flow aging methods, 269
maintain, 271
MPLS-aware configuration, 269
protocols and standards, 265
sampling, 265
sampling configuration, 267
v9/v10 template refresh rate, 268
IT infrastructure
VCF fabric overview, 321
K
keyword
packet capture, 307
packet capture filter, 308
L
label
MPLS-aware IPv6 NetStream, 269
MPLS-aware NetStream, 255
VXLAN-aware NetStream, 255
Layer 2
port mirroring configuration, 218, 233
remote port mirroring (egress port), 225, 238
remote port mirroring (reflector port), 225, 235
remote port mirroring configuration, 224
Layer 3
port mirroring configuration, 218, 233
remote port mirroring configuration, 230, 240
tracert, 3
tracert node failure identification, 4
level
information center system logs, 283
link
information center interface link up/link down log generation, 296
Linux
information center log host output configuration, 299
list
Event MIB object list configuration, 141
loading
NETCONF configuration, 173, 178
local
NTP local clock as reference source, 90
packet capture configuration, 315
packet capture mode, 307
port mirroring, 219
port mirroring configuration, 222
port mirroring group creation, 222
port mirroring group monitor port, 223
port mirroring group source CPU, 223
port mirroring group source port, 222
locking
NETCONF configuration, 163, 164
log suppression, 296
logging
GOLD log buffer size, 304
information center common logs, 283
information center configuration, 283, 288, 297
information center diagnostic log save (log file), 293
information center diagnostic logs, 283
information center duplicate log suppression, 295
information center hidden logs, 283
information center interface link up/link down log generation, 296
information center log default output rules, 284
information center log output (console), 288
information center log output (log buffer), 290
information center log output (log host), 290
information center log output (monitor terminal), 289
information center log output configuration (console), 297
information center log output configuration (Linux log host), 299
information center log output configuration (UNIX log host), 298
information center log save (log file), 291
information center log storage period (log buffer), 294
information center log suppression, 296
information center security log file management, 292
information center security log management, 292
information center security log save (log file), 292
information center security logs, 283
information center synchronous log output, 295
information center system log destinations, 284
information center system log formats, 285
information center system log levels, 283
information center system log SNMP notification, 296
information center trace log file max size, 294
NETCONF logging enable, 159
SNMP configuration, 123
system information default output rules (diagnostic log), 284
system information default output rules (hidden log), 285
system information default output rules (security log), 284
system information default output rules (trace log), 285
logical
packet capture display filter configuration (logical expression), 314
packet capture display filter operator, 312
packet capture filter configuration (logical expression), 313
packet capture filter operator, 309
packet capture operator, 307
M
maintaining
GOLD, 304
information center, 297
IPv6 NetStream, 271
NetStream, 258
VCF fabric, 329
Management Information Base. Use MIB
managing
information center security log file, 292
information center security logs, 292
NETCONF device management, 157
matching
flow mirroring match criteria, 243
NETCONF data filtering (column-based), 180
NETCONF data filtering (column-based) (conditional match), 182
NETCONF data filtering (column-based) (full match), 180
NETCONF data filtering (column-based) (regex match), 181
NETCONF data filtering (conditional match), 184
NETCONF data filtering (regex match), 183
NETCONF data filtering (table-based), 180
packet capture display filter configuration (proto[…] expression), 314
message
NETCONF format, 156
NTP message receiving disable, 89
NTP message source interface, 88
MIB
Event MIB configuration, 139, 141
Event MIB event actions, 140
Event MIB monitored object, 139
Event MIB object list configuration, 141
Event MIB object owner, 139
Event MIB sampling configuration, 141
Event MIB trigger test, 139
Event MIB trigger test configuration, 143
Event MIB trigger test configuration (Boolean), 144, 149
Event MIB trigger test configuration (existence), 145, 147
Event MIB trigger test configuration (threshold), 145, 152
SNMP, 115
SNMP Get operation, 116
SNMP Set operation, 116
SNMP view-based access control, 115
mirroring
flow. See flow mirroring
port. See port mirroring
mode
NTP association, 78
NTP broadcast association, 73, 79
NTP client/server association, 73, 78
NTP multicast association, 73, 80
NTP symmetric active/passive association, 73, 79
packet capture feature image-based, 307
packet capture local, 307
packet capture remote, 307
sampler random, 216
SNMP access control (rule-based), 116
SNMP access control (view-based), 116
module
feature module debug, 6
information center configuration, 283, 288, 297
NETCONF configuration data retrieval (all modules), 168
NETCONF configuration data retrieval (Syslog module), 170
NETCONF data entry retrieval (interface table), 171
monitor terminal
information center log output, 289
monitoring
EAA configuration, 202
EAA environment variable configuration (user-defined), 205
EAA monitor policy configuration (CLI), 206
Event MIB configuration, 139, 141
Event MIB trigger test configuration (Boolean), 149
Event MIB trigger test configuration (existence), 147
Event MIB trigger test configuration (threshold), 152
GOLD configuration, 305
GOLD diagnostics (monitoring), 301
network, 248, See also NMM
NQA client threshold monitoring, 26
NQA threshold monitoring, 8
MPLS
MPLS-aware IPv6 NetStream, 269
MPLS-aware NetStream, 255
MPLS L3VPN instance
NTP support, 76
multicast
IPv6 NTP multicast association mode (on switch), 100
NTP multicast association mode, 73, 80
NTP multicast association mode (on switch), 97
NTP multicast client configuration, 80
NTP multicast mode authentication, 86
NTP multicast mode dynamic associations max, 89
NTP multicast server configuration, 80
N
NDA
IPv6 NetStream data analyzer, 263
NetStream architecture, 248
NDE
IPv6 NetStream data exporter, 263
NetStream architecture, 248
NETCONF
capability exchange, 160
CLI operations, 185, 186
CLI return, 191
configuration, 155, 158
configuration data retrieval (all modules), 168
configuration data retrieval (Syslog module), 170
configuration load, 173, 178
configuration lock/unlock, 163, 164
configuration rollback, 173
configuration rollback (configuration file-based), 173
configuration rollback (rollback point-based), 174
configuration save, 173, 178
data entry retrieval (interface table), 171
data filtering, 179
data filtering (conditional match), 184
data filtering (regex match), 183
device management, 157
edit-config operation, 167
event notification subscription, 161, 162
FIPS compliance, 157
get/get-bulk operation, 165
get-config/get-bulk-config operation, 167
information retrieval, 187
message format, 156
NETCONF logging enable, 159
over SOAP, 156
over SOAP configuration, 158
over SSH enable, 159
parameter value change, 172
preprovisioning enable, 179
protocols and standards, 157
save-point/begin operation, 174
save-point/commit operation, 175
save-point/end operation, 176
save-point/get-commit-information operation, 177
save-point/get-commits operation, 176
save-point/rollback operation, 175
service operations, 165
session establishment, 160
session idle timeout time set, 160
session information retrieval, 188
session termination, 190
structure, 155
supported operations, 192
XML view, 160
YANG file content retrieval, 188
NetStream
aggregation data export restrictions, 257
architecture, 248
configuration, 248, 252, 258
data export, 249
data export (aggregation), 249
data export (traditional), 249
data export attribute configuration, 253
data export configuration, 256
data export configuration (aggregation), 257, 260
data export configuration (traditional), 256, 258
data export format configuration, 253
display, 258
enable, 252
export format, 251
feature and hardware compatibility, 252
filtering, 251
filtering configuration, 252
flow aging, 249
flow aging configuration, 255
flow aging methods, 255
IPv6. See IPv6 NetStream
maintain, 258
MPLS-aware configuration, 255
NDA, 248
NDE, 248
NSC, 248
protocols and standards, 251
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sampler creation, 216
sampling configuration, 253
v9/v10 template refresh rate, 254
VXLAN-aware configuration, 255
networking
automated overlay network deployment, 326
automated underlay network provisioning, 324
VCF fabric topology discovery, 324
network
Event MIB SNMP notification enable, 146
Event MIB trigger test configuration (Boolean), 149
Event MIB trigger test configuration (existence), 147
Event MIB trigger test configuration (threshold), 152
feature module debug, 6
flow mirroring configuration, 243, 245
flow mirroring match criteria, 243
flow mirroring traffic behavior, 244
GOLD log buffer size, 304
information center diagnostic log save (log file), 293
information center duplicate log suppression, 295
information center interface link up/link down log generation, 296
information center log output configuration (console), 297
information center log output configuration (Linux log host), 299
information center log output configuration (UNIX log host), 298
information center log storage period (log buffer), 294
information center log suppression, 296
information center security log file management, 292
information center security log save (log file), 292
information center synchronous log output, 295
information center system log SNMP notification, 296
information center system log types, 283
information center trace log file max size, 294
IPv6 NetStream filtering, 265
IPv6 NetStream filtering configuration, 266
IPv6 NetStream sampling, 265
IPv6 NetStream sampling configuration, 267
Layer 2 remote port mirroring (egress port), 225, 238
Layer 2 remote port mirroring (reflector port), 225, 235
Layer 2 remote port mirroring configuration, 224
Layer 3 remote port mirroring configuration, 230, 240
Layer 3 remote port mirroring local group, 231
Layer 3 remote port mirroring local group monitor port, 232
Layer 3 remote port mirroring local group source CPU, 232
Layer 3 remote port mirroring local group source port, 231
local port mirroring (source CPU mode), 234
local port mirroring (source port mode), 233
local port mirroring configuration, 222
local port mirroring group monitor port, 223
local port mirroring group source CPU, 223
local port mirroring group source port, 222
monitoring, 248. See also NMM
MPLS-aware IPv6 NetStream, 269
MPLS-aware NetStream, 255
NetStream data export configuration (traditional), 258
NetStream filtering, 251
NetStream filtering configuration, 252
NetStream sampling, 251
NetStream sampling configuration, 253
Network Configuration Protocol. See NETCONF
NQA client history record save, 29
NQA client operation, 10
NQA client operation (DHCP), 13
NQA client operation (DLSw), 23
NQA client operation (DNS), 13
NQA client operation (FTP), 14
NQA client operation (HTTP), 15
NQA client operation (ICMP echo), 11
NQA client operation (ICMP jitter), 12
NQA client operation (path jitter), 23
NQA client operation (SNMP), 17
NQA client operation (TCP), 18
NQA client operation (UDP echo), 19
NQA client operation (UDP jitter), 16
NQA client operation (UDP tracert), 20
NQA client operation (voice), 21
NQA client operation optional parameters, 24
NQA client operation scheduling, 30
NQA client statistics collection, 29
NQA client template, 30
NQA client threshold monitoring, 26
NQA client+Track collaboration, 26
NQA collaboration configuration, 62
NQA operation configuration (DHCP), 44
NQA operation configuration (DLSw), 59
NQA operation configuration (DNS), 45
NQA operation configuration (FTP), 47
NQA operation configuration (HTTP), 48
NQA operation configuration (ICMP echo), 40
NQA operation configuration (ICMP jitter), 42
NQA operation configuration (path jitter), 60
NQA operation configuration (SNMP), 51
NQA operation configuration (TCP), 53
NQA operation configuration (UDP echo), 54
NQA operation configuration (UDP jitter), 49
NQA operation configuration (UDP tracert), 55
NQA operation configuration (voice), 57
NQA server, 9
NQA template configuration (DNS), 65
NQA template configuration (FTP), 69
NQA template configuration (HTTP), 68
NQA template configuration (ICMP), 64
NQA template configuration (RADIUS), 69
NQA template configuration (TCP half open), 67
NQA template configuration (TCP), 66
NQA template configuration (UDP), 67
NTP access control rights, 81
NTP association mode, 78
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP message receiving disable, 89
NTP MPLS L3VPN instance support, 76
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
ping network connectivity test, 1
port mirroring remote destination group, 226
port mirroring remote source group, 227
port mirroring remote source group egress port, 229
port mirroring remote source group reflector port, 228
port mirroring remote source group remote probe VLAN, 230
port mirroring remote source group source CPU, 228
port mirroring remote source group source ports, 227
preprovisioning enable, 179
quality analyzer. See NQA
RMON alarm configuration, 134, 137
RMON alarm group sample types, 133
RMON Ethernet statistics group configuration, 135
RMON history group configuration, 136
RMON statistics configuration, 133
sFlow counter sampling configuration, 279
sFlow flow sampling configuration, 278
SNMPv1 basics configuration, 117
SNMPv2c basics configuration, 117
SNMPv3 basics configuration, 119
tracert node failure identification, 4
VXLAN-aware NetStream, 255
network management
automated VCF fabric provisioning and deployment, 330
EAA configuration, 202, 210
Event MIB configuration, 139, 141
GOLD configuration, 301, 305
information center configuration, 283, 288, 297
IPv6 NetStream configuration, 263, 266, 272
NETCONF configuration, 155
NetStream configuration, 248, 252, 258
NQA configuration, 7, 9, 40
NTP configuration, 71, 77, 91
packet capture configuration, 307, 314, 317
port mirroring configuration, 218, 233
RMON configuration, 131
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sampler creation, 216
sFlow configuration, 277, 280
SNMP configuration, 115
SNMPv1 configuration, 126
SNMPv2c configuration, 126
VCF fabric configuration, 327
VCF fabric overview, 321
Network Time Protocol. See NTP
networking
automated VCF fabric provisioning and deployment, 324
Neutron deployment, 323
EAA configuration, 202, 210
EAA environment variable configuration (user-defined), 205
EAA event monitor, 202
EAA event monitor policy configuration (CLI), 210
EAA event monitor policy configuration (Track), 211
EAA event monitor policy element, 203
EAA event monitor policy environment variable, 204
EAA event source, 202
EAA monitor policy, 203
EAA monitor policy configuration, 206
EAA monitor policy configuration (CLI), 206
EAA monitor policy configuration (CLI-defined+environment variables), 213
EAA monitor policy configuration (Tcl), 208
EAA monitor policy configuration (Tcl-defined), 214
EAA monitor policy suspension, 209
EAA RTM, 202
EAA settings display, 210
feature image-based packet capture configuration, 316
feature module debug, 6
flow mirroring configuration, 245
GOLD configuration, 301
GOLD diagnostic test simulation, 303
GOLD diagnostics (monitoring), 301
GOLD diagnostics (on-demand), 302
GOLD display, 304
GOLD maintain, 304
information center configuration, 283, 288, 297
information center diagnostic log save (log file), 293
information center display, 297
information center duplicate log suppression, 295
information center interface link up/link down log generation, 296
information center log default output rules, 284
information center log destinations, 284
information center log formats, 285
information center log levels, 283
information center log output (console), 288
information center log output (log buffer), 290
information center log output (log host), 290
information center log output (monitor terminal), 289
information center log output configuration (console), 297
information center log output configuration (Linux log host), 299
information center log output configuration (UNIX log host), 298
information center log save (log file), 291
information center log storage period (log buffer), 294
information center log suppression, 296
information center maintain, 297
information center security log file management, 292
information center security log management, 292
information center security log save (log file), 292
information center synchronous log output, 295
information center system log SNMP notification, 296
information center system log types, 283
information center trace log file max size, 294
IPv6 NetStream architecture, 263
IPv6 NetStream configuration, 263, 266
IPv6 NetStream data export, 264
IPv6 NetStream data export (aggregation), 271, 274
IPv6 NetStream data export (traditional), 270, 272
IPv6 NetStream data export attribute configuration, 267
IPv6 NetStream data export configuration, 270
IPv6 NetStream data export format, 267
IPv6 NetStream display, 271
IPv6 NetStream enable, 266
IPv6 NetStream filtering, 265
IPv6 NetStream filtering configuration, 266
IPv6 NetStream flow aging, 269
IPv6 NetStream flow aging methods, 269
IPv6 NetStream maintain, 271
IPv6 NetStream protocols and standards, 265
IPv6 NetStream sampling, 265
IPv6 NetStream sampling configuration, 267
IPv6 NetStream v9/v10 template refresh rate, 268
IPv6 NTP client/server association mode configuration, 92
IPv6 NTP multicast association mode configuration (on switch), 100
IPv6 NTP symmetric active/passive association mode configuration, 94
Layer 2 remote port mirroring (egress port), 225, 238
Layer 2 remote port mirroring (reflector port), 225, 235
Layer 2 remote port mirroring configuration, 224
Layer 3 remote port mirroring configuration, 230, 240
Layer 3 remote port mirroring local group, 231
Layer 3 remote port mirroring local group monitor port, 232
Layer 3 remote port mirroring local group source CPU, 232
Layer 3 remote port mirroring local group source port, 231
local packet capture configuration, 315
local port mirroring (source CPU mode), 234
local port mirroring (source port mode), 233
local port mirroring configuration, 222
local port mirroring group, 222
local port mirroring group monitor port, 223
local port mirroring group source CPU, 223
local port mirroring group source port, 222
MPLS-aware IPv6 NetStream, 269
MPLS-aware NetStream, 255
NETCONF capability exchange, 160
NETCONF CLI operations, 185, 186
NETCONF CLI return, 191
NETCONF configuration, 155, 158
NETCONF configuration data retrieval (all modules), 168
NETCONF configuration data retrieval (Syslog module), 170
NETCONF configuration load, 173
NETCONF configuration lock/unlock, 163, 164
NETCONF configuration rollback, 173
NETCONF configuration save, 173
NETCONF data entry retrieval (interface table), 171
NETCONF data filtering, 179
NETCONF edit-config operation, 167
NETCONF event notification subscription, 161, 162
NETCONF get/get-bulk operation, 165
NETCONF get-config/get-bulk-config operation, 167
NETCONF information retrieval, 187
NETCONF over SOAP configuration, 158
NETCONF over SSH enable, 159
NETCONF parameter value change, 172
NETCONF protocols and standards, 157
NETCONF save-point/begin operation, 174
NETCONF save-point/commit operation, 175
NETCONF save-point/end operation, 176
NETCONF save-point/get-commit-information operation, 177
NETCONF save-point/get-commits operation, 176
NETCONF save-point/rollback operation, 175
NETCONF service operations, 165
NETCONF session establishment, 160
NETCONF session information retrieval, 188
NETCONF session termination, 190
NETCONF structure, 155
NETCONF supported operations, 192
NETCONF YANG file content retrieval, 188
NetStream aggregation data export restrictions, 257
NetStream architecture, 248
NetStream configuration, 248, 252, 258
NetStream data export, 249, 256
NetStream data export (aggregation), 257
NetStream data export (traditional), 256
NetStream data export attribute configuration, 253
NetStream data export configuration (aggregation), 260
NetStream data export configuration (traditional), 258
NetStream data export format, 253
NetStream display, 258
NetStream enable, 252
NetStream filtering, 251
NetStream filtering configuration, 252
NetStream flow aging, 249, 255
NetStream flow aging methods, 255
NetStream format, 251
NetStream maintain, 258
NetStream protocols and standards, 251
NetStream sampling, 251
NetStream sampling configuration, 253
NetStream v9/v10 template refresh rate, 254
NQA client history record save, 29
NQA client operation, 10
NQA client operation (DHCP), 13
NQA client operation (DLSw), 23
NQA client operation (DNS), 13
NQA client operation (FTP), 14
NQA client operation (HTTP), 15
NQA client operation (ICMP echo), 11
NQA client operation (ICMP jitter), 12
NQA client operation (path jitter), 23
NQA client operation (SNMP), 17
NQA client operation (TCP), 18
NQA client operation (UDP echo), 19
NQA client operation (UDP jitter), 16
NQA client operation (UDP tracert), 20
NQA client operation (voice), 21
NQA client operation optional parameters, 24
NQA client operation scheduling, 30
NQA client statistics collection, 29
NQA client template, 30
NQA client template (DNS), 32
NQA client template (FTP), 37
NQA client template (HTTP), 36
NQA client template (ICMP), 31
NQA client template (RADIUS), 38
NQA client template (TCP half open), 34
NQA client template (TCP), 33
NQA client template (UDP), 35
NQA client template optional parameters, 39
NQA client threshold monitoring, 26
NQA client+Track collaboration, 26
NQA collaboration configuration, 62
NQA configuration, 7, 9, 40
NQA display, 40
NQA operation configuration (DHCP), 44
NQA operation configuration (DLSw), 59
NQA operation configuration (DNS), 45
NQA operation configuration (FTP), 47
NQA operation configuration (HTTP), 48
NQA operation configuration (ICMP echo), 40
NQA operation configuration (ICMP jitter), 42
NQA operation configuration (path jitter), 60
NQA operation configuration (SNMP), 51
NQA operation configuration (TCP), 53
NQA operation configuration (UDP echo), 54
NQA operation configuration (UDP jitter), 49
NQA operation configuration (UDP tracert), 55
NQA operation configuration (voice), 57
NQA server, 9
NQA template configuration (DNS), 65
NQA template configuration (FTP), 69
NQA template configuration (HTTP), 68
NQA template configuration (ICMP), 64
NQA template configuration (RADIUS), 69
NQA template configuration (TCP half open), 67
NQA template configuration (TCP), 66
NQA template configuration (UDP), 67
NQA threshold monitoring, 8
NQA+Track collaboration, 8
NTP access control rights, 81
NTP architecture, 72
NTP association mode, 78
NTP authentication configuration, 81
NTP broadcast association mode configuration, 79
NTP broadcast association mode configuration (on switch), 95
NTP broadcast mode authentication configuration, 85
NTP broadcast mode+authentication (on switch), 105
NTP client/server association mode configuration, 91
NTP client/server mode authentication configuration, 81
NTP client/server mode+authentication, 103
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP configuration, 71, 77, 91
NTP display, 90
NTP dynamic associations max, 89
NTP enable, 78
NTP local clock as reference source, 90
NTP message receiving disable, 89
NTP message source interface specification, 88
NTP multicast association mode, 80
NTP multicast association mode configuration (on switch), 97
NTP multicast mode authentication configuration, 86
NTP optional parameter configuration, 88
NTP packet DSCP value setting, 90
NTP protocols and standards, 77
NTP security, 75
NTP symmetric active/passive association mode configuration, 93
NTP symmetric active/passive mode authentication configuration, 83
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
packet capture configuration, 307, 314, 317
packet capture configuration (remote), 317
packet capture display, 317
packet capture display filter configuration, 314
packet capture file save, 320
packet capture filter configuration, 313
packet capture filtered data display, 318
packet file content display, 317
ping address reachability determination, 1
ping command, 1
ping network connectivity test, 1
port mirroring classification, 219
port mirroring configuration, 218, 233
port mirroring display, 233
port mirroring implementation, 219
port mirroring remote destination group, 226
port mirroring remote source group, 227
remote packet capture configuration, 315
RMON alarm configuration, 137
RMON configuration, 131
RMON Ethernet statistics group configuration, 135
RMON group, 131
RMON history group configuration, 136
RMON protocols and standards, 133
RMON settings display, 135
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sampler creation, 216
sFlow agent+collector information configuration, 278
sFlow configuration, 277, 280
sFlow counter sampling configuration, 279
sFlow display, 279
sFlow flow sampling configuration, 278
sFlow protocols and standards, 277
SNMP access control mode, 116
SNMP basics configuration, 117
SNMP configuration, 115
SNMP framework, 115
SNMP Get operation, 116
SNMP host notification send, 124
SNMP logging configuration, 123
SNMP MIB, 115
SNMP notification, 124
SNMP protocol versions, 116
SNMP settings display, 126
SNMP silence, 116
SNMP view-based MIB access control, 115
SNMPv1 configuration, 126
SNMPv2c configuration, 126
SNTP authentication, 112
SNTP configuration, 77, 111, 113
SNTP display, 113
SNTP enable, 111
SNTP NTP server specification, 111
starting a device startup check by using on-demand diagnostics, 302
starting on-demand diagnostics during device operation, 302
system debugging, 1, 5
system information default output rules (diagnostic log), 284
system information default output rules (hidden log), 285
system information default output rules (security log), 284
system information default output rules (trace log), 285
system maintenance, 1
tracert, 3
tracert node failure identification, 4
troubleshooting sFlow, 281
troubleshooting sFlow remote collector cannot receive packets, 281
VXLAN-aware NetStream, 255
NMS
Event MIB SNMP notification enable, 146
RMON configuration, 131
SNMP Notification operation, 116
SNMP protocol versions, 116
SNMP Set operation, 116
node
Event MIB monitored object, 139
notifying
Event MIB event notification action configuration, 143
Event MIB SNMP notification enable, 146
information center system log SNMP notification, 296
NETCONF event notification subscription, 161, 162
SNMP configuration, 115
SNMP host notification send, 124
SNMP notification, 124
SNMP Notification operation, 116
NQA
client enable, 10
client history record save, 29
client operation, 10
client operation (DHCP), 13
client operation (DLSw), 23
client operation (DNS), 13
client operation (FTP), 14
client operation (HTTP), 15
client operation (ICMP echo), 11
client operation (ICMP jitter), 12
client operation (path jitter), 23
client operation (SNMP), 17
client operation (TCP), 18
client operation (UDP echo), 19
client operation (UDP jitter), 16
client operation (UDP tracert), 20
client operation (voice), 21
client operation optional parameters, 24
client operation scheduling, 30
client statistics collection, 29
client template (DNS), 32
client template (FTP), 37
client template (HTTP), 36
client template (ICMP), 31
client template (RADIUS), 38
client template (TCP half open), 34
client template (TCP), 33
client template (UDP), 35
client template configuration, 30
client template optional parameters, 39
client threshold monitoring, 26
client+Track collaboration, 26
collaboration configuration, 62
configuration, 7, 9, 40
display, 40
how it works, 7
operation configuration (DHCP), 44
operation configuration (DLSw), 59
operation configuration (DNS), 45
operation configuration (FTP), 47
operation configuration (HTTP), 48
operation configuration (ICMP echo), 40
operation configuration (ICMP jitter), 42
operation configuration (path jitter), 60
operation configuration (SNMP), 51
operation configuration (TCP), 53
operation configuration (UDP echo), 54
operation configuration (UDP jitter), 49
operation configuration (UDP tracert), 55
operation configuration (voice), 57
server configuration, 9
supported operations, 7
template configuration (DNS), 65
template configuration (FTP), 69
template configuration (HTTP), 68
template configuration (ICMP), 64
template configuration (RADIUS), 69
template configuration (TCP half open), 67
template configuration (TCP), 66
template configuration (UDP), 67
threshold monitoring, 8
Track collaboration function, 8
NSC
NetStream architecture, 248
NTP
access control, 75
access control rights configuration, 81
architecture, 72
association mode configuration, 78
authentication, 75
authentication configuration, 81
broadcast association mode, 73
broadcast association mode configuration, 79
broadcast association mode configuration (on switch), 95
broadcast client configuration, 79
broadcast mode authentication configuration, 85
broadcast mode dynamic associations max, 89
broadcast mode+authentication (on switch), 105
broadcast server configuration, 80
client/server association mode, 73
client/server association mode configuration, 78, 91
client/server mode authentication configuration, 81
client/server mode dynamic associations max, 89
client/server mode+authentication, 103
client/server mode+MPLS L3VPN network time synchronization (on switch), 107
configuration, 71, 77, 91
configuration restrictions, 77
display, 90
enable, 78
how it works, 71
IPv6 client/server association mode configuration, 92
IPv6 multicast association mode configuration (on switch), 100
IPv6 symmetric active/passive association mode configuration, 94
local clock as reference source, 90
message receiving disable, 89
message source interface specification, 88
MPLS L3VPN instance support, 76
multicast association mode, 73
multicast association mode configuration, 80
multicast association mode configuration (on switch), 97
multicast client configuration, 80
multicast mode authentication configuration, 86
multicast mode dynamic associations max, 89
multicast server configuration, 80
optional parameter configuration, 88
packet DSCP value setting, 90
protocols and standards, 77
security, 75
SNTP authentication, 112
SNTP configuration, 77, 111, 113
SNTP configuration restrictions, 111
SNTP server specification, 111
symmetric active/passive association mode, 73
symmetric active/passive association mode configuration, 79, 93
symmetric active/passive mode authentication configuration, 83
symmetric active/passive mode dynamic associations max, 89
symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
O
object
Event MIB monitored, 139
Event MIB object list configuration, 141
Event MIB object owner, 139
operator
packet capture arithmetic, 307
packet capture logical, 307
packet capture relational, 307
outbound
port mirroring, 218
outputting
information center log configuration (console), 297
information center log configuration (Linux log host), 299
information center log default output rules, 284
information center logs (log buffer), 290
information center logs configuration (UNIX log host), 298
information center synchronous log output, 295
information logs (console), 288
information logs (log host), 290
information logs (monitor terminal), 289
overlay network
automated deployment, 326
P
packet
flow mirroring configuration, 243, 245
flow mirroring match criteria, 243
flow mirroring QoS policy, 244
flow mirroring QoS policy application, 244
flow mirroring traffic behavior, 244
Layer 3 remote port mirroring configuration, 230
NTP DSCP value setting, 90
packet capture display filter configuration (packet field expression), 314
port mirroring configuration, 218, 233
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sampler creation, 216
SNMP silence, 116
SNTP configuration, 77, 111, 113
packet capture
capture filter keywords, 308
capture filter operator, 309
configuration, 307, 314, 317
display, 317
display filter configuration, 314
display filter configuration (logical expression), 314
display filter configuration (packet field expression), 314
display filter configuration (proto[…] expression), 314
display filter configuration (relational expression), 314
display filter keyword, 310
display filter operator, 312
feature image-based configuration, 316
feature image-based file save, 316
feature image-based packet data display filter, 317
file content display, 317
file save, 320
filter configuration, 313
filter configuration (expr relop expr expression), 313
filter configuration (logical expression), 313
filter configuration (proto [ exprsize ] expression), 313
filter configuration (vlan vlan_id expression), 313
filter data to display, 318
filter elements, 307
local configuration, 315
mode, 307
remote configuration, 315, 317
parameter
NETCONF parameter value change, 172
NQA client history record save, 29
NQA client operation optional parameters, 24
NQA client template optional parameters, 39
NTP dynamic associations max, 89
NTP local clock as reference source, 90
NTP message receiving disable, 89
NTP message source interface, 88
NTP optional parameter configuration, 88
SNMP basics configuration, 117
SNMPv1 basics configuration, 117
SNMPv2c basics configuration, 117
SNMPv3 basics configuration, 119
path
NQA client operation (path jitter), 23
NQA operation configuration, 60
performing
NETCONF CLI operations, 185, 186
NETCONF edit-config operation, 167
NETCONF get/get-bulk operation, 165
NETCONF get-config/get-bulk-config operation, 167
NETCONF save-point/begin operation, 174
NETCONF save-point/commit operation, 175
NETCONF save-point/end operation, 176
NETCONF save-point/get-commit-information operation, 177
NETCONF save-point/get-commits operation, 176
NETCONF save-point/rollback operation, 175
NETCONF service operations, 165
ping
address reachability determination, 1
network connectivity test, 1
system maintenance, 1
policy
EAA configuration, 202, 210
EAA environment variable configuration (user-defined), 205
EAA event monitor policy configuration (CLI), 210
EAA event monitor policy configuration (Track), 211
EAA event monitor policy element, 203
EAA event monitor policy environment variable, 204
EAA monitor policy, 203
EAA monitor policy configuration, 206
EAA monitor policy configuration (CLI), 206
EAA monitor policy configuration (CLI-defined+environment variables), 213
EAA monitor policy configuration (Tcl), 208
EAA monitor policy configuration (Tcl-defined), 214
EAA monitor policy suspension, 209
flow mirroring QoS policy, 244
flow mirroring QoS policy application, 244
port
IPv6 NTP client/server association mode, 92
IPv6 NTP multicast association mode (on switch), 100
IPv6 NTP symmetric active/passive association mode, 94
mirroring. See port mirroring
NTP association mode, 78
NTP broadcast association mode (on switch), 95
NTP broadcast mode+authentication (on switch), 105
NTP client/server association mode, 91
NTP client/server mode+authentication, 103
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP configuration, 71, 77, 91
NTP multicast association mode (on switch), 97
NTP symmetric active/passive association mode, 93
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
SNTP configuration, 77, 111, 113
port mirroring
classification, 219
configuration, 218, 233
display, 233
implementation, 219
Layer 2 remote (reflector port), 225
Layer 2 remote configuration, 224
Layer 2 remote configuration (egress port), 225
Layer 2 remote port mirroring configuration (egress port), 238
Layer 2 remote port mirroring configuration (reflector port), 235
Layer 3 local group source port configuration restrictions, 231
Layer 3 remote configuration, 230
Layer 3 remote port mirroring configuration, 240
Layer 3 remote port mirroring local mirroring group monitor port configuration restrictions, 232
local configuration, 222
local group creation, 222
local group monitor port, 223
local group monitor port configuration restrictions, 224
local group source CPU, 223
local group source port, 222
local group source port configuration restrictions, 223
local mirroring configuration (source CPU mode), 234
local mirroring configuration (source port mode), 233
monitor port to remote probe VLAN assignment, 227
remote destination group configuration, 226
remote destination group creation, 226
remote destination group monitor port, 226
remote destination group remote probe VLAN, 226
remote source group configuration, 227
terminology, 218
preprovisioning
enable, 179
prerequisite
automated underlay network provisioning, 324
private
RMON private alarm group, 132
procedure
applying flow mirroring QoS policy, 244
applying flow mirroring QoS policy (control plane), 245
applying flow mirroring QoS policy (global), 245
applying flow mirroring QoS policy (interface), 244
applying flow mirroring QoS policy (VLAN), 245
changing NETCONF parameter value, 172
configuring automated overlay network deployment, 328
configuring automated underlay network provisioning, 327
configuring EAA environment variable (user-defined), 205
configuring EAA event monitor policy (CLI), 210
configuring EAA event monitor policy (Track), 211
configuring EAA monitor policy, 206
configuring EAA monitor policy (CLI), 206
configuring EAA monitor policy (CLI-defined+environment variables), 213
configuring EAA monitor policy (Tcl), 208
configuring EAA monitor policy (Tcl-defined), 214
configuring Event MIB, 141
configuring Event MIB notification action, 143
configuring Event MIB object list, 141
configuring Event MIB sampling, 141
configuring Event MIB set action, 142
configuring Event MIB trigger test, 143
configuring Event MIB trigger test (Boolean), 144, 149
configuring Event MIB trigger test (existence), 145, 147
configuring Event MIB trigger test (threshold), 145, 152
configuring feature image-based packet capture, 316
configuring flow mirroring, 243, 245
configuring flow mirroring match criteria, 243
configuring flow mirroring QoS policy, 244
configuring flow mirroring traffic behavior, 244
configuring GOLD, 305
configuring GOLD diagnostics (monitoring), 301
configuring GOLD diagnostics (on-demand), 302
configuring GOLD log buffer size, 304
configuring information center, 288
configuring information center log output (console), 297
configuring information center log output (Linux log host), 299
configuring information center log output (UNIX log host), 298
configuring information center log storage period (log buffer), 294
configuring information center log suppression, 296
configuring information center trace log file max size, 294
configuring IPv6 NetStream, 266
configuring IPv6 NetStream data export, 270
configuring IPv6 NetStream data export (aggregation), 271, 274
configuring IPv6 NetStream data export (traditional), 270, 272
configuring IPv6 NetStream data export attribute, 267
configuring IPv6 NetStream data export format, 267
configuring IPv6 NetStream filtering, 266
configuring IPv6 NetStream flow aging, 269
configuring IPv6 NetStream sampling, 267
configuring IPv6 NetStream v9/v10 template refresh rate, 268
configuring IPv6 NTP client/server association mode, 92
configuring IPv6 NTP multicast association mode (on switch), 100
configuring IPv6 NTP symmetric active/passive association mode, 94
configuring Layer 2 remote port mirroring, 224
configuring Layer 2 remote port mirroring (egress port), 225, 238
configuring Layer 2 remote port mirroring (reflector port), 225, 235
configuring Layer 3 remote port mirroring, 230, 240
configuring Layer 3 remote port mirroring local group, 231
configuring Layer 3 remote port mirroring local group monitor port (interface view), 233
configuring Layer 3 remote port mirroring local group monitor port (system view), 232
configuring Layer 3 remote port mirroring local group source port, 231
configuring Layer 3 remote port mirroring local group source ports (interface view), 232
configuring Layer 3 remote port mirroring local group source ports (system view), 231
configuring Layer 3 remote port mirroring local mirroring group monitor port, 232
configuring Layer 3 remote port mirroring local mirroring group source CPU, 232
configuring local packet capture, 315
configuring local port mirroring, 222
configuring local port mirroring (source CPU mode), 234
configuring local port mirroring (source port mode), 233
configuring local port mirroring group monitor port, 223
configuring local port mirroring group monitor port (interface view), 224
configuring local port mirroring group monitor port (system view), 224
configuring local port mirroring group source CPUs, 223
configuring local port mirroring group source ports, 222
configuring local port mirroring group source ports (interface view), 223
configuring local port mirroring group source ports (system view), 223
configuring MPLS-aware IPv6 NetStream, 269
configuring MPLS-aware NetStream, 255
configuring NETCONF, 158
configuring NETCONF over SOAP, 158
configuring NetStream, 252
configuring NetStream data export, 256
configuring NetStream data export (aggregation), 257, 260
configuring NetStream data export (traditional), 256, 258
configuring NetStream data export attribute, 253
configuring NetStream data export format, 253
configuring NetStream filtering, 252
configuring NetStream flow aging, 255
configuring NetStream sampling, 253
configuring NetStream v9/v10 template refresh rate, 254
configuring NQA, 9
configuring NQA client history record save, 29
configuring NQA client operation, 10
configuring NQA client operation (DHCP), 13
configuring NQA client operation (DLSw), 23
configuring NQA client operation (DNS), 13
configuring NQA client operation (FTP), 14
configuring NQA client operation (HTTP), 15
configuring NQA client operation (ICMP echo), 11
configuring NQA client operation (ICMP jitter), 12
configuring NQA client operation (path jitter), 23
configuring NQA client operation (SNMP), 17
configuring NQA client operation (TCP), 18
configuring NQA client operation (UDP echo), 19
configuring NQA client operation (UDP jitter), 16
configuring NQA client operation (UDP tracert), 20
configuring NQA client operation (voice), 21
configuring NQA client operation optional parameters, 24
configuring NQA client statistics collection, 29
configuring NQA client template, 30
configuring NQA client template (DNS), 32
configuring NQA client template (FTP), 37
configuring NQA client template (HTTP), 36
configuring NQA client template (ICMP), 31
configuring NQA client template (RADIUS), 38
configuring NQA client template (TCP half open), 34
configuring NQA client template (TCP), 33
configuring NQA client template (UDP), 35
configuring NQA client template optional parameters, 39
configuring NQA client threshold monitoring, 26
configuring NQA client+Track collaboration, 26
configuring NQA collaboration, 62
configuring NQA operation (DHCP), 44
configuring NQA operation (DLSw), 59
configuring NQA operation (DNS), 45
configuring NQA operation (FTP), 47
configuring NQA operation (HTTP), 48
configuring NQA operation (ICMP echo), 40
configuring NQA operation (ICMP jitter), 42
configuring NQA operation (path jitter), 60
configuring NQA operation (SNMP), 51
configuring NQA operation (TCP), 53
configuring NQA operation (UDP echo), 54
configuring NQA operation (UDP jitter), 49
configuring NQA operation (UDP tracert), 55
configuring NQA operation (voice), 57
configuring NQA server, 9
configuring NQA template (DNS), 65
configuring NQA template (FTP), 69
configuring NQA template (HTTP), 68
configuring NQA template (ICMP), 64
configuring NQA template (RADIUS), 69
configuring NQA template (TCP half open), 67
configuring NQA template (TCP), 66
configuring NQA template (UDP), 67
configuring NTP, 77
configuring NTP access control rights, 81
configuring NTP association mode, 78
configuring NTP broadcast association mode, 79
configuring NTP broadcast association mode (on switch), 95
configuring NTP broadcast client, 79
configuring NTP broadcast mode authentication, 85
configuring NTP broadcast mode+authentication (on switch), 105
configuring NTP broadcast server, 80
configuring NTP client/server association mode, 78, 91
configuring NTP client/server mode authentication, 81
configuring NTP client/server mode+authentication, 103
configuring NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
configuring NTP dynamic associations max, 89
configuring NTP local clock as reference source, 90
configuring NTP multicast association mode, 80
configuring NTP multicast association mode (on switch), 97
configuring NTP multicast client, 80
configuring NTP multicast mode authentication, 86
configuring NTP multicast server, 80
configuring NTP optional parameters, 88
configuring NTP symmetric active/passive association mode, 79, 93
configuring NTP symmetric active/passive mode authentication, 83
configuring NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
configuring packet capture, 314
configuring packet capture (remote), 317
configuring packet capture file save, 320
configuring packet capture filtered data display, 318
configuring port mirroring monitor port to remote probe VLAN assignment, 227
configuring port mirroring remote destination group monitor port, 226
configuring port mirroring remote destination group on the destination device, 226
configuring port mirroring remote destination group remote probe VLAN, 226
configuring port mirroring remote source group egress port, 229
configuring port mirroring remote source group on source device, 227
configuring port mirroring remote source group reflector port, 228
configuring port mirroring remote source group remote probe VLAN, 230
configuring port mirroring remote source group source CPU, 228
configuring port mirroring remote source group source ports, 227
configuring remote packet capture, 315
configuring RMON alarm, 134, 137
configuring RMON Ethernet statistics group, 135
configuring RMON history group, 136
configuring RMON statistics, 133
configuring sampler (IPv4 NetStream), 216
configuring sFlow, 277, 280
configuring sFlow agent+collector information, 278
configuring sFlow counter sampling, 279
configuring sFlow flow sampling, 278
configuring SNMP basic parameters, 117
configuring SNMP logging, 123
configuring SNMP notification, 124
configuring SNMPv1, 126
configuring SNMPv1 basics, 117
configuring SNMPv1 host notification send, 124
configuring SNMPv2c, 126
configuring SNMPv2c basics, 117
configuring SNMPv2c host notification send, 124
configuring SNMPv3 basic parameters, 119
configuring SNMPv3 host notification send, 124
configuring SNTP, 77, 111, 113
configuring SNTP authentication, 112
configuring VCF fabric, 327
configuring VXLAN-aware NetStream, 255
creating Event MIB event, 142
creating local port mirroring group, 222
creating port mirroring remote destination group, 226
creating port mirroring remote source group, 227
creating RMON Ethernet statistics entry, 133
creating RMON history control entry, 133
creating sampler, 216
debugging feature module, 6
determining ping address reachability, 1
disabling information center interface link up/link down log generation, 296
disabling NTP message interface receiving, 89
displaying EAA settings, 210
displaying Event MIB, 147
displaying GOLD, 304
displaying information center, 297
displaying IPv6 NetStream, 271
displaying NetStream, 258
displaying NMM sFlow, 279
displaying NQA, 40
displaying NTP, 90
displaying packet capture, 317
displaying packet file content, 317
displaying port mirroring, 233
displaying RMON settings, 135
displaying sampler, 216
displaying SNMP settings, 126
displaying SNTP, 113
displaying VCF fabric, 329
enabling Event MIB SNMP notification, 146
enabling information center duplicate log suppression, 295
enabling information center synchronous log output, 295
enabling information center system log SNMP notification, 296
enabling NETCONF logging, 159
enabling NETCONF over SSH, 159
enabling NQA client, 10
enabling NTP, 78
enabling preprovisioning, 179
enabling SNMP notification, 124
enabling SNTP, 111
enabling VCF fabric topology discovery, 327
entering NETCONF XML view, 160
establishing NETCONF session, 160
exchanging NETCONF capabilities, 160
filtering feature image-based packet capture data display, 317
filtering NETCONF data, 179
filtering NETCONF data (conditional match), 184
filtering NETCONF data (regex match), 183
identifying tracert node failure, 4
loading NETCONF configuration, 173, 178
locking NETCONF configuration, 163, 164
maintaining GOLD, 304
maintaining information center, 297
maintaining IPv6 NetStream, 271
maintaining NetStream, 258
maintaining VCF fabric, 329
managing information center security log, 292
managing information center security log file, 292
outputting information center logs (console), 288
outputting information center logs (log buffer), 290
outputting information center logs (log host), 290
outputting information center logs (monitor terminal), 289
performing NETCONF CLI operations, 185, 186
performing NETCONF edit-config operation, 167
performing NETCONF get/get-bulk operation, 165
performing NETCONF get-config/get-bulk-config operation, 167
performing NETCONF save-point/begin operation, 174
performing NETCONF save-point/commit operation, 175
performing NETCONF save-point/end operation, 176
performing NETCONF save-point/get-commit-information operation, 177
performing NETCONF save-point/get-commits operation, 176
performing NETCONF save-point/rollback operation, 175
performing NETCONF service operations, 165
retrieving NETCONF configuration data (all modules), 168
retrieving NETCONF configuration data (Syslog module), 170
retrieving NETCONF data entry (interface table), 171
retrieving NETCONF information, 187
retrieving NETCONF session information, 188
retrieving NETCONF YANG file content information, 188
returning to NETCONF CLI, 191
rolling back NETCONF configuration, 173
rolling back NETCONF configuration (configuration file-based), 173
rolling back NETCONF configuration (rollback point-based), 174
saving feature image-based packet capture to file, 316
saving information center diagnostic logs (log file), 293
saving information center log (log file), 291
saving information center security logs (log file), 292
saving NETCONF configuration, 173, 178
scheduling NQA client operation, 30
setting NETCONF session idle timeout time, 160
setting NTP packet DSCP value, 90
simulating GOLD diagnostic tests, 303
specifying NTP message source interface, 88
specifying SNTP NTP server, 111
subscribing to NETCONF event notifications, 161, 162
suspending EAA monitor policy, 209
terminating NETCONF session, 190
testing network connectivity with ping, 1
troubleshooting sFlow remote collector cannot receive packets, 281
unlocking NETCONF configuration, 163, 164
process
automated underlay network provisioning, 325
protocols and standards
IPv6 NetStream, 265
NETCONF, 155, 157
NetStream, 251
NTP, 77
packet capture display filter keyword, 310
RMON, 133
sFlow, 277
SNMP configuration, 115
SNMP versions, 116
Q
QoS
flow mirroring configuration, 243, 245
flow mirroring QoS policy, 244
flow mirroring QoS policy application, 244
R
RADIUS
NQA client template, 38
NQA template configuration, 69
random mode (NMM sampler), 216
real-time
event manager. Use RTM
reflector port
Layer 2 remote port mirroring, 219
Layer 2 remote port mirroring (reflector port), 225
port mirroring remote source group reflector port, 228
refreshing
IPv6 NetStream v9/v10 template refresh rate, 268
NetStream v9/v10 template refresh rate, 254
regex match NETCONF data filtering, 183
regex match NETCONF data filtering (column-based), 181
regular expression. Use regex
relational
packet capture display filter configuration (relational expression), 314
packet capture operator, 307
remote
Layer 2 remote port mirroring, 224
Layer 2 remote port mirroring (egress port), 225
Layer 2 remote port mirroring (reflector port), 225
Layer 3 port mirroring local group, 231
Layer 3 port mirroring local group monitor port, 232
Layer 3 port mirroring local group source CPU, 232
Layer 3 port mirroring local group source port, 231
Layer 3 remote port mirroring configuration, 230
packet capture configuration, 315, 317
packet capture mode, 307
port mirroring, 220
port mirroring destination group, 226
port mirroring destination group creation, 226
port mirroring destination group monitor port, 226
port mirroring destination group remote probe VLAN, 226
port mirroring monitor port to remote probe VLAN assignment, 227
port mirroring source group, 227
port mirroring source group creation, 227
port mirroring source group egress port, 229
port mirroring source group reflector port, 228
port mirroring source group remote probe VLAN, 230
port mirroring source group source CPU, 228
port mirroring source group source ports, 227
Remote Network Monitoring. Use RMON
remote probe VLAN
Layer 2 remote port mirroring, 219
port mirroring monitor port to remote probe VLAN assignment, 227
port mirroring remote destination group, 226
port mirroring remote source group remote probe VLAN, 230
restrictions
automated overlay network deployment, 328
automated underlay network provisioning, 327
EAA monitor policy configuration, 206
Layer 3 remote port mirroring local group monitor port configuration, 232
Layer 3 remote port mirroring local group source port configuration, 231
local port mirroring group monitor port configuration, 224
local port mirroring group source port configuration, 223
NetStream aggregation data export, 257
NTP configuration, 77
SNTP configuration, 77
SNTP configuration restrictions, 111
VCF fabric configuration, 326
VCF fabric topology enabling, 327
retrieving
NETCONF configuration data (all modules), 168
NETCONF configuration data (Syslog module), 170
NETCONF data entry (interface table), 171
NETCONF information, 187
NETCONF session information, 188
NETCONF YANG file content, 188
returning
NETCONF CLI return, 191
RMON
alarm configuration, 134, 137
alarm group, 132
alarm group sample types, 133
configuration, 131
Ethernet statistics entry creation, 133
Ethernet statistics group, 131
Ethernet statistics group configuration, 135
event group, 131
Event MIB configuration, 139, 141
Event MIB event creation, 142
Event MIB notification action configuration, 143
Event MIB object list configuration, 141
Event MIB sampling configuration, 141
Event MIB set action configuration, 142
Event MIB trigger test configuration (Boolean), 149
Event MIB trigger test configuration (existence), 147
Event MIB trigger test configuration (threshold), 152
group, 131
history control entry creation, 133
history group, 131
history group configuration, 136
private alarm group, 132
protocols and standards, 133
settings display, 135
statistics configuration, 133
rollback
NETCONF save-point/begin operation, 174
NETCONF save-point/commit operation, 175
NETCONF save-point/end operation, 176
NETCONF save-point/get-commit-information operation, 177
NETCONF save-point/get-commits operation, 176
NETCONF save-point/rollback operation, 175
rolling back
NETCONF configuration, 173
NETCONF configuration (configuration file-based), 173
NETCONF configuration (rollback point-based), 174
routing
automated VCF fabric provisioning and deployment, 330
IPv6 NTP client/server association mode, 92
IPv6 NTP multicast association mode (on switch), 100
IPv6 NTP symmetric active/passive association mode, 94
NTP association mode, 78
NTP broadcast association mode (on switch), 95
NTP broadcast mode+authentication (on switch), 105
NTP client/server association mode, 91
NTP client/server mode+authentication, 103
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP configuration, 71, 77, 91
NTP multicast association mode (on switch), 97
NTP symmetric active/passive association mode, 93
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
SNTP configuration, 77, 111, 113
VCF fabric configuration, 327
RTM
EAA, 202
EAA configuration, 202, 210
rule
information center log default output rules, 284
SNMP access control (rule-based), 116
system information default output rules (diagnostic log), 284
system information default output rules (hidden log), 285
system information default output rules (security log), 284
system information default output rules (trace log), 285
runtime
EAA event monitor policy runtime, 204
S
sampler
configuration, 216
configuration (IPv4 NetStream), 216
creation, 216
displaying, 216
sampling
Event MIB sampling configuration, 141
IPv6 NetStream, 265
IPv6 NetStream configuration, 266
IPv6 NetStream sampling, 265
IPv6 NetStream sampling configuration, 267
NetStream configuration, 248, 252, 258
NetStream sampling, 251
NetStream sampling configuration, 253
Sampled Flow. Use sFlow
sFlow counter sampling, 279
sFlow flow sampling configuration, 278
saving
feature image-based packet capture to file, 316
information center diagnostic logs (log file), 293
information center log (log file), 291
information center security logs (log file), 292
NETCONF configuration, 173, 178
NQA client history records, 29
packet capture file save, 320
scheduling
NQA client operation, 30
security
information center security log file management, 292
information center security log management, 292
information center security log save (log file), 292
information center security logs, 283
NTP, 75
NTP access control rights, 81
NTP authentication, 75, 81
NTP broadcast mode authentication, 85
NTP client/server mode authentication, 81
NTP multicast mode authentication, 86
NTP symmetric active/passive mode authentication, 83
SNMP silence, 116
SNTP authentication, 112
server
NQA configuration, 9
NTP broadcast server configuration, 80
NTP multicast server configuration, 80
SNTP configuration, 77, 111, 113
SNTP NTP server specification, 111
service
NETCONF configuration data retrieval (all modules), 168
NETCONF configuration data retrieval (Syslog module), 170
NETCONF configuration load, 173
NETCONF configuration rollback, 173
NETCONF configuration save, 173
NETCONF data entry retrieval (interface table), 171
NETCONF edit-config operation, 167
NETCONF get/get-bulk operation, 165
NETCONF get-config/get-bulk-config operation, 167
NETCONF operations, 165
NETCONF parameter value change, 172
session
NETCONF session establishment, 160
NETCONF session idle timeout time, 160
NETCONF session information retrieval, 188
NETCONF session termination, 190
set operation
SNMP, 116
SNMP logging, 123
setting
information center log storage period (log buffer), 294
NETCONF session idle timeout time, 160
NTP packet DSCP value, 90
severity level (system information), 283
sFlow
agent+collector information configuration, 278
configuration, 277, 280
counter sampling configuration, 279
display, 279
flow sampling configuration, 278
protocols and standards, 277
troubleshoot, 281
troubleshoot remote collector cannot receive packets, 281
silence (SNMP), 116
Simple Network Management Protocol. Use SNMP
Simplified NTP. Use SNTP
simulating
GOLD diagnostic test simulation, 303
SNMP
access control mode, 116
agent, 115
agent notification, 124
basic parameter configuration, 117
configuration, 115
Event MIB configuration, 139, 141
Event MIB display, 147
Event MIB event creation, 142
Event MIB notification action configuration, 143
Event MIB object list configuration, 141
Event MIB sampling configuration, 141
Event MIB set action configuration, 142
Event MIB SNMP notification enable, 146
Event MIB trigger test configuration, 143
Event MIB trigger test configuration (Boolean), 144, 149
Event MIB trigger test configuration (existence), 145, 147
Event MIB trigger test configuration (threshold), 145, 152
FIPS compliance, 117
framework, 115
Get operation, 116
get operation, 123
host notification send, 124
information center system log SNMP notification, 296
logging configuration, 123
manager, 115
MIB, 115
MIB view-based access control, 115
notification configuration, 124
notification enable, 124
Notification operation, 116
NQA client operation, 17
NQA operation configuration, 51
protocol versions, 116
RMON configuration, 131
Set operation, 116
set operation, 123
settings display, 126
silence, 116
SNMPv1 basic parameter configuration, 117
SNMPv1 configuration, 126
SNMPv2c basic parameter configuration, 117
SNMPv2c configuration, 126
SNMPv3 basic parameter configuration, 119
SNMPv1
basic parameter configuration, 117
configuration, 126
host notification send, 124
Notification operation, 116
protocol version, 116
SNMPv2c
basic parameter configuration, 117
configuration, 126
host notification send, 124
Notification operation, 116
protocol version, 116
SNMPv3
basic parameter configuration, 119
Event MIB object owner, 139
Notification operation, 116
notification send, 124
protocol version, 116
SNTP
authentication, 112
configuration, 77, 111, 113
configuration restrictions, 77, 111
display, 113
enable, 111
NTP server specification, 111
SOAP
NETCONF message format, 156
NETCONF over SOAP configuration, 158
source
port mirroring source, 218
port mirroring source device, 218
specifying
NTP message source interface, 88
SNTP NTP server, 111
SSH
NETCONF over SSH enable, 159
statistics
IPv6 NetStream configuration, 263, 266, 272
IPv6 NetStream data export format, 265
IPv6 NetStream filtering, 265
IPv6 NetStream filtering configuration, 266
IPv6 NetStream sampling, 265
IPv6 NetStream sampling configuration, 267
MPLS-aware IPv6 NetStream, 269
MPLS-aware NetStream, 255
NetStream configuration, 248, 252, 258
NetStream filtering, 251
NetStream filtering configuration, 252
NetStream sampling, 251
NetStream sampling configuration, 253
NQA client statistics collection, 29
RMON configuration, 131
RMON Ethernet statistics entry, 133
RMON Ethernet statistics group, 131
RMON Ethernet statistics group configuration, 135
RMON history control entry, 133
RMON statistics configuration, 133
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sampler creation, 216
sFlow agent+collector information configuration, 278
sFlow configuration, 277, 280
sFlow counter sampling configuration, 279
sFlow flow sampling configuration, 278
VXLAN-aware NetStream, 255
storage
information center log storage period (log buffer), 294
subscribing
NETCONF event notification, 161, 162
suppressing
information center duplicate log suppression, 295
information center log suppression, 296
suspending
EAA monitor policy, 209
switch
module debug, 6
screen output, 6
switching
automated VCF fabric provisioning and deployment, 330
VCF fabric configuration, 327
symmetric
IPv6 NTP symmetric active/passive association mode, 94
NTP symmetric active/passive association mode, 73, 79, 83, 93
NTP symmetric active/passive mode dynamic associations max, 89
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
synchronizing
information center synchronous log output, 295
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP configuration, 71, 77, 91
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
SNTP configuration, 77, 111, 113
Syslog
NETCONF configuration data retrieval (Syslog module), 170
system
default output rules (diagnostic log), 284
default output rules (hidden log), 285
default output rules (security log), 284
default output rules (trace log), 285
information center duplicate log suppression, 295
information center interface link up/link down log generation, 296
information center log destinations, 284
information center log levels, 283
information center log output (console), 288
information center log output (log buffer), 290
information center log output (log host), 290
information center log output (monitor terminal), 289
information center log output configuration (console), 297
information center log output configuration (Linux log host), 299
information center log output configuration (UNIX log host), 298
information center log save (log file), 291
information center log suppression, 296
information center log types, 283
information center security log file management, 292
information center security log management, 292
information center security log save (log file), 292
information center synchronous log output, 295
information center system log SNMP notification, 296
information log formats, 285
log default output rules, 284
system administration
debugging, 1
feature module debug, 6
ping, 1
ping address reachability, 1
ping command, 1
ping network connectivity test, 1
system debugging, 5
tracert, 1, 3
tracert node failure identification, 4
system debugging
module debugging switch, 6
screen output switch, 6
system information
information center configuration, 283, 288, 297
T
Tcl
EAA configuration, 202, 210
EAA monitor policy configuration, 208, 214
TCP
NQA client operation, 18
NQA client template, 33
NQA client template (TCP half open), 34
NQA operation configuration, 53
NQA template configuration, 66
NQA template configuration (half open), 67
template
NetStream v9/v10 template refresh rate, 254
NQA client template (DNS), 32
NQA client template (FTP), 37
NQA client template (HTTP), 36
NQA client template (ICMP), 31
NQA client template (RADIUS), 38
NQA client template (TCP half open), 34
NQA client template (TCP), 33
NQA client template (UDP), 35
NQA client template configuration, 30
NQA client template optional parameters, 39
NQA template configuration (DNS), 65
NQA template configuration (FTP), 69
NQA template configuration (HTTP), 68
NQA template configuration (ICMP), 64
NQA template configuration (RADIUS), 69
NQA template configuration (TCP half open), 67
NQA template configuration (TCP), 66
NQA template configuration (UDP), 67
template file
automated underlay network provisioning, 325
terminating
NETCONF session, 190
testing
Event MIB trigger test (Boolean), 139
Event MIB trigger test (existence), 139
Event MIB trigger test (threshold), 139
Event MIB trigger test configuration, 143
Event MIB trigger test configuration (Boolean), 144, 149
Event MIB trigger test configuration (existence), 145, 147
Event MIB trigger test configuration (threshold), 145, 152
GOLD diagnostic test simulation, 303
ping network connectivity test, 1
threshold
Event MIB trigger test, 140
Event MIB trigger test configuration, 145, 152
NQA client threshold monitoring, 8, 26
NQA operation reaction entry, 27
NQA operation support accumulate type, 26
NQA operation support average type, 26
NQA operation support consecutive type, 26
NQA operation triggered action none, 26
NQA operation triggered action trap-only, 26
NQA operation triggered action trigger-only, 26
time
NTP configuration, 71, 77, 91
NTP local clock as reference source, 90
SNTP configuration, 77, 111, 113
timeout
NETCONF session idle timeout time, 160
timer
SNMP silence, 116
topology
VCF fabric topology, 321
VCF fabric topology discovery, 324
traceroute. Use tracert
tracert
IP address retrieval, 3
node failure detection, 3, 4
NQA client operation (UDP tracert), 20
NQA operation configuration (UDP tracert), 55
system maintenance, 1
tracing
information center trace log file max size, 294
Track
EAA event monitor policy configuration, 211
NQA client+Track collaboration, 26
NQA collaboration, 8
NQA collaboration configuration, 62
traditional
IPv6 NetStream data export, 264, 270, 272
traditional NetStream
data export configuration, 258
traditional NetStream data export, 249, 256
traffic
IPv6 NetStream configuration, 263, 266, 272
IPv6 NetStream filtering, 265
IPv6 NetStream filtering configuration, 266
IPv6 NetStream flow aging, 269
IPv6 NetStream flow aging methods, 269
IPv6 NetStream sampling, 265
IPv6 NetStream sampling configuration, 267
NetStream configuration, 248, 252, 258
NetStream enable, 252
NetStream filtering, 251
NetStream filtering configuration, 252
NetStream flow aging, 255
NetStream flow aging methods, 255
NetStream sampling, 251
NetStream sampling configuration, 253
NQA client operation (voice), 21
RMON configuration, 131
sampler configuration, 216
sampler configuration (IPv4 NetStream), 216
sampler creation, 216
sFlow agent+collector information configuration, 278
sFlow configuration, 277, 280
sFlow counter sampling configuration, 279
sFlow flow sampling configuration, 278
trapping
Event MIB SNMP notification enable, 146
information center system log SNMP notification, 296
SNMP notification, 124
triggering
Event MIB trigger test configuration, 143
Event MIB trigger test configuration (Boolean), 144, 149
Event MIB trigger test configuration (existence), 145, 147
Event MIB trigger test configuration (threshold), 145, 152
NQA operation threshold triggered action none, 26
NQA operation threshold triggered action trap-only, 26
NQA operation threshold triggered action trigger-only, 26
troubleshooting
sFlow, 281
sFlow remote collector cannot receive packets, 281
U
UDP
IPv6 NetStream v10 data export format, 265
IPv6 NetStream v9 data export format, 265
IPv6 NTP client/server association mode, 92
IPv6 NTP multicast association mode (on switch), 100
IPv6 NTP symmetric active/passive association mode, 94
NQA client operation (UDP echo), 19
NQA client operation (UDP jitter), 16
NQA client operation (UDP tracert), 20
NQA client template, 35
NQA operation configuration (UDP echo), 54
NQA operation configuration (UDP jitter), 49
NQA operation configuration (UDP tracert), 55
NQA template configuration, 67
NTP association mode, 78
NTP broadcast association mode (on switch), 95
NTP broadcast mode+authentication (on switch), 105
NTP client/server association mode, 91
NTP client/server mode+authentication, 103
NTP client/server mode+MPLS L3VPN network time synchronization (on switch), 107
NTP configuration, 71, 77, 91
NTP multicast association mode (on switch), 97
NTP symmetric active/passive association mode, 93
NTP symmetric active/passive mode+MPLS L3VPN network time synchronization (on switch), 109
sFlow configuration, 277, 280
underlay network
automated provisioning, 324
UNIX
information center log host output configuration, 298
unlocking
NETCONF configuration, 163, 164
V
value
NETCONF parameter value change, 172
variable
EAA environment variable configuration (user-defined), 205
EAA event monitor policy environment (user-defined), 205
EAA event monitor policy environment system-defined (event-specific), 204
EAA event monitor policy environment system-defined (public), 204
EAA event monitor policy environment variable, 204
EAA monitor policy configuration (CLI-defined+environment variables), 213
packet capture, 307
VCF fabric
automated overlay network deployment, 326, 328
automated provisioning and deployment, 324
automated underlay network provisioning, 324, 327
configuration, 327
display, 329
maintain, 329
Neutron component, 322
Neutron concepts, 322
Neutron deployment, 323
overview, 321
topology, 321
VCF fabric configuration
guidelines, 326
restrictions, 326
VCF fabric topology discovery, 324
enable, 327
VCF fabric topology enabling
guidelines, 327
restrictions, 327
version
IPv6 NetStream v10 data export format, 265
IPv6 NetStream v9 data export format, 265
IPv6 NetStream v9/v10 template refresh rate, 268
NetStream v10 export format, 251
NetStream v5 export format, 251
NetStream v8 export format, 251
NetStream v9 export format, 251
NetStream v9/v10 template refresh rate, 254
view
SNMP access control (view-based), 116
Virtual converged framework. Use VCF fabric
VLAN
flow mirroring configuration, 243, 245
flow mirroring QoS policy application, 245
Layer 2 remote port mirroring (egress port), 225
Layer 2 remote port mirroring (reflector port), 225
Layer 2 remote port mirroring configuration, 224
Layer 3 remote port mirroring configuration, 230
local port mirroring configuration, 222
local port mirroring group monitor port, 223
local port mirroring group source port, 222
packet capture filter configuration (vlan vlan_id expression), 313
port mirroring configuration, 218, 233
port mirroring remote probe VLAN, 219
port mirroring remote source group remote probe VLAN, 230
VCF fabric overview, 321
voice
NQA client operation, 21
NQA operation configuration, 57
VPN
NTP MPLS L3VPN instance support, 76
VXLAN
automated VCF fabric provisioning and deployment, 330
VCF fabric overview, 321
VXLAN-aware NetStream, 255
X
XML
NETCONF capability exchange, 160
NETCONF configuration, 155, 158
NETCONF data filtering, 179
NETCONF data filtering (conditional match), 184
NETCONF data filtering (regex match), 183
NETCONF message format, 156
NETCONF structure, 155
NETCONF XML view, 160
XSD
NETCONF message format, 156
Y
YANG
NETCONF YANG file content retrieval, 188