09-AD-DC 6.3 SeerAnalyzer Configuration Guide

Released At: 01-06-2023

AD-DC 6.3
SeerAnalyzer Configuration Guide

Document version: 5W100-20230513

Copyright © 2023 New H3C Technologies Co., Ltd. All rights reserved.

No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd.

Except for the trademarks of New H3C Technologies Co., Ltd., any trademarks that may be mentioned in this document are the property of their respective owners.

This document provides generic technical information, some of which might not be applicable to your products.

The information in this document is subject to change without notice.


Contents

Overview
  Terms
  Networking
    Network configuration
Configure basic network settings
  Configuration workflow
  Network configuration
  Procedure
    Configure network devices
    Add network assets
    Configure protocol templates
    Set the protocols
Configure network health
  Configuration workflow
  Network configuration
  Procedure
    Configure basic network settings
    Start parsing tasks on the analyzer
  Verify the configuration
  Restrictions and guidelines
Configure health summary
  Configuration workflow
  Network configuration
  Procedure
    Configure basic network settings
    Start parsing tasks on the analyzer
    Obtain topology
    Configure traffic heatmap
  Verify the configuration
  Restrictions and guidelines
Configure packet loss analysis
  Configuration workflow
  Network diagram
  Procedure
    Configure basic network settings
    Configure device settings
    Configure applications
    Start parsing tasks on the analyzer
  Verify the configuration
  Restrictions and guidelines
Configure change analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure basic network settings
    Start parsing tasks on the analyzer
  Verify the configuration
  Restrictions and guidelines
Configure issue center (problem center)
  Configuration workflow
  Network configuration
  Procedure
    Configure network devices
    Manage assets
    Start parsing tasks on the analyzer
  Verify the configuration
  Restrictions and guidelines
Configure switch logins
  Configuration workflow
  Network configuration
  Procedure
    Configure network devices
    Manage assets
    Add a widget
  Verify the configuration
  Restrictions and guidelines
Configure intent verification
  Configuration workflow
  Network configuration
  Procedure
    Configure network devices
    Manage assets
    Configure intent verification settings
  Verify the configuration
  Restrictions and guidelines
Configure TCP flow analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure device settings
    Configure collector settings
    Configure flow analysis
    Start the parsing task
  Verify the configuration
  Restrictions and guidelines
Configure illegal analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure device settings
    Configure collector settings
    Configure flow analysis
    Configure parsing tasks
  Verify the configuration
  Restrictions and guidelines
Configure application health
  Configuration workflow
  Network configuration
  Procedure
    Configure device settings
    Configure collector settings
    Configure settings on the application health page
    Start parsing tasks
  Verify the configuration
Configure issue analysis
  Configuration workflow
    Network issue workflow
    Application issue workflow
  Network configuration
  Procedure
    Configure network issues
    Configure application issues
  Verify the configuration
  Restrictions and guidelines
Configure UDP flow analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure device settings
    Configure collector settings
    Configure flow analysis
    Configure parsing tasks
  Verify the configuration
  Restrictions and guidelines
Configure INT flow analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure INT settings on the device
    Configure collector settings
    Configure applications
    Configure parsing tasks
  Verify the configuration
  Restrictions and guidelines
Configure intelligent prediction
  Configuration workflow
  Network configuration
  Procedure
    Configure basic network settings
    Start parsing tasks on the analyzer
    Enable AI prediction
  Verify the configuration
  Restrictions and guidelines
Configure health report
  Configuration workflow
  Network configuration
  Procedure
    Configure mail server settings
    Create a network-wide health report task
    Immediately generate a health report
  Verify the configuration
  Restrictions and guidelines
Configure RoCE network analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure switch settings
    Configure RoCE server settings
    Configure RoCE-associated parsing tasks
    Configure server and cluster settings for RoCE network analysis
  Verify the configuration
  Restrictions and guidelines
Configure the cross-DC network
  Configuration workflow
  Network configuration
  Procedure
    Configure site settings
    Configure analyzer settings
    Configure fabric settings
    Configure egress link settings
    Configure application settings
  Verify the configuration
  Restrictions and guidelines
Configure link analysis
  Configuration workflow
  Network configuration
  Procedure
    Configure basic network settings
    Configure device settings
    Configure link settings
    Start parsing tasks for the analyzer
  Verify the configuration
  Restrictions and guidelines
Configure vSwitch health monitoring (OVS)
  Configuration workflow
  Network configuration
  Procedure
    Configure a data source for the controller
    Add vSwitch assets
  Verify the configuration
  Restrictions and guidelines
FAQ

 


Overview

Analyzer focuses on extracting value from machine data. Built on big data technology, Analyzer mines valuable information from massive data sets and, through methods such as machine learning and deep learning, provides references for enterprise network and service operations and for business decision making. Analyzer collects real-time data about device performance, user online status, and service traffic, visualizes the network operating status, and uses big data analytics and AI algorithms to proactively perceive risks and raise alarms automatically.

Analyzer analyzes network device running data, network service application traffic data, and user access and network usage data.

Analyzer in the DC scenario is designed to ensure high availability and low latency for the DC network. Through continuous, full collection of network device running information, Analyzer establishes a network-wide health assessment system and supports TCP/UDP session analysis, application visibility and analysis, chip-level buffer monitoring, and packet loss analysis. Analyzer provides full support and assurance for DC network O&M.

Terms

Table 1 Terms

·     SNMP: Simple Network Management Protocol, used to remotely manage and operate network devices.

·     NETCONF: Network Configuration Protocol, used to configure and manage network devices. It supports programming.

·     NetStream: A flow-based statistics technology used to collect and analyze statistics about service traffic in the network.

·     ERSPAN: Layer 3 remote port mirroring. ERSPAN encapsulates mirrored traffic in GRE packets with protocol number 0x88BE and routes the traffic to a remote monitoring device for data monitoring.

·     Syslog: Protocol used to record system log messages.

·     Telemetry: A network monitoring technology in which devices collect data and report it to collectors.

·     gRPC: Google Remote Procedure Call, used to configure and manage devices. It supports programming in multiple languages.

·     INT: In-Band Telemetry, a network monitoring technology that collects data on devices and reports the collected data to collectors. The collectors analyze the received data to monitor device performance and network running conditions.

·     TCB: Transient Capture Buffer, a technology that uses memory management units (MMUs) to monitor packets dropped from queues.

·     MOD: Mirror On Drop, which detects packets dropped during the forwarding process on the device.

·     DRNI: Distributed Resilient Network Interconnect, a cross-device link aggregation technology. It aggregates two physical devices at the aggregation layer into one logical device to provide device-level redundancy and load sharing.

·     PFC: Priority-based flow control, a per-priority flow control mechanism. It can meet the requirement for zero packet loss in Ethernet traffic transmission and provide lossless service over Ethernet.

·     ECN: Explicit Congestion Notification, an end-to-end congestion notification mechanism defined in RFC 2481. ECN uses the DS field in IP headers to mark the congestion state of packets along the transmission path. An endpoint that supports ECN can identify congestion on the path from packet contents and adjust how it sends packets to avoid worsening the congestion.

·     RoCE: RDMA over Converged Ethernet, a network protocol that allows Remote Direct Memory Access (RDMA) over Ethernet.

 

Networking

WARNING!

The networking solution in this document uses the southbound single stack, and the collection network uses the IPv4 protocol.

 

·     Northbound network—Northbound service virtual IP set in the Unified Platform. The IP address is used by the cluster to provide external services.

·     Southbound network—Network used by the analyzer's collection component or an independent collector to receive collected data from devices. Make sure the southbound network and the devices to be collected can reach each other. Currently, the southbound network supports the following networking solutions. Select one as needed.

¡     Unified southbound and northbound network—The analyzer and the data collection system share the network of the Unified Platform, and no additional network is created.

¡     Southbound single stack—In this networking solution, the data collection system uses a separate network, which can use the IPv4 or IPv6 network.

¡     Southbound dual stack—In this networking solution, the data collection system uses a separate network, which must be configured with both IPv4 and IPv6 addresses.

Network configuration

·     The southbound network and northbound network of Analyzer are separated. The southbound network can be on the same subnet as the device management interfaces or on a different subnet from the device management interfaces.

·     Traffic of ERSPAN, INT, and telemetry stream is sent to the collector NICs through in-band service interfaces.

·     Leaf nodes and border nodes use M-LAG to avoid single points of failure. On this network, the keepalive links of the M-LAG systems reuse the device management network addresses. For actual networks, allocate addresses as needed.

·     The collectors are used to collect ERSPAN, INT, and telemetry stream traffic. As a best practice, use the H3CLinux operating system provided with Unified Platform (versions earlier than E0707), and make sure the collector NICs support DPDK. For common NICs that support DPDK, see the analyzer installation and deployment guide.

Figure 1 Network diagram

 

Table 2 Device and server interface IP address details

Device

Interface

IP address

Remarks

Northbound VIP of the Unified Platform

\

192.168.12.145

Northbound VIP of the cluster

Southbound passive collection IP address of Analyzer

\

192.168.16.100

Southbound passive collection VIP

Southbound active collection IP address of Analyzer

\

192.168.16.104

Southbound active collection VIP

SA001

ethipv4 (connecting to MGT)

192.168.12.141

Northbound network address of node 1

enp61s0f0 (connecting to MGT)

192.168.16.101

Southbound passive collection pod address of node 1

enp61s0f0 (connecting to MGT)

192.168.16.105

Southbound active collection pod address of node 1

SA002

ethipv4 (connecting to MGT)

192.168.12.142

Northbound network address of node 2

enp61s0f0 (connecting to MGT)

192.168.16.102

Southbound passive collection pod address of node 2

enp61s0f0 (connecting to MGT)

192.168.16.106

Southbound active collection pod address of node 2

SA003

ethipv4 (connecting to MGT)

192.168.12.143

Northbound network address of node 3

enp61s0f0 (connecting to MGT)

192.168.16.103

Southbound passive collection pod address of node 3

enp61s0f0 (connecting to MGT)

192.168.16.107

Southbound active collection pod address of node 3

Collector

enp61s0f0 (connecting to MGT)

192.168.12.146

Collector management IP

enp61s0f3 (connecting to leaf1: WGE1/0/31)

11.1.1.3

Collection NIC address of the collector

\

11.1.1.2

Floating IP address of the collector

Management switch

MGT

vlan-int10

192.168.12.1

Gateway of the analyzer's northbound network

vlan-int11

192.168.16.1

Gateway of the analyzer's southbound network

vlan-int21

192.168.11.1

Gateway of the device management network

leaf1

MGE0/0/0

192.168.12.23

Device management address

WGE1/0/1 (connecting to spine1: WGE1/0/1)

10.1.1.2

Underlay interface address

WGE1/0/11 (connecting to spine2: WGE1/0/11)

10.2.1.2

Underlay interface address

WGE1/0/21 (connecting to leaf2: WGE1/0/21)

Int-Vlan4094

69.1.1.11

IPL interface

WGE1/0/31 (connecting to the collector)

11.1.1.1

Collection NIC interconnection address of the collector

Loop0

2.1.1.11

Loopback interface address

Loop1

3.1.1.11

Loopback interface address, M-LAG group address

Leaf2

MGE0/0/0

192.168.12.24

Device management address

WGE1/0/1 (connecting to spine1: WGE1/0/2)

10.1.2.2

Underlay interface address

WGE1/0/11 (connecting to spine2: WGE1/0/12)

10.2.2.2

Underlay interface address

WGE1/0/21 (connecting to leaf1: WGE1/0/21)

Int-Vlan4094

69.1.1.111

IPL interface

Loop0

2.1.1.111

Loopback interface address

Loop1

3.1.1.11

Loopback interface address, M-LAG group address

Leaf3

MGE0/0/0

192.168.12.25

Device management address

WGE1/0/1 (connecting to spine1: WGE1/0/3)

10.1.3.2

Underlay interface address

WGE1/0/11 (connecting to spine2: WGE1/0/13)

10.2.3.2

Underlay interface address

WGE1/0/21 (connecting to leaf4: WGE1/0/21)

Int-Vlan4094

68.1.1.3

IPL interface

Loop0

2.1.1.22

Loopback interface address

Loop1

3.1.1.111

Loopback interface address, M-LAG group address

Leaf4

MGE0/0/0

192.168.12.26

Device management address

WGE1/0/1 (connecting to spine1: WGE1/0/4)

10.1.4.2

Underlay interface address

WGE1/0/11 (connecting to spine2: WGE1/0/14)

10.2.4.2

Underlay interface address

WGE1/0/21 (connecting to leaf3: WGE1/0/21)

Int-Vlan4094

68.1.1.4

IPL interface

Loop0

2.1.1.222

Loopback interface address

Loop1

3.1.1.111

Loopback interface address, M-LAG group address

Border1

MGE0/0/0

192.168.12.27

Device management address

WGE1/0/1 (connecting to spine1: WGE1/0/5)

10.1.5.2

Underlay interface address

WGE1/0/11 (connecting to spine2: WGE1/0/15)

10.2.5.2

Underlay interface address

WGE1/0/21 (connecting to border2: WGE1/0/21)

Int-Vlan4094

70.1.1.1

IPL interface

Loop0

2.1.1.21

Loopback interface address

Loop1

3.1.1.211

Loopback interface address, M-LAG group address

Border2

MGE0/0/0

192.168.12.28

Device management address

WGE1/0/1 (connecting to spine1: WGE1/0/6)

10.1.6.2

Underlay interface address

WGE1/0/11 (connecting to spine2: WGE1/0/16)

10.2.6.2

Underlay interface address

WGE1/0/21 (connecting to border1: WGE1/0/21)

Int-Vlan4094

70.1.1.2

IPL interface

Loop0

2.1.1.21

Loopback interface address

Loop1

3.1.1.211

Loopback interface address, M-LAG group address

Spine1

MGE0/0/0

192.168.12.29

Device management address

WGE1/0/1 (connecting to leaf1: WGE1/0/1)

10.1.1.1

Underlay interface address

WGE1/0/2 (connecting to leaf2: WGE1/0/1)

10.1.2.1

Underlay interface address

WGE1/0/3 (connecting to leaf3: WGE1/0/1)

10.1.3.1

Underlay interface address

WGE1/0/4 (connecting to leaf4: WGE1/0/1)

10.1.4.1

Underlay interface address

WGE1/0/5 (connecting to border1: WGE1/0/1)

10.1.5.1

Underlay interface address

WGE1/0/6 (connecting to border2: WGE1/0/1)

10.1.6.1

Underlay interface address

Loop0

2.1.1.10

Loopback interface address

Spine2

MGE0/0/0

192.168.12.30

Device management address

WGE1/0/11 (connecting to leaf1: WGE1/0/11)

10.2.1.1

Underlay interface address

WGE1/0/12 (connecting to leaf2: WGE1/0/11)

10.2.2.1

Underlay interface address

WGE1/0/13 (connecting to leaf3: WGE1/0/11)

10.2.3.1

Underlay interface address

WGE1/0/14 (connecting to leaf4: WGE1/0/11)

10.2.4.1

Underlay interface address

WGE1/0/15 (connecting to border1: WGE1/0/11)

10.2.5.1

Underlay interface address

WGE1/0/16 (connecting to border2: WGE1/0/11)

10.2.6.1

Underlay interface address

Loop0

2.1.1.10

Loopback interface address

 

IMPORTANT:

Use the same NIC for the southbound passive collection pod and southbound active collection pod.

 

 


Configure basic network settings

Configuration workflow

Figure 2 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure network devices

Configure routing

# Configure the static route from the network device to the southbound collection network of the analyzer. If the controller has deployed this route, skip this step.

#

[Device] ip route-static 192.168.16.0 24 192.168.11.1

#

Configure the log host

#

[Device] info-center loghost source MGE 0/0/0

[Device] info-center loghost 192.168.16.100 facility local5

#

 

 

NOTE:

The IP address 192.168.16.0 here is the southbound network address. The IP address 192.168.11.1 is the device management network gateway used for southbound communication between the device management network and the controller. Configure the settings as needed.
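To confirm that log messages from the device actually reach the collection address configured above, a minimal UDP listener can be run on the analyzer host. This is a generic troubleshooting sketch, not part of the product; the port is a parameter because binding to the standard syslog port 514 normally requires root privileges.

```python
import socket

def receive_syslog(bind_ip="0.0.0.0", port=514, timeout=5.0):
    """Wait for one UDP syslog datagram and return (sender_ip, text).

    Binding to port 514 usually requires root privileges; pass a
    high port for unprivileged testing.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind((bind_ip, port))
    try:
        data, addr = sock.recvfrom(4096)
        return addr[0], data.decode(errors="replace")
    finally:
        sock.close()
```

Run it on the collector host, then trigger a log on the device (for example, by logging in and out); the `<PRI>` value at the start of each received message encodes the facility (local5, as configured above) and severity.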

 

Configure SNMP

#

[Device] snmp-agent 

[Device] snmp-agent community write private

[Device] snmp-agent community read public

[Device] snmp-agent sys-info version v2c v3

[Device] snmp-agent target-host trap address udp-domain 192.168.16.100 params securityname public v2c

[Device] snmp-agent trap enable arp 

[Device] snmp-agent trap enable l2vpn

[Device] snmp-agent trap enable radius 

[Device] snmp-agent trap enable stp

[Device] snmp-agent trap source M-GigabitEthernet0/0/0

#

Configure NETCONF

#

[Device] netconf ssh server enable

#

Configure SSH

#

[Device] ssh server enable

#

Configure a local user

# Configure the username and password as admin and Qwert@1234, respectively.

[Device] local-user admin class manage

[Device-luser-manage-admin] password simple Qwert@1234

[Device-luser-manage-admin] service-type ftp

[Device-luser-manage-admin] service-type telnet http https ssh

[Device-luser-manage-admin] authorization-attribute user-role network-admin

[Device-luser-manage-admin] authorization-attribute user-role network-operator

[Device-luser-manage-admin] quit

[Device] line vty 0 63

[Device-line-vty0-63] authentication-mode scheme

[Device-line-vty0-63] user-role network-admin

[Device-line-vty0-63] user-role network-operator

[Device-line-vty0-63] quit

#

 

 

NOTE:

When the underlay is not automatically deployed, the configurations in the sections above in "Configure network devices" must be manually deployed.

 

Configure gRPC

The controller supports deploying gRPC configuration used for device data collection.

To configure gRPC:

1.     Add a collector.

Navigate to the Analytics > Data Collection > Telemetry page to add a collector:

¡     Set the IP address to 192.168.16.100 (the southbound passive collection VIP of the analyzer) and the port number to 50051 for collecting CPU, memory, interface, and cache queue data from devices through gRPC.

Figure 3 Adding a gRPC collector on the controller

 

2.     Add a collection template.

Navigate to the Analytics > Data Collection > Telemetry > gRPC page. Click the Collection Template tab, and click Edit. Select collection paths, and set the default push interval to 60 seconds. You can edit the collection template as needed. As a best practice, include at least collection paths for collecting the following information: device information, interface information, error packet statistics, entry resources, and change analysis. Configure other collection paths depending on your requirements and device support.

Figure 4 gRPC collection module

 

3.     Deploy the configuration through the controller.

When you use the controller to deploy configuration, follow these steps:

Click Edit.

Figure 5 Clicking Edit

 

Select the corresponding sensor paths, and click Save.

Figure 6 Saving the configuration

 

Select sensor paths as needed.

For periodic collection, specify sensor paths of the same collection interval in one collection group.

Device information collection:

  sensor path device/base                            //Used for collecting device information

  sensor path device/boards                            //Used for collecting device information

  sensor path device/extphysicalentities              //Used for collecting device information

  sensor path device/physicalentities               //Used for collecting device information

  sensor path device/transceivers                     //Used for collecting device transceiver module information                        

  sensor path device/transceiverschannels                     //Used for collecting device transceiver module information                        

 

Interface information collection:

  sensor path ifmgr/ethportstatistics                //Used for collecting device interface statistics                         

  sensor path ifmgr/interfaces                            //Used for collecting device interface information                         

  sensor path ifmgr/statistics                             //Used for collecting device interface statistics     

(Optional.) Device buffer monitoring information: 

  sensor path buffermonitor/bufferusages               //Used for collecting buffermonitor data  

sensor path buffermonitor/commbufferusages         //Used for collecting buffermonitor data    

  sensor path buffermonitor/commheadroomusages      //Used for collecting buffermonitor data

  sensor path buffermonitor/ecnandwredstatistics   //Used for collecting buffermonitor data

  sensor path buffermonitor/egressdrops      //Used for collecting buffermonitor data

  sensor path buffermonitor/ingressdrops     //Used for collecting buffermonitor data

sensor path buffermonitor/pfcspeeds    //Used for collecting buffermonitor data

sensor path buffermonitor/pfcstatistics    //Used for collecting buffermonitor data

Entry resource collection:

  sensor path resourcemonitor/monitors       //Used for collecting entry resources, which replaces NETCONF collection

  sensor path resourcemonitor/resources     //Used for collecting entry resources, which replaces NETCONF collection    

Change analysis collection:

  sensor path route/ipv4routes  //Used for collecting configuration change entries

  sensor path route/ipv6routes //Used for collecting configuration change entries

  sensor path lldp/lldpneighbors //Used for collecting configuration change entries

  sensor path mac/macunicasttable //Used for collecting configuration change entries

sensor path arp/arptable //Used for collecting configuration change entries

sensor path nd/ndtable  //Used for collecting configuration change entries

The following paths support incremental data reporting. (As a best practice, use incremental data reporting.) For change analysis, you do not need to configure both full data reporting and incremental data reporting for a sensor path. If you have configured incremental data reporting for a sensor path, you do not need to configure full data reporting. Support for sensor paths depends on the device model. As a best practice, configure the collection interval as 3600 seconds.

sensor path arp_event/arptableevent   //Used for collecting configuration change entries, incremental data reporting

sensor path mac/overlaymacevent   //Used for collecting configuration change entries, incremental data reporting

sensor path mac/underlaymacevent   //Used for collecting configuration change entries, incremental data reporting

sensor path nd/ndtableevent   //Used for collecting configuration change entries, incremental data reporting

sensor path route_stream/ipv4routeevent   //Used for collecting configuration change entries, incremental data reporting

sensor path route_stream/ipv6routeevent   //Used for collecting configuration change entries, incremental data reporting

(Optional.) Event-triggered data collection:

  sensor path buffermonitor/portquedropevent   //Queue packet drop alarm   

  sensor path buffermonitor/portqueoverrunevent   //Queue threshold crossing alarm

  sensor path tcb/tcbpacketinfoevent //TCB sensor path

sensor path telemetryftrace/genevent //MOD data sensor path
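As noted earlier, periodic sensor paths that share a sample interval must be placed in the same collection group. A small illustrative helper (not a product API) that groups paths by interval, for example to prepare the sensor-group CLI lines:

```python
from collections import defaultdict

def group_sensor_paths(paths_with_intervals):
    """Group sensor paths by sample interval so that each telemetry
    sensor group contains only paths collected at the same rate."""
    groups = defaultdict(list)
    for path, interval in paths_with_intervals:
        groups[interval].append(path)
    return dict(groups)
```

For example, device/base and ifmgr/interfaces at 60 seconds would share one group, while the change-analysis paths collected at 3600 seconds would form another.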

4.     Perform manual configuration.

For the controller-issued paths, see the previous step and select the settings as needed. This step illustrates the detailed configuration procedure. The sensor group, destination group, and subscription name are customizable.

¡     Enable gRPC globally:

[Device] grpc enable   //Enable the gRPC service.

¡     Configure periodic data collection:

[Device] telemetry   //Enter Telemetry view.

[Device-telemetry] sensor-group group_grpc  //Create a sensor group.

[Device-telemetry-sensor-group-group_grpc] sensor path device/base   //Add a sensor path.

[Device-telemetry-sensor-group-group_grpc] sensor path device/boards   //Add a sensor path.

[Device-telemetry-sensor-group-group_grpc] quit

[Device-telemetry] destination-group destination_grpc   //Create a destination group.

[Device-telemetry-destination-group-destination_grpc]

ipv4-address 192.168.16.100 port 50051 vpn-instance mgmt   //Specify the IPv4 address and listening port number of a collector for the destination group. Specify the VPN instance setting as needed.

[Device-telemetry-destination-group-destination_grpc] quit

[Device-telemetry] subscription subscription_grpc   //Create a subscription associated with the sensor group and destination group.

[Device-telemetry-subscription-subscription_grpc]

sensor-group group_grpc sample-interval 60  //Associate the sensor group with the subscription and configure the data collection interval as needed.

[Device-telemetry-subscription-subscription_grpc]

source-address 192.168.12.23  //Specify the source IP address for packets sent to collectors. As a best practice, specify the device management IP address.

[Device-telemetry-subscription-subscription_grpc]

destination-group destination_grpc  //Associate the destination group.

[Device-telemetry-subscription-subscription_grpc] quit

¡     Configure event-triggered data collection:

[Device] telemetry   //Enter telemetry view.

[Device-telemetry] sensor-group group_grpc  //Create a sensor group.

[Device-telemetry-sensor-group-group_grpc]

sensor path tcb/tcbpacketinfoevent   //Add a sensor path.

[Device-telemetry-sensor-group-group_grpc]

sensor path telemetryftrace/genevent   //Add a sensor path.

[Device-telemetry-sensor-group-group_grpc] quit

[Device-telemetry] destination-group destination_grpc   //Create a destination group.

[Device-telemetry-destination-group-destination_grpc]

ipv4-address 192.168.16.100 port 50051 vpn-instance mgmt   //Specify the IPv4 address and listening port number of a collector for the destination group. Specify the VPN instance setting as needed.

[Device-telemetry-destination-group-destination_grpc] quit

[Device-telemetry] subscription subscription_grpc   //Create a subscription associated with the sensor group and destination group.

[Device-telemetry-subscription-subscription_grpc]

sensor-group group_grpc  //Associate the sensor group. You do not need to configure the interval for event-triggered data collection.

[Device-telemetry-subscription-subscription_grpc]

source-address 192.168.12.23  //Specify the source IP address for packets sent to collectors. As a best practice, specify the device management IP address.

[Device-telemetry-subscription-subscription_grpc]

destination-group destination_grpc  //Associate the destination group.

[Device-telemetry-subscription-subscription_grpc]

quit

5.     Add collected devices and associate them with collectors.

Navigate to the Analytics > Data Collection > Telemetry > gRPC page. Select the collected devices, and click Add. On the page that opens, select collectors for the selected devices, and click Apply to deploy the configuration to devices.

Figure 7 Adding collected devices and associating them with collectors

 

CAUTION

CAUTION:

·     When you configure a destination group, the IP address is the southbound passive collection VIP and the port number is 50051. If the interface connecting the device to the analyzer is bound to a VPN instance, you must specify the vpn-instance parameter after the collector address of the destination group. Otherwise, you do not need to specify the parameter.

·     For non-event-triggered data collection, as a best practice, set the collection interval to one minute. You can adjust the collection interval as needed according to the display accuracy requirements.

 

Set the time zone and system time on the network device

1.     View the analyzer time.

# Use the date command to view the analyzer time.

[root@sa ~]# date

Sat Aug 13 14:27:51 CST 2022

2.     Set the time zone.

# Set the name of the time zone to bj. Use the time zone of the analyzer as the time zone of the network device.

[Device] clock timezone bj add 08:00:00

3.     Set the system time.

# Set the system time to 14:53:00 2021/12/08.

[Device] clock protocol none

[Device] quit

<Device> clock datetime 14:53:00 2021/12/08

4.     Verify the time zone and system time settings.

# Display the system time and date when the time zone and system time are specified.

<Device> system-view

[Device] display clock

14:53:13.271 bj Wed 12/08/2021

Time Zone : bj add 08:00:00
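Keeping the device clock aligned with the analyzer matters for correlating collected data. The check can be sketched as a small helper that computes the drift between the two clock readings; the helper and its time format are assumptions for illustration, not a product API:

```python
from datetime import datetime

def clock_drift_seconds(analyzer_time, device_time, fmt="%H:%M:%S %m/%d/%Y"):
    """Return the absolute drift in seconds between two clock readings
    given in the same (assumed) format."""
    a = datetime.strptime(analyzer_time, fmt)
    d = datetime.strptime(device_time, fmt)
    return abs((a - d).total_seconds())
```

If the drift is more than a few seconds, repeat the clock configuration above, or consider configuring a common NTP source for the analyzer and the devices.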

Add network assets

 

NOTE:

You can import assets by using multiple methods, including synchronization from the controller, manual addition, import from Excel, and import from Unified Platform. You can use one or more of these methods to add network assets.

 

Synchronize assets from the controller

1.     To add a controller connection, navigate to the Analysis > Analysis Options > Resources > Assets > Data Sources page, click Add, enter basic information on the page that opens, and click OK.

¡     Name: Enter the name of a controller, a string of up to 36 characters.

¡     Type: Controller.

¡     Scene: DC.

¡     Username: Enter the administrator account admin. For more information, see the controller deployment guide.

¡     Password: Enter the password Pwd@12345 for the administrator account admin.

¡     IP: Northbound service VIP.

¡     Port: Port number in the URL for logging into the system. When the protocol is HTTP, the default port number is 30000. When the protocol is HTTPS, the default is 30443.

¡     HTTPS: To log in through HTTP, turn off this option. To log in through HTTPS, turn on this option.

Figure 8 Configuring data sources on the controller

 

2.     To import logical areas, navigate to the Analysis > Analysis Options > Resources > Areas > Logical Areas page, and click Import Areas. Select Import from Controller, and wait until the areas are successfully imported. The logical areas imported from the controller correspond to the fabrics of the controller.

Figure 9 Importing logical areas

 

3.     To import assets, navigate to the Analysis > Analysis Options > Resources > Assets > Asset List page, and click Import Assets. Select Import from Controller, and wait until the assets are successfully imported.

Figure 10 Importing assets

 

Manually add assets

Navigate to the Analysis > Analysis Options > Resources > Assets page, and click Add Asset. On the page that opens, configure the following parameters:

·     Asset Type: Network Device.

·     Device Category: Switch.

·     Asset Name: Required. Enter a string of up to 100 characters. Only letters, Chinese characters, digits, hyphens (-), underscores (_), tildes (~), and dots (.) are allowed.

·     IP Address: Required. Enter an IPv4 or IPv6 address.

·     Scenario: Required. Select DC from the dropdown list.

Click Save. After the asset is added, the system automatically obtains the other information about the device.

 

 

NOTE:

·     The Asset Type, Device Category, Asset Name, IP Address, and Scenario fields are required, and the other fields are optional. After an asset is added, the other information about the device is automatically obtained by the system.

·     You can manually add security device assets in the same way network device assets are added.

 

Figure 11 Adding assets

 

Import assets from Excel

Navigate to the Analysis > Analysis Options > Resources > Assets > Asset List page. Download the Excel template, enter data in the template as instructed, and select to import assets from the Excel file.

Figure 12 Importing assets from Excel

 

Import assets from UC

Navigate to the Analysis > Analysis Options > Resources > Assets > Asset List page, and click Import Assets. Select Import from UC, and wait until the assets are successfully imported.

Figure 13 Importing assets from UC

 

Configure protocol templates

Add an SNMP protocol template

Navigate to the Analysis > Analysis Options > Common Collector > SNMP page, and click Add. In the dialog box that opens, configure the following parameters:

·     Template Name: Enter a template name, a string of up to 32 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

·     Version: The version is v2c by default. Available options include v2c and v3.

·     Read-Only Community Name: Enter the read-only community name, which must be the same as that on the device.

·     Read-Write Community Name: Enter the read-write community name, which must be the same as that on the device.

·     Port Info: SNMP protocol port. The default is 161.

·     Timeout (sec): Enter the SNMP data request timeout period, an integer in the range of 1 to 60 seconds. The default is 4.

·     Retries: Enter the SNMP data request retries, an integer in the range of 1 to 20. The default is 3.
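Together, the timeout and retries bound how long the collector can wait on an unresponsive device: the initial request plus each retry waits a full timeout, so the defaults (4 seconds, 3 retries) allow up to 4 × (3 + 1) = 16 seconds per request. A minimal sketch of that arithmetic (the function name is illustrative, not part of the product):

```python
def snmp_worst_case_wait(timeout_sec: int, retries: int) -> int:
    """Upper bound, in seconds, that one SNMP request can take:
    the initial attempt plus each retry waits a full timeout."""
    if not 1 <= timeout_sec <= 60:
        raise ValueError("timeout must be in the range of 1 to 60 seconds")
    if not 1 <= retries <= 20:
        raise ValueError("retries must be in the range of 1 to 20")
    return timeout_sec * (retries + 1)

# With the defaults (timeout 4 s, retries 3), an unreachable device
# can stall a poll for up to 16 seconds.
print(snmp_worst_case_wait(4, 3))
```

Keep this bound in mind when raising the timeout or retries for slow devices, because it directly stretches the collection cycle.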

Figure 14 Creating a protocol template

 

Add an SNMP collection template

The system predefines a collection template that applies to most devices and meets basic network analysis requirements. If you have special requirements, you can define your own SNMP collection template.

Navigate to the Analysis > Analysis Options > Common Collector > SNMP page, and click Add (Clone). In the dialog box that opens, configure the following parameters:

·     Template Name: Enter a template name.

·     Remarks: Optional. This field describes the purpose and feature of the template.

·     Collected Metrics: Select collected metrics. You can edit the collection interval for a collected metric as needed.

Figure 15 Adding (cloning) an SNMP protocol template

 

Add a NETCONF protocol template

Navigate to the Analysis > Analysis Options > Resources > Protocol Templates > NETCONF page, and click Add. In the dialog box that opens, configure the following parameters:

·     Template Name: Enter a template name, a string of up to 32 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

·     Username: Enter the username of the NETCONF service. Make sure it is the same as that on the device.

·     Password: Enter the password for the NETCONF service. Make sure it is the same as that on the device.

·     Protocol: Connection protocol of the NETCONF service. Use the default, SSH.

·     Port Info: Port number of the NETCONF service. The default is 830.

·     Access Path: URL path of the NETCONF request. Use the default path.

Figure 16 Creating a NETCONF protocol template

 

Add a NETCONF collection template

The system predefines the following two collection templates for the DC scenario:

·     General DC template—This template is applicable to most devices, and provides collected data with medium granularity in the DC scenario.

·     Advanced DC template—This template is applicable to most devices, and provides collected data with high granularity in the DC scenario.

If you have special requirements, you can self-define a NETCONF collection template.

Navigate to the Analysis > Analysis Options > Common Collector > NETCONF page, and click Add (Clone). In the dialog box that opens, configure the following parameters:

·     Template Name: Enter a template name.

·     Remarks: Optional. This field describes the purpose and feature of the template.

·     Collected Metrics: Select collected metrics. You can edit the collection interval for a collected metric as needed.

Figure 17 Adding (cloning) a NETCONF collection template

 

Set the protocols

Set the SNMP template

Navigate to the Analysis > Analysis Options > Resources > Assets > Asset List page. Select assets and then click Set Access Parameters > SNMP Template Settings to set the SNMP template.

Figure 18 Setting the SNMP template for the asset list

 

In the dialog box that opens, select the SNMP protocol template and collection template.

Figure 19 SNMP template settings

 

Set the NETCONF template

Navigate to the Analysis > Analysis Options > Resources > Assets > Asset List page. Select assets and then click Set Access Parameters > NETCONF Template Settings to set the NETCONF template.

Figure 20 Setting the NETCONF template for the asset list

 

In the dialog box that opens, select the NETCONF protocol template and collection template.

Figure 21 NETCONF protocol template settings

 

Figure 22 NETCONF collection template settings

 

Set syslog

Navigate to the Analysis > Analysis Options > Resources > Assets > Asset List page. Select assets and then click SYSLOG > Enable to enable SYSLOG collection for analyzing cases in the issue center.

Figure 23 Enable SYSLOG collection



 


Configure network health

The network health page displays network health from multiple perspectives, including devices, boards, chips, interfaces, transceiver modules, links, and queues. This page displays the overall health trend of network devices, current state of network devices, and the list of network devices in the system.

Configuration workflow

Figure 24 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure basic network settings

See “Procedure.” Complete the network device configuration, network asset addition, protocol template settings, and protocol settings.

Start parsing tasks on the analyzer

Configure DeviceResource parsing

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the DeviceResource task.

Figure 25 DeviceResource parsing task

 

Configure FlinkNetConf parsing

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the FlinkNetConf parsing task.

Figure 26 FlinkNetConf parsing task

 

Configure the device health task

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the health analysis task.

Figure 27 Health analysis task

 

Configure IfKpiGrpc parsing

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the IfKpiAnalysis parsing task. This parsing task displays the device index data in port monitoring of network analysis, including transceiver module, interface error packet and packet loss, and link information.

Figure 28 IfKpiAnalysis parsing task

 

Configure NodeKpiGrpc parsing

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the NodeKpiGrpc parsing task. This task analyzes the device CPU and memory information in network analysis.

Figure 29 NodeKpiAnalysis parsing task

 

Configure grpcAnalysis task

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the grpcAnalysis parsing task. This parsing task analyzes the RoCE network analysis data.

Figure 30 grpcAnalysis parsing task

 

Configure the buffermonitor flow processing task

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the BufferMonitorAnalysis parsing task. This parsing task displays buffer monitoring information in network analysis.

Figure 31 BufferMonitorAnalysis parsing task

 

Configure the SNMP trap parsing task

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the SNMPTrapParse parsing task.

Figure 32 SNMPTrapParse parsing task

 

Configure the device control plane connectivity flow processing task

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the device control plane connectivity flow processing task.

Figure 33 Device control plane connectivity flow processing task

Verify the configuration

1.     Navigate to the Analysis > Health Analysis > Network Analysis > Network Health page. View the overall health of the network, including health trend, network health, and network device list.

Figure 34 Net health

 

2.     Click a device name in the network device list. On the device details page that opens, you can see the device health trend, connection topology, device statistics, packet loss, cache monitoring, and port index monitoring information, as well as the trend information.

Figure 35 Connection topology information

 

Figure 36 Statistics information

 

Figure 37 Packet loss information

 

Figure 38 Cache monitoring information

 

Figure 39 Port index monitoring information

 

Restrictions and guidelines

None.

 

 


Configure health summary

The health summary page displays the running state of network-wide devices, and allows you to drill down to view device details.

Configuration workflow

Figure 40 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure basic network settings

See “Procedure.” Complete the network device configuration, network asset addition, protocol template settings, and protocol settings.

Starting parsing tasks on the analyzer

See “Start parsing tasks on the analyzer.”

Obtain topology

Navigate to the Analysis > Health Analysis > Health Overview > Topo > Physical Topo page. Click the Obtain Topology button in the lower right corner of the topology to obtain the topology map.

Figure 41 Obtaining topology

 

Configure traffic heatmap

Configure the topology settings

1.     Navigate to the Analysis > Health Analysis > Health Overview > Topo > Physical Topo page. Click the Obtain Topology button in the lower right corner of the topology to obtain the topology map.

2.     In the dialog box that opens, configure the following parameters:

¡     Level-1: Enter the level-1 alarm threshold for the topology link bandwidth usage. When the threshold is exceeded, the traffic heatmap becomes red.

¡     Level-2: Enter the level-2 alarm threshold for the topology link bandwidth usage. When the threshold is exceeded, the traffic heatmap becomes yellow.

3.     Click OK to save the configuration.
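The two thresholds partition link bandwidth usage into three bands: above level-1 the link is painted red, between the two thresholds it is painted yellow, and below level-2 it is shown as normal. A minimal sketch of that mapping, assuming the level-1 threshold is the higher of the two (the actual product logic may differ):

```python
def heatmap_color(usage_pct: float, level1: float, level2: float) -> str:
    """Map link bandwidth usage (percent) to a traffic heatmap color.
    Assumes level1 (red band) is higher than level2 (yellow band)."""
    if level1 <= level2:
        raise ValueError("level-1 threshold must exceed level-2 threshold")
    if usage_pct > level1:
        return "red"      # level-1 alarm threshold exceeded
    if usage_pct > level2:
        return "yellow"   # level-2 alarm threshold exceeded
    return "normal"       # below both thresholds

print(heatmap_color(95, level1=90, level2=70))  # red
```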

Figure 42 Configuring the thresholds for the topology link bandwidth usage

 

 

NOTE:

To reset the thresholds for the topology link bandwidth usage, click Reset.

 

Enabling the traffic heatmap

Navigate to the Analysis > Health Analysis > Health Summary > Topology Overview page. Click the traffic heatmap button in the lower right corner of the topology.

Figure 43 Enabling the traffic heatmap

 

Verify the configuration

Navigate to the Analysis > Health Analysis > Health Summary > Topology Overview page. The page displays the physical topology of the whole network, as well as the health of each device and the status of each link. By clicking the tools at the bottom, you can perform operations on the topology, such as zooming in, zooming out, saving, enabling or disabling the traffic heatmap, and configuring link settings.

Figure 44 Topology of the whole network

 

Restrictions and guidelines

None.

 


Configure packet loss analysis

Packet loss analysis supports TCB packet loss analysis and MOD packet loss analysis in the current software version.

·     TCB—A technique that monitors dropped packets of queues through memory management units (MMUs). After you enable TCB, the system continuously monitors queues. When a packet is dropped in a queue, the system collects the packet drop time, packet drop reason, and original data of the dropped packet, and reports the information to the NMS or analysis system through gRPC. Then, the network administrator can learn the packet drop event on the device.

·     MOD—A technique used to monitor the packets dropped during the internal forwarding process in the device. When a packet is dropped within a device, the packet drop time, packet drop reason, and dropped packet characteristics are immediately recorded and sent to the NMS or analysis system. Then, the network administrator can learn the packet drop event within the device.

 


CAUTION:

The packet loss analysis configuration and other configuration might be mutually exclusive. For more information, see "Restrictions and guidelines."

To configure TCB and MOD, you must enable the global settings on the switch. The configuration might affect switch performance. As a best practice, verify that these settings can be enabled on the device before you perform the configuration.

 

Configuration workflow

Figure 45 Configuration workflow

 

Network diagram

See “Network configuration.” Enable TCB and MOD on H3C switches as needed. For the H3C switches that support TCB and MOD, see "Restrictions and guidelines." This section uses device leaf1 as an example.

Procedure

Configure basic network settings

See “Procedure.” Complete the network device configuration, network asset addition, protocol template settings, and protocol settings.

Configure device settings

Configure TCB

1.     Create advanced IPv4 ACL 3001, and configure a rule to permit IP packets from source IP address 192.168.1.1.

<Device> system-view

[Device] acl advanced 3001

[Device-acl-ipv4-adv-3001] rule permit ip source 192.168.1.1 0

[Device-acl-ipv4-adv-3001] quit

You can configure rules to match packets by source IP address, destination IP address, or both, or to match all packets.

2.     Configure TCB:

# Enable TCB for packets matching ACL 3001 in the outbound direction of queue 1 globally. Do not perform local analysis for captured data packets. Set the queue length above which packet capturing will be started to 10000 bytes. Set the queue length below which packet capturing will be stopped to 5000 bytes. Set the number of packets to be captured before the TCB state machine moves to the frozen state to 1000. Set the capture timer for moving the TCB state machine to the frozen state to 500 microseconds. Set the number of packets captured in the pre-trigger state to 10. Set the number of packets captured in the post-trigger state to 10. Set the number of times that data is reported per minute to 600.

[Device] buffer transient-capture global egress enable acl 3001 start-threshold 10000 stop-threshold 5000 frozen-number 1000 frozen-timer 500 pre-sample-rate 10 post-sample-rate 10 poll-frequency 600

[Device] buffer transient-capture global egress enable
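The start and stop thresholds form a hysteresis band: capturing starts when the queue length rises above 10000 bytes and stops only after it falls below 5000 bytes, so the capture state does not flap around a single threshold. A simplified sketch of that state machine (an illustration of the thresholding behavior, not the device implementation):

```python
class TcbCaptureState:
    """Hysteresis controller modeled on the TCB start/stop
    thresholds (queue length in bytes)."""

    def __init__(self, start_threshold: int, stop_threshold: int):
        assert start_threshold > stop_threshold
        self.start = start_threshold
        self.stop = stop_threshold
        self.capturing = False

    def update(self, queue_len: int) -> bool:
        if not self.capturing and queue_len > self.start:
            self.capturing = True       # queue built up: start capturing
        elif self.capturing and queue_len < self.stop:
            self.capturing = False      # queue drained: stop capturing
        return self.capturing

tcb = TcbCaptureState(start_threshold=10000, stop_threshold=5000)
# Queue grows past 10000, then drains below 5000.
print([tcb.update(q) for q in (2000, 12000, 8000, 4000)])
```

Note that at 8000 bytes the capture stays on even though the queue is below the start threshold, which is exactly the flap suppression the two thresholds provide.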

For how to configure gRPC, see “Configuring gRPC.”

To send TCB packet loss information to the analyzer through gRPC, configure the tcb/tcbpacketinfoevent sensor path.

Configure MOD

1.     Enable and configure MOD:

# Enter MOD view.

[Device] telemetry mod    

[Device-telemetry-mod] reason-list all   //Configure the list of packet drop reasons monitored by MOD. In the current software version, the drivers support eight packet drop reasons.

[Device-telemetry-mod] device-id 2.1.1.11  //Configure the device ID for MOD as the loopback interface address

[Device-telemetry-mod] sampler samp   //Enable sampling for MOD

[Device-telemetry-mod] transport-protocol grpc   //Configure the transmission protocol as gRPC (use gRPC to report the packet drop reason alarm packets)

[Device-telemetry-mod] quit

# Create a sampler.

[Device] sampler samp mode random packet-interval n-power 4    //Sample at a rate of 1/2^4, that is, one packet out of every 16 packets

# Create a flow group in simple MOD mode and enter its view.

[Device] telemetry flow-group 1 mode simple-mod //Specify an ACL. The flow group takes effect only on packets matching the ACL.

[Device-telemetry-flow-group-1] template source-ip destination-ip source-port destination-port //Configure a flow entry generation rule

[Device-telemetry-flow-group-1] quit

# Enable a flow group.

[Device] telemetry apply flow-group 1

# Set the flow entry aging time to 10 minutes.

[Device] telemetry flow-group aging-time 10
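The sampler's n-power value is an exponent: a value of n gives a sampling rate of 1/2^n, so the n-power 4 sampler above reports one packet out of every 16. A quick check of that arithmetic:

```python
def sampling_rate(n_power: int) -> float:
    """Sampling rate for 'sampler ... packet-interval n-power n':
    one packet sampled out of every 2**n packets."""
    return 1 / (2 ** n_power)

print(sampling_rate(4))  # 0.0625, i.e., one packet out of 16
print(sampling_rate(0))  # 1.0, i.e., every packet is sampled
```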

2.     Enable and configure gRPC.

For how to configure gRPC, see “Configuring gRPC.”

To send MOD packet loss information to the analyzer through gRPC, configure the telemetryftrace/genevent sensor path.

 


CAUTION:

The TCB and MOD configurations can only be deployed manually, and cannot be deployed through the controller.

 

Configure applications

Navigate to the Analysis > Analysis Options > Global Configuration > Application Configuration page. Click Create. In the dialog box that opens, configure the following parameters to create a new user-defined application:

·     Name: Assign a name to the user-defined application. This field is required. An application name is of up to 36 characters. Only letters, Chinese characters, digits, and underscores (_) are allowed.

·     Protocol: Communication protocol used by the application. Available options include TCP, UDP, and ANY. The default is TCP.

·     Server IP Addresses: One or multiple IP addresses of servers that provide services for an application.

·     Server Ports: Communication ports used by servers that provide services for an application.

·     Other parameters are optional.

Figure 46 Application configuration

 

Start parsing tasks on the analyzer

See “Starting parsing tasks on the analyzer.”

Verify the configuration

Navigate to the Analysis > Health Analysis > Network Analysis > Network Health > Overview page. Click a name in the device list to enter the device details page. You can see the packet loss information and cached queue packet loss information on the page.

·     In the packet loss section, you can see the packet loss reason and the number of applications associated with the packet loss. By selecting a time point in the matrix, you can view the specific application related to the packet loss reason at the time point on the right side.

Figure 47 Packet loss information

 

·     On the cache monitoring page, click the packet loss details tab to view the cached queue packet loss trend (sampled data, not the actual packet loss counts).

Figure 48 Cached queue packet loss information

 

Restrictions and guidelines

TCB and MOD are supported only on the H3C S6850, S6825, S6805, and S9850 switch series. MOD packet loss analysis is mutually exclusive with INT and Telemetry Stream. They cannot take effect at the same time.

 


Configure change analysis

The change analysis page displays the history snapshot comparison statistics and details of network devices. By default, the snapshot comparison statistics within the last 24 hours are displayed. To select a time span, use the time selector. You can view data within a maximum time span of the last 30 days.
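Conceptually, each comparison is a key-by-key diff between two snapshots of a device: items that appear, disappear, or change value between snapshots become change records. A simplified sketch (the snapshot fields are invented for illustration):

```python
def diff_snapshots(old: dict, new: dict) -> dict:
    """Return the added, removed, and changed items between two
    device snapshots, each given as a flat key/value mapping."""
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return {"added": added, "removed": removed, "changed": changed}

old = {"version": "R6555", "vlan10": "up"}
new = {"version": "R6616", "vlan10": "up", "vlan20": "up"}
print(diff_snapshots(old, new))
```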

Configuration workflow

Figure 49 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure basic network settings

See “Procedure.” Complete the network device configuration, network asset addition, protocol template settings, and protocol settings.

Start parsing tasks on the analyzer

See “Starting parsing tasks on the analyzer.”

Verify the configuration

Navigate to the Analysis > Health Analysis > Network Analysis > Change Analysis page to view the change analysis.

·     The Change Analysis page displays the distribution of devices that have changes and statistics of network changes from the device and change item perspectives.

Figure 50 Change analysis

 

·     In the changed device list, expand the device details to view the configuration, entry, and version change information. Click a change item to view its details, including the location information for the change.

Figure 51 Changed device list

 

 

Figure 52 Change details

 

Restrictions and guidelines

None.

 


Configure issue center (problem center)

The issue center page displays fault statistics throughout the network within the selected time span. Additionally, you can switch between tabs to view the fault information by device, network, protocol, overlay, service, and application.

Configuration workflow

Figure 53 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure network devices

See “Configuring network devices.”

Manage assets

Import assets

See “Add network assets.”

Set the protocols

See “Configuring protocol templates” and “Setting the protocols.”

Enable syslog

See “Setting syslog.”

Start parsing tasks on the analyzer

Start the network health parsing task

See “Starting parsing tasks on the analyzer.”

Start the issue center parsing tasks

Navigate to the Analysis > Analysis Options > Task Management > Analysis Tasks page. Start the issue center Java, issue center, issue center gRPC, issue center monitoring, and issue center alarm tasks.

Figure 54 Issue center parsing tasks

 

Verify the configuration

Navigate to the Diagnosis Analysis > Problem Center page. View the information displayed in the issue center.

1.     The overview page displays the issue overview. The issue list displays the level, name, faulty object, event status, status, and time and duration of the issues. The device and network tabs display the detailed issue cases by category.

2.     Click to expand an issue to view the root cause, impact analysis, and detailed procedure of the issue, as well as the processing recommendations.

3.     Click the button in the Actions column in the issue list to collaborate with the controller to issue the closed-loop operation plan (supported for certain issues).

4.     After being acknowledged and processed, the issue is moved to the history issue list.

Figure 55 Problem center

 

Figure 56 Issue details

 

Figure 57 Issues by category

 

Restrictions and guidelines

The flow analysis-related faults are pushed from the TCP flow analysis service to the issue center. For detailed configuration, see “Configuring TCP flow analysis.”


Configure switch logins

The switch logins widget records the number of successful and failed logins within the selected time span.

Configuration workflow

Figure 58 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure network devices

See “Configuring network devices.”

Manage assets

Import assets

See “Add network assets.”

Set the protocols

See “Configuring protocol templates” and “Setting the protocols.”

Enable syslog

See “Setting syslog.”

Add a widget

Navigate to the Analysis > Network Analysis > Net Health > Overview page. Add the widget named Network Device Logins.

Figure 59 Adding a widget

 

Verify the configuration

Navigate to the Analysis > Network Analysis > Net Health > Overview page. The widget named Network Device Logins displays the logins of switches.

Figure 60 Network device logins

 

Restrictions and guidelines

None.

 

 


Configure intent verification

Intent verification displays the results for verifying the consistency, existence, isolation, and reachability of intents, verification records, completeness of generated network models, and verification change trends.

Configuration workflow

Figure 61 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure network devices

See “Configuring network devices.”

Manage assets

Import assets

See “Add network assets.”

Set the protocols

See “Configuring protocol templates” and “Setting the protocols.”

Configure intent verification settings

Configure verification tasks

This configuration allows you to periodically verify enabled intents in specific fabrics and set the verification intervals.

Navigate to the Diagnosis Analysis > Intent Verification page, and click Verification Tasks. In the dialog box that opens, enable the intents in specific fabrics and set the verification intervals.

Figure 62 Verification task settings

 

Enable intents

After enabling specific intents, you can verify them in verification tasks.

·     To enable multiple intents in bulk, select the intents, and then click Bulk Enable.

·     To enable a specific intent, turn on the option for the intent on the Enable column.

Figure 63 Enabling intents

 

Add and edit custom intents

Custom intents refer to reachability intents and isolation intents.
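Reachability and isolation intents are complementary checks over the network model: a reachability intent asserts that a path exists between two endpoints, and an isolation intent asserts that none does. A pure-Python sketch of such a check over an adjacency-list model (the model format and field names are illustrative, not the analyzer's internal representation):

```python
from collections import deque

def reachable(graph: dict, src: str, dst: str) -> bool:
    """BFS over an adjacency-list network model."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def verify_intent(graph: dict, intent: dict) -> bool:
    """Reachability passes when a path exists; isolation passes when none does."""
    ok = reachable(graph, intent["src"], intent["dst"])
    return ok if intent["type"] == "Reachability" else not ok

model = {"vm1": ["leaf1"], "leaf1": ["spine1"], "spine1": ["leaf2"],
         "leaf2": ["vm2"], "vm3": []}
print(verify_intent(model, {"type": "Reachability", "src": "vm1", "dst": "vm2"}))  # True
print(verify_intent(model, {"type": "Isolation", "src": "vm3", "dst": "vm1"}))     # True
```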

Navigate to the Diagnosis Analysis > Intent Verification page, and click Add. Configure the following parameters:

·     Type: Select Reachability or Isolation.

·     Name: Enter the name of the intent.

·     Fabric: Select a fabric for the intent.

Figure 64 Add an intent

 

Verify the configuration

Navigate to the Diagnosis Analysis > Intent Verification page to view the intent verification result.

·     The intent verification page displays the summary data for intent verification, as well as the history trend and current intent list.

·     The intent list allows you to add and delete intents, set verification intervals, and batch enable intents.

·     The verification records allow you to view the intent verification records and the associated intents.

·     The network model records the history network snapshot information. You can click a snapshot to view its details.

·     In the issue report settings, you can specify whether to report issues (to the issue center).

Figure 65 Intent verification result

 

·     The intent list displays the predefined intents and self-defined intents. Click an intent state to enter the network-wide preset verification page and view the intent details.

Figure 66 Intent verification details

 

Restrictions and guidelines

None.


Configure TCP flow analysis

The analyzer collects all TCP session control packets forwarded by network devices in the data center, and analyzes TCP behavior from the fabric, host, application, and session perspectives. It also allows you to configure the relevant threshold and rule settings. TCP control packets are collected from switches through ERSPAN or Telemetry Stream.

Configuration workflow

Figure 67 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure device settings


IMPORTANT:

·     Before performing this task, configure network devices, add network assets, and configure the protocol template and protocol settings.

·     Configure either ERSPAN or Telemetry Stream on the device side.

 

Configure ERSPAN (controller deployment)

1.     Add a collector to the controller

Navigate to the Automation > Common Service Settings > Telemetry > Collectors page. Click Add and configure the following parameters:

¡     Name: Enter the name of the collector, a string of up to 255 characters.

¡     IP Address: Enter the floating IP address 11.1.1.2 of the collector, used for collection through ERSPAN, INT, or Telemetry Stream. INT uses port 5555 and Telemetry Stream uses port 9995 for flow analysis data collection. The collector setting is required only when flow analysis is enabled (collector settings are deployed through the controller).

¡     Port Number: Enter 5555 as the port number for data collection through INT, and 9995 as the port number for data collection through Telemetry Stream.

Figure 68 Adding a collector

 

2.     Configure remote mirroring

Navigate to the Analysis > Data Collection > Telemetry > Remote Mirroring page, click Add and configure the following parameters:

¡     Name: Enter a remote mirroring task name.

¡     Switching Device Name: Specify the name of the mirrored device.

¡     Collector Name: Specify the collector name created in step 1.

¡     Interface Name: Not required.

Figure 69 Configuring remote mirroring

 

3.     Configure ERSPAN settings (manual configuration).

When no controller is available, you can perform manual configuration. Skip this step if the configuration has been deployed by the controller.

Perform the following configuration on Leaf 11.

Create an ACL.

[Device] acl advanced name acl_test

[Device-acl-ipv4-adv-acl_test] rule 0 permit tcp syn 1

[Device-acl-ipv4-adv-acl_test] rule 5 permit tcp fin 1

[Device-acl-ipv4-adv-acl_test] rule 10 permit tcp rst 1

[Device-acl-ipv4-adv-acl_test] quit

Create a traffic class.

[Device] traffic classifier cla_test operator and

[Device-classifier-cla_test] if-match acl name acl_test

[Device-classifier-cla_test] quit

Create a traffic behavior.

[Device] traffic behavior be_test

[Device-behavior-be_test] mirror-to interface destination-ip 11.1.1.2 source-ip 192.168.12.23 //11.1.1.2 is the collector floating IP address. 192.168.12.23 is the device management IP address.

[Device-behavior-be_test] quit

Create a QoS policy.

[Device] qos policy policy_test

[Device-qospolicy-policy_test] classifier cla_test behavior be_test

Globally apply the QoS policy to the incoming traffic.

[Device-qospolicy-policy_test] quit

[Device] qos apply policy policy_test global inbound

Display QoS policies applied globally (for illustration only).

[Device] display qos policy global
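The ACL in this sequence mirrors only TCP control packets (SYN, FIN, and RST), which is enough for the analyzer to reconstruct session setup and teardown without copying payload traffic. A sketch of the equivalent flag test, using the standard TCP header flag bit values:

```python
# TCP flag bits as defined in the TCP header
FIN, SYN, RST = 0x01, 0x02, 0x04

def is_control_packet(tcp_flags: int) -> bool:
    """True if the packet would match rules 0/5/10 of acl_test,
    that is, if any of the SYN, FIN, or RST flags is set."""
    return bool(tcp_flags & (SYN | FIN | RST))

print(is_control_packet(0x02))  # SYN: mirrored
print(is_control_packet(0x12))  # SYN+ACK: mirrored (SYN bit is set)
print(is_control_packet(0x10))  # pure ACK: not mirrored
```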

Figure 70 Displaying QoS policies applied globally

 

 

NOTE:

·     The deployed ERSPAN configuration varies by device role. The main difference is whether to match the flag bit in the TCP packets with VXLAN encapsulation.

·     If the collector is attached to an M-LAG device (only one M-LAG device can be attached currently), you need to first configure remote mirroring for the two M-LAG devices. Then perform automatic onboarding of the M-LAG devices (the IPL fail-permit settings are deployed by default). The VLAN interface for IPL fail-permit is used as the output interface in the route from the M-LAG device not attached to the collector to the network adapter of the collector. To select a remote mirroring interface for the device, use the IPL aggregate interface of the device to the peer M-LAG device.

 

Configure Telemetry Stream (controller deployment)

1.     Similar to ERSPAN, before enabling the Telemetry Stream function for the switch through the controller, you need to configure collector settings on the controller. For more information, see "Configure gRPC."

2.     Configure Telemetry Stream settings through the controller.

Navigate to the Automation > Common Service Settings > Telemetry > Telemetry Stream page, click Add, and configure the following parameters:

¡     Name: Enter a task name.

¡     Switching Device Name: Specify the name of the device for data collection.

¡     Source IP Address: Specify the source IP address for sending telemetry stream packets, which is the loopback interface address.

¡     Source Port: Specify the source port for sending telemetry stream packets, which is fixed at 12.

¡     Collector Name: Specify the collector name created in step 1.

¡     Sampling Rate: Specify the sampling rate, which is 1/2 to the nth power. As a best practice, set the value to 0 (sample all packets). For example, if you set the value to 2, the sampling rate is 1/4.

¡     Service Loopback Group Member Interface: Specify an idle interface on the switch. Selecting an interface removes its existing settings.

¡     Device Interfaces: Specify the interfaces to enable Telemetry Stream.
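The power-of-two sampling scheme above can be expressed as a one-line calculation. The helper below is illustrative only (not part of the product):

```python
# Power-of-two sampling: a configured exponent n means one packet is
# sampled out of every 2^n packets, so n = 0 samples every packet.
def sampling_rate(n: int) -> float:
    """Return the fraction of packets sampled for exponent n."""
    if n < 0:
        raise ValueError("exponent must be non-negative")
    return 1 / (2 ** n)

print(sampling_rate(0))   # 1.0  (recommended: sample all packets)
print(sampling_rate(2))   # 0.25 (one packet in four)
print(sampling_rate(10))  # 0.0009765625 (one packet in 1024)
```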

Figure 71 Adding Telemetry Stream

 

3.     Configure Telemetry stream (manual configuration).

When no controller is available, you can perform manual configuration. Skip this step if the configuration has been deployed by the controller.

# Enable telemetry stream timestamp.

[Device] telemetry stream timestamp enable

# Specify a device ID.

[Device] telemetry stream device-id 192.168.12.23   //Device management IP address

# Specify the source IP address for telemetry stream packets.

[Device] telemetry stream collector source 2.1.1.11 destination 11.1.1.2 source-port 12 destination-port 9995   //2.1.1.11 is the loopback interface address. 11.1.1.2 is the collector floating IP address.

# Create service loopback group 1.

[Device] service-loopback group 1 type telemetry-stream

# Assign a port to the service loopback group.

[Device] interface Twenty-FiveGigE1/0/40

[Device-Twenty-FiveGigE1/0/40] port service-loopback group 1 

[Device-Twenty-FiveGigE1/0/40] quit

# Create a sampler.

[Device] sampler samp_test mode random packet-interval n-power 0

# Create an ACL.

[Device] acl advanced name acl_test

[Device-acl-ipv4-adv-acl_test] rule 0 permit tcp syn 1

[Device-acl-ipv4-adv-acl_test] rule 5 permit tcp fin 1

[Device-acl-ipv4-adv-acl_test] rule 10 permit tcp rst 1

[Device-acl-ipv4-adv-acl_test] rule 15 permit vxlan inner-protocol tcp inner-syn 1

[Device-acl-ipv4-adv-acl_test] rule 20 permit vxlan inner-protocol tcp inner-fin 1

[Device-acl-ipv4-adv-acl_test] rule 25 permit vxlan inner-protocol tcp inner-rst 1

[Device-acl-ipv4-adv-acl_test] quit

# Configure a telemetry stream action on Twenty-FiveGigE 1/0/1. Configure this setting on the traffic collection interface.

[Device] interface Twenty-FiveGigE1/0/1

[Device-Twenty-FiveGigE1/0/1] telemetry stream action 1 acl name acl_test sampler samp_test

[Device-Twenty-FiveGigE1/0/1] quit

# Display telemetry stream configuration.

[Device] display telemetry stream
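The ACL above mirrors only TCP connection setup and teardown packets by matching the SYN, FIN, and RST flag bits. As a rough sketch of what those rules match (flag bit values per RFC 793; the function is illustrative, not device code):

```python
# TCP flag bit positions (RFC 793): FIN=0x01, SYN=0x02, RST=0x04.
# The ACL rules above (syn 1, fin 1, rst 1) match packets in which the
# corresponding bit is set, i.e. connection setup and teardown packets.
FIN, SYN, RST = 0x01, 0x02, 0x04

def matches_acl(tcp_flags: int) -> bool:
    """True if the packet would hit one of the syn/fin/rst permit rules."""
    return bool(tcp_flags & (FIN | SYN | RST))

print(matches_acl(0x02))  # True  (SYN: connection request)
print(matches_acl(0x10))  # False (pure ACK: not mirrored)
```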

Configure collector settings

Add a collector node

This section applies to INT flow analysis, TCP flow analysis, and UDP flow analysis.

Navigate to the Analysis Options > Collector > Collector Parameters page, and click Add Node to configure the following parameters:

·     Host IP: Enter the management IP address of the collector.

·     Username: Enter the username of the collector.

·     Password: Enter the password used for logging in to the collector.

Figure 72 Adding a collector node

 

Configure a cluster

Navigate to the Analysis Options > Collector > Collector Parameters > SeerCollector page, and click Add Cluster to configure the following parameters:

·     Cluster Name: Enter the name of the cluster.

·     Collector Node: Select the added collector node and configure its network settings. For more information, see “Configuring a node.”

·     Collector floating IP address: For more information, see Table 2. Before configuring the collector floating IP address, configure the node. For information about how to configure the node, see "Configuring a node."

Figure 73 Configuring a cluster

 

Configure a node

Click Configuration for a selected node, and then configure the following parameters in the dialog box that opens:

·     Data reporting network port physical IP address: Enter the management IP address of the collector. For more information, see Table 2.

·     Physical IP address of the device management network port: Enter the management IP address of the collector. For more information, see Table 2.

·     PTP clock synchronization network port physical IP address: Enter the management IP address of the collector. For more information, see Table 2.

·     Physical IP address of data collection network port: Enter the IP address of the collector network adapter. For more information, see Table 2.

Figure 74 Configuring a node

 

Configure flow analysis

Configure applications

See “Configuring applications.”

Manage hosts

Navigate to the Analysis Options > Resources > Assets > Host management page, and specify the host discovery scope as needed.

Figure 75 Host discovery scope

 

Configure the application cluster

Navigate to the Analysis Options > Global Configure > Application Cluster Configuration page, and configure the application cluster as needed.

Figure 76 Setting the IP address range for the application cluster

 

Start the parsing task

Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start the TCP flow parsing task.

Figure 77 Starting the TCP flow parsing task

 

Verify the configuration

Navigate to the Health Analysis > Application Analysis > TCP Flow Analysis page to view the TCP flow analysis result and perform statistics analysis from the fabric, host, application, and session perspectives.

·     The fabric summary data includes the number of fabrics, number of hosts, latency, and number of connections. In addition, the system displays the connection establishment trend graph, link latency statistics, inter-fabric session interaction, and fabric information. Click a fabric in the fabric list to view its details, including statistics about the fabric.

·     The host, application, and session pages provide statistics about network traffic from various perspectives, including the top 10 connection establishment failures (failure ratio), application events, and session statistics (session details), as well as prediction data for applications and sessions.

Figure 78 TCP flow analysis

 

Figure 79 Host page

 

Figure 80 Application page

 

Figure 81 Session page

 

Restrictions and guidelines

None.


Configure illegal analysis

By collecting TCP flows from network devices and using the specified traffic interaction compliance rules, illegal analysis identifies illegal TCP flows within the specified time range.

By collecting TCP flows from network devices and using the specified attack session count threshold and connection establishment failure rate threshold settings, SYN flood attack analysis analyzes illegal TCP flows that exceed the thresholds.

Configuration workflow

Figure 82 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure device settings

See “Configuring device settings.”

Configure collector settings

See “Configuring collector settings.”

Configure flow analysis

Configure applications

See “Configuring applications.”

Manage hosts

See “Managing hosts.”

Configure threshold settings

Navigate to the Health Analysis > Flow Analysis > TCP Flow Analysis > Thresholds page, and configure the following parameters:

·     Global Link Latency Anomaly Threshold: The system identifies an anomaly when the global link latency exceeds the threshold.

·     SYN Flood Attack—TCP Connection Establishment Failure Rate Threshold: The system identifies a TCP response anomaly when the TCP connection establishment failure rate reaches or exceeds the threshold.

·     SYN Flood Attack—TCP Connection Establishment Request Rate Threshold: The system identifies a SYN flood attack when the TCP connection establishment request rate on a host reaches or exceeds the threshold.
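To illustrate how the two SYN flood thresholds are evaluated, the sketch below checks a host's statistics against them. The field names and the combined check are illustrative, not the analyzer's internal logic:

```python
# Sketch of the SYN flood threshold checks described above (field and
# parameter names are illustrative, not the analyzer's internal API).
def syn_flood_suspected(syn_requests: int, failed: int, window_seconds: int,
                        request_rate_threshold: float,
                        failure_rate_threshold: float) -> bool:
    """Flag a host when both the connection request rate and the
    connection establishment failure rate reach their thresholds."""
    if syn_requests == 0 or window_seconds == 0:
        return False
    request_rate = syn_requests / window_seconds   # requests per second
    failure_rate = failed / syn_requests           # fraction of failed handshakes
    return (request_rate >= request_rate_threshold
            and failure_rate >= failure_rate_threshold)

# 5000 SYNs in 10 seconds with 90% handshake failures trips both thresholds.
print(syn_flood_suspected(5000, 4500, 10, 100, 0.8))  # True
print(syn_flood_suspected(50, 1, 10, 100, 0.8))       # False
```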

Figure 83 Threshold settings

 

Configure rules

The system uses traffic interaction compliance rules to determine whether TCP flows are illegal. Navigate to the Health Analysis > Flow Analysis > TCP Flow Analysis > Rules page. Click Create Rule and configure the following parameters:

·     Rule Name: Enter a rule name.

·     Interaction Compliance Constraints: Select Deny the source object from accessing the destination object.

·     Source Object (Consumer) IP and Port Info/Destination Object (Provider) IP and Port Info: Click Select Application to select an existing application or customize the IP and port information.
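Conceptually, such a deny rule is a predicate over the flow's source, destination, and destination port. A minimal sketch with a hypothetical data model:

```python
import ipaddress

# Minimal sketch of a traffic interaction compliance rule: deny the
# source object (consumer) from accessing the destination object
# (provider). The data model here is illustrative only.
def flow_is_illegal(src_ip: str, dst_ip: str, dst_port: int,
                    consumer_net: str, provider_net: str,
                    provider_ports: set) -> bool:
    """Return True when a TCP flow matches the deny rule."""
    src_in = ipaddress.ip_address(src_ip) in ipaddress.ip_network(consumer_net)
    dst_in = ipaddress.ip_address(dst_ip) in ipaddress.ip_network(provider_net)
    return src_in and dst_in and dst_port in provider_ports

# A flow from the consumer subnet to the provider's service port
# violates the rule and is reported as illegal.
print(flow_is_illegal("10.1.1.5", "10.2.2.8", 3306,
                      "10.1.1.0/24", "10.2.2.0/24", {3306}))  # True
```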

Figure 84 Configuring rules

 

Configure parsing tasks

1.     Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start the TCP stream parsing task.

Figure 85 Starting the TCP stream parsing task

 

2.     Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start the SynFloodAttack task.

Figure 86 Starting the SynFloodAttack task

 

3.     Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start the illegal traffic analysis task.

Figure 87 Starting the illegal traffic analysis task

Verify the configuration

Navigate to the Health Analysis > Flow Analysis > TCP Flow Analysis > Illegal Analysis page to view the illegal flow analysis result.

·     The illegal traffic analysis page displays the number of sessions with illegal traffic, impacted applications, illegal session trend statistics, illegal host distribution, and heat map of sessions matching specific rules. Select a data point in the heat map to view the rule details, illegal session trend, connection establishment failure ratio, and IP session information of the top 10 illegal sessions.

Figure 88 Illegal analysis

 

Figure 89 Heat map of sessions matching specific rules

 

Figure 90 Drilling down from the heat map

 

·     On the SYNFlood page, you can see the number of hosts under attack, trend graph of hosts under attack, trend graph of applications under attack, distribution of attacked objects, and original issue list. Click an item in the original issue list to view the attacked host details, including the basic information, analysis result, and IP session list.

Figure 91 SYNFlood information 1

 

Figure 92 SYNFlood information 2

 

Figure 93 SYNFlood details

 

Restrictions and guidelines

None.


Configure application health

Configuration workflow

See “Configuration workflow.”

Network configuration

See “Network configuration.”

Procedure

Configure device settings

See “Configure device settings.”

Configure collector settings

See “Configure collector settings.”

Configure settings on the application health page

Configure applications

See "Configure applications."

Configure hosts

See the host management section in "Configure flow analysis."

Start parsing tasks

Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start the TcpStreamParse task.

Figure 94 Starting the TcpStreamParse task

 

Verify the configuration

1.     Navigate to the Health Analysis > Applications Analysis > Applications Health page to view the application health result, including application health trend and top 10 application statistics.

Figure 95 Application health

 

2.     Click an application in the top 10 application statistics to view the detailed index data about the application. On the details page, click an IP address to display session information filtered by source and destination IP addresses on the traffic analysis session page.

Figure 96 Drilling down to the top 10 application details

 


Configure issue analysis

The system provides analysis on network issues and application issues. The Network Issue page displays the statistics of different types of failures that have occurred in the system within the specified time range. It can narrow down the scope to help you locate the devices with the specific failures. The Application Issue page displays the statistics of packet drops and mirroring anomalies on the TCP sessions or MOD devices that have occurred in the system within the specified time range. It can narrow down the scope to help you locate the TCP or MOD sessions with the specific issues.

Configuration workflow

Network issue workflow

For more information, see “Configuration workflow.”

Applications issue workflow

See “Configuration workflow.”

Network configuration

See “Network configuration.”

Procedure

Configure network issues

See “Procedure.”

Configure application issues

For more information, see “Procedure.”

Verify the configuration

Navigate to the Diagnosis Analysis > Issue Analysis page to view the issue analysis result. Issue analysis contains network issues and application issues. The associated statistics are displayed by category.

The Statistics tab of a network issue displays the history trend for the issue. The Affected Scope tab displays the devices affected by the issue. Click a device name to display the time when the issue occurred, as well as detailed device information.

Figure 97 Issue analysis

 

Restrictions and guidelines

None.


Configure UDP flow analysis

UDP flow analysis displays the flow statistics of devices in the system. You can click a pie chart to enter the associated device or session list page to view detailed information.

Configuration workflow

Figure 98 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure device settings

See “Configuring device settings.”

Configure collector settings

See “Configure collector settings.”

Configure flow analysis

See “Starting the parsing task.”

Configure parsing tasks

For more information, see “Starting the parsing task.”

Verify the configuration

Navigate to the Health Analysis > Flow Analysis > UDP Flow Analysis page to view the UDP flow analysis result. UDP flow analysis collects flow statistics by device, source host, and destination host, and displays the device list and session list.

Click a device or host on the radar chart to view details about the device or session in the device list or session list.

Figure 99 UDP flow analysis

 

Figure 100 Device list

 

Figure 101 Session list

 

Restrictions and guidelines

You cannot configure both Telemetry Stream and ERSPAN. On the device, you need to configure an ACL to match the UDP data.

·     If a large amount of UDP data exists, filter the data by source or destination. For example:

[Device] acl advanced name acl_test

[Device-acl-ipv4-adv-acl_test] rule permit udp source 1.1.1.0 0.0.0.255 destination 2.2.2.0 0.0.0.255

Configure the service IP addresses as needed, specifying the source, the destination, or both.

·     When Telemetry Stream is enabled, the ACL must exclude destination UDP port 9995 to avoid multiple mirrorings. For example:

[Device] acl advanced name acl_test

[Device-acl-ipv4-adv-acl_test] rule 0 permit tcp syn 1

[Device-acl-ipv4-adv-acl_test] rule 1 permit tcp ecn 3

[Device-acl-ipv4-adv-acl_test] rule 5 permit tcp fin 1

[Device-acl-ipv4-adv-acl_test] rule 10 permit tcp rst 1

[Device-acl-ipv4-adv-acl_test] rule 15 permit udp destination-port neq 9995

#


Configure INT flow analysis

You can use INT data to obtain the latency and path information of application flows, display the forwarding path of a specific flow in the network, and view the latency and traffic data for each hop.
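For example, per-hop latency can be derived from the per-hop timestamps carried in INT metadata. The record layout below is purely illustrative, not the actual INT metadata format:

```python
# Toy sketch of deriving per-hop and end-to-end latency from INT-style
# per-hop timestamps (nanoseconds). The record layout is illustrative.
def hop_latencies(hops):
    """hops: list of (device, ingress_ts_ns, egress_ts_ns) along the path."""
    per_hop = [(dev, egress - ingress) for dev, ingress, egress in hops]
    end_to_end = hops[-1][2] - hops[0][1]   # last egress - first ingress
    return per_hop, end_to_end

path = [("leaf1", 1000, 1450), ("spine1", 2000, 2300), ("leaf2", 3100, 3500)]
per_hop, total = hop_latencies(path)
print(per_hop)  # [('leaf1', 450), ('spine1', 300), ('leaf2', 400)]
print(total)    # 2500
```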

Configuration workflow

Figure 102 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure INT settings on the device

INT-based traffic monitoring is supported only on H3C S12500G, S6850, and S6805 devices.

Add a collector to the controller

Navigate to the Analytics > Data Collection > Telemetry > Collectors page, and click Add to add a collector. Specify the IP address as the floating IP address of the collector.

Figure 103 Adding a collector

 

Configure INT settings of the entry node

1.     Add an INT service.

Navigate to the Automation > Common Service Settings > Telemetry > INT page, add a device, add the entry node, and configure rule settings for the entry node.

¡     Name: Enter the INT service name.

¡     Switching Device Name: Select a switch by its name.

¡     Source IP Address: Specify the source IP address of the device, which is the loopback interface address of the device.

¡     Source IP Address: Specify source port number 7788.

¡     Collector Name: Select the added collector.

Figure 104 Adding the entry node

 

IMPORTANT:

·     For the exit node interface role, the Source IP Address, Source Port, and Collector Name parameters must all be specified.

·     If no interfaces are available for selection when you add nodes, see "Restrictions and guidelines."

 

2.     Add a node.

Click Add Node to configure the following basic INT parameters:

¡     Name: Enter the INT interface service name.

¡     Interface Name: Select the INT interface (the inbound interface for traffic).

¡     Interface Role: Select the entry node. Options are entry node, transit node, and exit node.

¡     Sampling Rate: Specifies the ratio of copied INT packets to original packets. Random sampling is used, and the sampling rate is 1/2 to the nth power. If you set n to 10, the sampling rate is 1/1024. If you set n to 0, the sampling rate is 100%.

Figure 105 Adding a node

 

3.     Add a rule.

On the basic INT settings page, click Add Rule, and then configure the following parameters:

¡     Rule Name: Enter the rule name.

¡     IP Version: Specify the IP version, IPv4 or IPv6.

¡     Protocol Name: Specify the protocol. Options are TCP, UDP, and ICMP.

Figure 106 Adding a rule

 

The settings deployed through the controller are displayed on the device as follows (you can also manually configure the settings if no controller is available):

# Display the deployed ACL settings.

[Device] display acl name ifa_acl

Advanced IPv4 ACL named ifa_acl, 3 rules,

ACL's step is 5, start ID is 0

 rule 0 permit tcp

 rule 1 permit icmp

 rule 2 permit udp

 

# Display the QoS policy applied to the inbound direction of the interface.

[Device] display qos policy interface

Interface: Twenty-FiveGigE1/0/1

  Direction: Inbound

  Policy: IN_WGE1/0/1

   Classifier: ifa_cla

     Operator: OR

     Rule(s) :

      If-match acl name ifa_acl

     Behavior: ifa_be

      Accounting enable:

        0 (Packets)

      Mirroring:

        Mirror to the ifa-processor sampler ifa_samp vxlan   

# Display interface configuration:

[Device-Twenty-FiveGigE1/0/1] display this

#

interface Twenty-FiveGigE1/0/1

 port link-mode bridge

port link-type trunk

 port trunk permit vlan 1 11 22

 speed 10000

 telemetry ifa role ingress

 qos apply policy IN_WGE1/0/1 inbound

 port link-aggregation group 22

 

# Globally deploy a device ID to identify an INT node.

[Device] telemetry ifa device-id 192.168.12.23   //Device management IP address

# Display globally deployed sampler configuration for traffic mirroring.

[Device] display sampler

 Sampler name: ifa_samp

  Mode: random;  Packet-interval: 10;  IsNpower : Y

Configure INT settings of the transit node

1.     Navigate to the Analytics > Data Collection > Telemetry > INT page, add a device, and add a transit node.

Figure 107 Adding a transit node

 

2.     Click Add Node.

Figure 108 Adding a node

 

The settings deployed through the controller are displayed on the device as follows (you can also manually configure the settings if no controller is available):

# Configure the inbound interface of packets.

[Device] interface Twenty-FiveGigE1/0/1

[Device-Twenty-FiveGigE1/0/1] display this                                             

#                                                                              

interface Twenty-FiveGigE1/0/1                                                

 port link-mode bridge                                                          

 description for_leaf1                                                         

 port access vlan 20                                                           

 speed 10000                                                                    

 telemetry ifa role transit  

#

[Device-Twenty-FiveGigE1/0/1] quit

# Globally deploy a device ID to identify an INT node.

[Device] telemetry ifa device-id 192.168.12.29   //Device management IP address

Configure INT settings of the exit node

1.     Navigate to the Automation > Common Service Settings > Telemetry > INT page, add a device, and add an exit node.

Figure 109 Adding an exit node

 

2.     Click Add Node.

Figure 110 Adding a node

 

The settings deployed through the controller are displayed on the device as follows (you can also manually configure the settings if no controller is available):

# Configure the inbound interface of packets.

[Device] interface Twenty-FiveGigE1/0/1

[Device-Twenty-FiveGigE1/0/1] display this                                              

#                                                                              

interface Twenty-FiveGigE1/0/1                                                

 port link-mode bridge                                                         

 description for_spine                                                         

 port access vlan 21                                                           

 telemetry ifa role egress

#

[Device-Twenty-FiveGigE1/0/1] quit

# Globally deploy a device ID to identify an INT node.

[Device] telemetry ifa device-id 192.168.12.25   //Device management IP address

# Globally deploy INT packet parameters that the exit node sends to the collector.

[Device] telemetry ifa collector source 2.1.1.222 destination 11.1.1.2 source-port 7788 destination-port 5555   //2.1.1.222 is the device loopback interface address. 11.1.1.2 is the collector floating IP address.

Configure collector settings

For more information, see “Configure collector settings.”

Configure applications

For more information, see “Configure applications.”

Configure parsing tasks

Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start INT-associated tasks: IntNetconf and IntAnalysis.

Figure 111 Add INT-associated parsing tasks

 

Verify the configuration

1.     Navigate to the Health Analysis > Service Quality Analysis > On-Path Flow Analysis page to view the INT flow analysis result.

The INT page displays the application flow count trend, top 10 flows by latency, top 10 flows by size, device latency, data center topology, and INT session information.

Figure 112 INT flow analysis

 

2.     Click a session to view its details, including the latency trend, flow trend, and path for the application flow.

Figure 113 Application flow path

 

Figure 114 Latency trend graph

 

Figure 115 Flow trend

 

Restrictions and guidelines

·     You cannot configure both INT and telemetry stream.

·     For M-LAG devices, you must add both M-LAG devices. You can specify only the name of a physical interface as the interface name. If the inbound interface is an aggregate interface, specify all member interfaces of the aggregate interface as the interface names. In addition, configure the undo mac-address static source-check enable command on the aggregation group.

·     When the INT interface is an aggregation member port, perform the following tasks:

a.     Navigate to the Automation > DC Networks > Fabrics page on the controller.

b.     Locate the device, configure it, and enable the function of sending aggregation member port information to the controller, as shown in Figure 116.

Figure 116 Sending aggregation member port information to the controller

 


Configure intelligent prediction

Based on statistical learning and machine learning, intelligent prediction analyzes key performance indicator (KPI) data and timer series data, predicts the future trends of the data, generates baseline and prediction results, and locates anomalies.
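As a toy illustration of baselining, the sketch below flags KPI samples that fall outside a rolling mean-plus-k-sigma band. The analyzer's actual models are more sophisticated; this only conveys the idea:

```python
import statistics

# Toy baseline/anomaly sketch: flag KPI samples that fall outside
# mean +/- k * stddev of a sliding history window. Illustrative only;
# the analyzer's prediction models are more sophisticated.
def detect_anomalies(samples, window=5, k=3.0):
    """Return indices of samples outside the rolling baseline band."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev and abs(samples[i] - mean) > k * stdev:
            anomalies.append(i)
    return anomalies

cpu = [40, 41, 39, 40, 42, 41, 95, 40]   # a single spike at index 6
print(detect_anomalies(cpu))  # [6]
```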

Configuration workflow

Figure 117 Configuration workflow

 

 

Network configuration

See “Network configuration.”

Procedure

Configure basic network settings

Configure network devices, add network assets, and configure the protocol template and protocol settings. For more information, see “Procedure.”

Starting parsing tasks on the analyzer

For more information, see “Start parsing tasks on the analyzer.”

Enabling AI prediction

Navigate to the Predict Analysis > AI Task Management page. Select tasks and click Start.

Figure 118 Enabling AI prediction

 

Verify the configuration

Navigate to the Predict Analysis > AI Tasks page to view device details and AI prediction for KPIs.

The AI prediction page displays the predictable device list. You can click to view details about a specific device.

The details page displays the trend graph for device KPIs, as well as the predicted future trend.

You can select a KPI to view the trend and predicted data for the KPI.

Figure 119 Intelligent prediction

 

Restrictions and guidelines

·     Before using intelligent prediction, navigate to the Analysis > Predict Analysis > AI Task Management page to start relevant AI prediction tasks. A task will run at 01:00 or 03:00 every morning.

·     For finer detection precision, the system requires a minimum of one week of data before it can start anomaly detection. The line charts display faulty points only when anomalies are detected.

·     The KPI line chart displays average actual values at a 5-minute granularity, while the faulty points are the transient values delivered by parsing tasks. As a result, the faulty points might not lie on the line of actual values.


Configure health report

The health report module displays the network-wide health report tasks created by the current operator. The task list displays the task name, status, Email, creation time, next execution time, and period type for the tasks.

Configuration workflow

Figure 120 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure mail server settings

1.     Navigate to the System > System Settings > Mail Server Settings page and configure the following parameters:

¡     Server Address: Specify the IP address or domain name of the mail server.

¡     Server Port: Specify the port number of the mail server.

¡     Secure Connection (SSL/TLS): Select the secure connection mode.

¡     Client Authentication: The username and password are required after you select this option.

¡     Sender’s Mail Address: Specify the mail address of the sender.

2.     Click Send Test Mail to verify that the settings take effect.

3.     Click OK.

Figure 121 Configuring mail server settings

 

Create a network-wide health report task

1.     Navigate to the Analysis > Health Analysis > Health Report page, click Create Task, and then configure the following parameters:

¡     Report Type: Select the report type as daily, weekly, or monthly.

¡     Start Time: Specify the task start time.

¡     Task Name: Enter a task name.

¡     Dead Time: Specify the expiration time. If you do not specify this field, the task never expires.

¡     Email: Enter the mail address for the recipient, and then click Add.

2.     Click OK.

Figure 122 Creating a health report task

 

Immediately generate a health report

Navigate to the Analysis > Health Analysis > Health Report page, and then click Immediately Generated. Configure the following parameters:

·     Report Type: Select the report type as daily, weekly, monthly, or user-defined.

·     Start Time: Specify the data start time.

·     End Time: Specify the data end time.

·     Areas: Select a statistics area. Options are all areas and logical area.

·     Generation: Select a generation mode.

¡     Download File: Select this option and click OK. The browser will download the health report attachment.

¡     Email: Select this option and click Add. Then click OK to immediately send the generated health report to the specified Email address.

Figure 123 Immediately generating a health report

 

Verify the configuration

The health report can be generated automatically as scheduled. Alternatively, you can immediately generate a health report manually on the page. You can obtain the health report by downloading the file or sending it to the specified email address.

The health report contains resource overview, issue center overview, health details, application analysis, change analysis, and network-wide issue overview.

Figure 124  Health report

 

Restrictions and guidelines

If the mail server address is a domain name, you need to configure a DNS server when deploying the Unified Platform. Alternatively, you can log in to the Installer platform after deployment, and navigate to the DEPLOY > Clusters > Cluster Parameters page to edit the DNS server setting.

To download a report file that is immediately generated, you need to enable the browser to allow pop-up windows.


Configure RoCE network analysis

RDMA over Converged Ethernet (RoCE) is a network protocol that allows RDMA over an Ethernet network. Devices that support RoCE network analysis include only the H3C S6850 switch series and servers with Mellanox mlx4 and mlx5 drivers.

RoCE network analysis displays RDMA-based server traffic analysis from the perspectives of sessions, flows, servers, and clusters. By default, the system displays the change trends for various indexes within the most recent 24 hours. You can adjust the time range to view data within the most recent 15 days.

Configuration workflow

Figure 125 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure switch settings

Configure basic network settings

Configure network devices, add network assets, and configure the protocol template and protocol settings. For more information, see “Procedure.”

Configure the RoCE network settings

1.     Configure PFC settings.

# Configure WRED table settings for PFC.

[Device] qos wred queue table QOS-EGRESS-100G-PORT

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 0 drop-level 0 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 0 drop-level 1 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 0 drop-level 2 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 1 drop-level 0 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 1 drop-level 1 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 1 drop-level 2 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 2 drop-level 0 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 2 drop-level 1 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 2 drop-level 2 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 3 drop-level 0 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 3 drop-level 1 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 3 drop-level 2 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 4 drop-level 0 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 4 drop-level 1 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 4 drop-level 2 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 5 drop-level 0 low-limit 1000 high-limit 131072 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 5 drop-level 1 low-limit 1000 high-limit 131072 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 5 drop-level 2 low-limit 1000 high-limit 131072 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 5 weighting-constant 0

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 5 ecn

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 6 drop-level 0 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 6 drop-level 1 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 6 drop-level 2 low-limit 3500 high-limit 20000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 6 ecn

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 7 drop-level 0 low-limit 37999 high-limit 38000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 7 drop-level 1 low-limit 37999 high-limit 38000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] queue 7 drop-level 2 low-limit 37999 high-limit 38000 discard-probability 20

[Device-wred-table-QOS-EGRESS-100G-PORT] quit

# Apply the WRED table to an interface.

[Device] interface HundredGigE1/0/1

[Device-HundredGigE1/0/1] priority-flow-control deadlock enable

[Device-HundredGigE1/0/1] priority-flow-control enable

[Device-HundredGigE1/0/1] priority-flow-control no-drop dot1p 5

[Device-HundredGigE1/0/1] flow-interval 5

[Device-HundredGigE1/0/1] priority-flow-control dot1p 5 reserved-buffer 17

[Device-HundredGigE1/0/1] priority-flow-control dot1p 5 ingress-buffer static 100    //Set the static back-pressure frame triggering threshold.

[Device-HundredGigE1/0/1] qos trust dscp

[Device-HundredGigE1/0/1] qos wred apply QOS-EGRESS-100G-PORT

[Device-HundredGigE1/0/1] quit

2.     Configure ECN settings.

# Configure WRED table settings for ECN.

[Device] qos wred queue table aaa

[Device-wred-table-aaa] queue 5 drop-level 0 low-limit 1 high-limit 2 discard-probability 100

[Device-wred-table-aaa] queue 5 drop-level 1 low-limit 1 high-limit 2 discard-probability 100

[Device-wred-table-aaa] queue 5 drop-level 2 low-limit 1 high-limit 2 discard-probability 100

[Device-wred-table-aaa] queue 5 ecn

[Device-wred-table-aaa] quit

# Apply the WRED table to an interface.

[Device] interface Twenty-FiveGigE1/0/1

[Device-Twenty-FiveGigE1/0/1] qos wred apply aaa

[Device-Twenty-FiveGigE1/0/1] quit

3.     Configure gRPC settings.

[Device] telemetry

[Device-telemetry] sensor-group evt_SRZRKAS7GR7CM2RQ3IPOLECG7A

[Device-telemetry-sensor-group-evt_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/portquedropevent

[Device-telemetry-sensor-group-evt_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/portqueoverrunevent

[Device-telemetry-sensor-group-evt_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path netanalysis4/rocev2connectionevent

[Device-telemetry-sensor-group-evt_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path netanalysis4/rocev2statisticevent

[Device-telemetry-sensor-group-evt_SRZRKAS7GR7CM2RQ3IPOLECG7A] quit

[Device-telemetry] sensor-group grp_SRZRKAS7GR7CM2RQ3IPOLECG7A

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path acl/ipv4namedadvancerules

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/bufferusages

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/commbufferusages

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/commheadroomusages

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/ecnandwredstatistics

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/egressdrops

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/ingressdrops

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/pfcspeeds

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/pfcstatistics

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path buffermonitor/portqueconfigurations

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path device/base

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path device/extphysicalentities

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path device/physicalentities

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path device/transceivers

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path ifmgr/ethportstatistics

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path ifmgr/interfaces

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path ifmgr/statistics

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path mqc/globalcategorypolicyaccount

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path mqc/ifcategorypolicyaccount

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path mqc/ifpolicyaccount

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path mqc/rules

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path pfc/pfcports/port

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path pfc/pfcports/port/portnodrops/portnodrop

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path qstat/queuestat

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path route/ipv4routes

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path wred/ifqueuewreds/ifqueuewred

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] sensor path wred/ifqueuewreds/ifqueuewred/dropparameters/dropparameter

[Device-telemetry-sensor-group-grp_SRZRKAS7GR7CM2RQ3IPOLECG7A] quit

[Device-telemetry] destination-group grp_VOXJZRJTRI2BPL6YLRRSMB2AMY

[Device-telemetry-destination-group-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] ipv4-address 192.168.16.100 port 50051

[Device-telemetry-destination-group-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] quit

[Device-telemetry] subscription grp_VOXJZRJTRI2BPL6YLRRSMB2AMY

[Device-telemetry-subscription-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] sensor-group evt_SRZRKAS7GR7CM2RQ3IPOLECG7A

[Device-telemetry-subscription-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] sensor-group grp_SRZRKAS7GR7CM2RQ3IPOLECG7A sample-interval 10

[Device-telemetry-subscription-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] source-address 2.1.1.11

[Device-telemetry-subscription-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] destination-group grp_VOXJZRJTRI2BPL6YLRRSMB2AMY

[Device-telemetry-subscription-grp_VOXJZRJTRI2BPL6YLRRSMB2AMY] quit

[Device-telemetry] quit

4.     Set the upper and lower limits for the average queue length and drop probability for each queue.

[Device] interface Twenty-FiveGigE 1/0/20

[Device-Twenty-FiveGigE1/0/20] qos wred queue 5 drop-level 0 low-limit 4000 high-limit 30000 discard-probability 30

[Device-Twenty-FiveGigE1/0/20] qos wred queue 5 drop-level 1 low-limit 4001 high-limit 30001

[Device-Twenty-FiveGigE1/0/20] qos wred queue 5 drop-level 2 low-limit 4002 high-limit 30002 discard-probability 2

[Device-Twenty-FiveGigE1/0/20] quit

These settings conflict with the qos wred apply command configured on the switch port, so configure them only as needed. The associated information is displayed as three metrics in red, yellow, and green on the Network Health > Queue page, indicating the three drop levels.
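For intuition, the low-limit, high-limit, and discard-probability parameters define a drop curve that can be sketched in a few lines. This is a simplified model only: the switch applies the curve to a weighted average queue length, and on ECN-enabled queues it marks packets rather than dropping them within the ramp.

```python
def wred_drop_probability(avg_queue_len, low_limit, high_limit, discard_probability):
    """Simplified WRED curve: no drops below low-limit, a linear ramp up to
    discard-probability percent at high-limit, and tail drop (100%) beyond it."""
    if avg_queue_len < low_limit:
        return 0.0
    if avg_queue_len >= high_limit:
        return 100.0
    ramp = (avg_queue_len - low_limit) / (high_limit - low_limit)
    return ramp * discard_probability

# Queue 5 in the WRED table above: low-limit 1000, high-limit 131072, probability 20
print(wred_drop_probability(500, 1000, 131072, 20))    # 0.0 (below low-limit)
print(wred_drop_probability(66036, 1000, 131072, 20))  # 10.0 (halfway up the ramp)
```

The wide ramp on queue 5 (1000 to 131072) keeps the drop or mark probability low until the queue is deeply congested, which is why that queue is used for RoCE traffic.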

5.     Enable RoCE.

[Device] netanalysis rocev2 mode bidir

[Device] netanalysis rocev2 drop global

[Device] netanalysis rocev2 statistics global

 

CAUTION:

The commands for enabling RoCE conflict with Telemetry Stream.

 

Configure RoCE server settings

RoCE functions require specific network adapters on the server. Currently, HGE network adapters from Mellanox are used.

A RoCE server requires the following settings before it is ready for use.

Configure basic server environment settings

1.     Install H3Linux.

For more information, see H3C SeerAnalyzer Deployment Guide.

2.     Prepare the installation CD, and mount the ISO through the HDM virtual media.

Figure 126 Mounting the image

 

3.     Create a mount directory.

mkdir -p /mnt

4.     Mount the system image.

mount /dev/sr0 /mnt

5.     Create a local directory.

mkdir /data/localyum

6.     Copy the files to the local directory.

cp -rf /mnt/* /data/localyum

7.     Create the repo file.

cd /etc/yum.repos.d/

(As a best practice, back up the repo file in the directory first.)

cp CentOS-Media.repo local_yum.repo

8.     Configure the yum file.

vi local_yum.repo

Replace the baseurl with the previously created directory /data/localyum, and set the value of enabled to 1.
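After editing, the file should look similar to the following sketch. The section name and gpgcheck value shown here follow the stock CentOS-Media.repo and might differ in your copy; the lines this step requires are baseurl and enabled.

```
[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///data/localyum
gpgcheck=0
enabled=1
```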

Figure 127 Configuring the yum file

 

9.     Back up the base file.

cd /etc/yum.repos.d/

mv CentOS-Base.repo CentOS-Base.repo_bak

10.     Clean the yum cache, and then download and rebuild the metadata for all currently enabled yum repositories.

yum clean all

yum makecache

yum repolist all

11.     Verify the system RPM packages.

Figure 128 Verifying the system RPM packages

 

12.     Install the relevant yum packages.

yum -y install zlib-devel bzip2-devel

yum -y install openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel --skip-broken

yum install createrepo pciutils gcc gcc-c++ flex bison -y

yum install gtk2 atk cairo tcl tcsh tk -y

13.     Install Python.

Change to the /data/localyum/Packages directory and install the RPM packages:

[root@server60 Packages]# rpm -ivh python-libs-2.7.5-76.el7.x86_64.rpm python-devel-2.7.5-76.el7.x86_64.rpm python-2.7.5-76.el7.x86_64.rpm --force

14.     (Optional.) Install iPerf.

Use iPerf for bandwidth measurement. Change to the /data/localyum/Packages directory and install the RPM package:

[root@server61 Packages]# rpm -ivh iperf3-*.x86_64.rpm

Installing the Mellanox driver

1.     Download the driver.

https://content.mellanox.com/ofed/MLNX_OFED-4.9-2.2.4.0/MLNX_OFED_LINUX-4.9-2.2.4.0-rhel7.6-x86_64.tgz 

2.     Decompress the file.

tar zxvf MLNX_OFED_LINUX-4.9-2.2.4.0-rhel7.6-x86_64.tgz

Enter the decompressed directory and execute ./mlnxofedinstall --add-kernel-support

 

 

NOTE:

If the operation fails, obtain the *-ext.tgz installation package from /tmp/MLNX_OFEX**, decompress it, and then execute the ./mlnxofedinstall --all installation command.

 

3.     Enable the driver.

/etc/init.d/openibd restart

systemctl enable openibd

4.     Verify the configuration.

# ibdev2netdev             //If the network adapter is up, the configuration takes effect.

Figure 129 Verifying the configuration

 

 

NOTE:

To install the driver, first disable the firewall with the systemctl stop firewalld.service command.

 

Preconfigure the server adapter (taking RoCE priority=5 as an example)

1.     Prepare for configuration.

Install OFED and enable the openibd service (verify with systemctl status openibd).

# mst start

Figure 130 Enabling the service

 

2.     Configure the TOS for the network adapter (the settings are not retained after a reboot).

# ibdev2netdev             //If the network adapter is up, the configuration takes effect.

Figure 131 Viewing the Mellanox network adapter

 

Assign QoS priorities.

mlnx_qos -i enp161s0 -p 0,1,2,3,4,5,6,7

Set the RoCE mode to v2.

# cma_roce_mode -d mlx5_0 -p 1 -m 2

Set the TOS value.

# cma_roce_tos -d mlx5_0 -t 160    (160 = 0b10100000. The first three bits represent the priority, 0 through 7; 101 represents priority 5.)

Specify the priority trust mode to DSCP.

# mlnx_qos -i enp161s0 --trust dscp
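The parenthetical note on the TOS value can be checked with simple bit arithmetic: TOS carries the DSCP field in its upper six bits, and the top three bits give the priority. This is an illustrative sketch; the device names above (mlx5_0, enp161s0) depend on your hardware.

```python
tos = 160                 # the value passed to cma_roce_tos, 0b10100000

priority = tos >> 5       # top three bits of TOS: 0b101 = 5
dscp = tos >> 2           # top six bits of TOS: 0b101000 = 40 (CS5)

print(bin(tos), priority, dscp)   # 0b10100000 5 40
```

DSCP 40 (CS5) and priority 5 are consistent, which is why this TOS value steers RoCE traffic into the priority 5 queue configured on the switches.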

3.     Configure PFC settings for the network adapter (the settings are not retained after a reboot).

Enable PFC for the queue with priority 5.

# mlnx_qos -i enp161s0 --pfc 0,0,0,0,0,1,0,0    (the eight flags map to priorities 0 through 7)

4.     Verify the configuration.

Verify that the PFC settings take effect on the network adapter as configured.

Figure 132 Verifying the configuration

 

5.     Configure DCQCN settings for the network adapter (the settings are not retained after a reboot).

Enable DCQCN on priority 5 as RP and NP (you do not need to run the commands if the values are already 1).

# echo 1 > /sys/class/net/enp161s0/ecn/roce_np/enable/5

# echo 1 > /sys/class/net/enp161s0/ecn/roce_rp/enable/5

6.     Configure DCQCN settings in the network adapter firmware.

# mlxconfig -d /dev/mst/mt4115_pciconf0 -y s ROCE_CC_PRIO_MASK_P1=0x20    (0b00100000 = 0x20 = 32)

# mlxconfig -d /dev/mst/mt4115_pciconf0 -y s CNP_DSCP_P1=48 CNP_802P_PRIO_P1=6

7.     Verify the configuration.

# mlxconfig -d /dev/mst/mt4115_pciconf0 q | grep 'CNP\|MASK'

Verify that the settings take effect as configured.

Figure 133 Verifying the configuration

 

8.     Enable ECN for TCP flows.

sysctl -w net.ipv4.tcp_ecn=1

net.ipv4.tcp_ecn = 1    (command output)
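The bit arithmetic behind the --pfc flag list and the ROCE_CC_PRIO_MASK_P1 value can be checked with a short sketch. The flag string and the 0x20 mask come from the commands above; the loop itself is illustrative, not a Mellanox tool.

```python
# Derive a per-priority bitmask from the flag list given to "mlnx_qos --pfc".
# The first flag corresponds to priority 0, the last to priority 7.
pfc_flags = "0,0,0,0,0,1,0,0".split(",")

mask = 0
for priority, flag in enumerate(pfc_flags):
    if flag == "1":
        mask |= 1 << priority   # set the bit for each enabled priority

# Priority 5 alone gives 1 << 5 = 0b00100000 = 0x20 = 32, matching the
# ROCE_CC_PRIO_MASK_P1=0x20 value written with mlxconfig.
print(hex(mask))   # 0x20
```

Keeping the PFC flag list, the no-drop priority on the switch, and the congestion-control mask aligned on the same priority is what makes the lossless queue work end to end.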

Configure RoCE-associated parsing tasks

Navigate to the Analysis Options > Task Management page. In the Analysis Task area, start the grpcAnalysis task. Skip this step if the task has been started.

Figure 134 Starting the grpcAnalysis task

 

Configure server and cluster settings for RoCE network analysis

1.     Add a host

Navigate to the Analysis > Health Analysis > Network Analysis > RoCE Network Analysis page. On the Server tab, click Server management and then click Add Host. Configure the following parameters:

¡     IP: Enter the management IP address of the server.

¡     Username: Specify the username used for logging in to the server.

¡     Password: Specify the password used for logging in to the server.

Figure 135 Adding a host

 

2.     Configure RoCE network cluster settings

Navigate to the Analysis > Health Analysis > Network Analysis > RoCE Network Analysis page. On the Server tab, click cluster management and then click Add to add a cluster.

Figure 136 Configuring RoCE network cluster settings

 

Verify the configuration

Navigate to the Analysis > Health Analysis > Network Analysis > RoCE Network Analysis page to view the RoCE network analysis result.

·     The overview page displays the RoCE-associated data about links, topologies, switches, and servers.

Figure 137 RoCE overview

 

Figure 138 Switch details

 

Figure 139 Server details

 

·     The session page displays sessions by 4-tuples. Each session is identified by its 4-tuple rather than by its source and destination IP addresses alone. You can enable data collection for all sessions or for specific sessions.

Figure 140 Session information

 

·     The flow page displays flows by 4-tuples. Each flow is identified by its 4-tuple together with its source and destination IP addresses. You can view the switches and servers on the associated path. To view the flow path information for a flow, expand the flow entry in the flow list.

Figure 141 Flow information

 

Figure 142 Flow path information

 

·     The server page displays statistics collected by servers when RoCE flows pass through. To view data about a NIC, expand the NIC entry in the NIC list.

Figure 143 Server statistics information

 

·     The cluster page classifies NICs into different clusters, and collects NIC statistics by cluster. Double-click an inter-cluster topology edge to view statistics for an individual cluster.

Figure 144 Cluster statistics

 

Restrictions and guidelines

RoCE network analysis supports only the H3C 6850 switch series and servers with Mellanox mlx4 and mlx5 drivers.


Configure the cross-DC network

H3C Super Analyzer-DC (referred to as SUA in this document) applies to the cross-DC scenario and supports incorporating multiple analyzers. It collects and analyzes NetStream, sFlow, and network-wide data from devices, and analyzes network flows across multiple DCs. The cross-DC network function depends on analyzer configuration in the DC scenario. When network-wide data is required on the egress link, you must also configure analyzer settings in the NPA scenario. For more information, see analyzer configuration in the DC/NPA scenario.

Configuration workflow

Figure 145 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure site settings

Navigate to the Analysis > Cross DC Network > Configuration > Sites page to add a site, and specify an analyzer and fabric for the site.

Figure 146 Configuring analyzer and fabric settings for a site

 

Configure analyzer settings

1.     Navigate to the Analysis > Cross DC Network > Configuration > Analyzers page to enter the login information of the configured analyzer to complete analyzer configuration.

Figure 147 Configuring the analyzer settings

 

2.     After configuring analyzer settings, navigate to the Analysis > Analysis Options > Task Management page.

The analyzer automatically runs the SUA flow processing tasks without requiring additional configuration.

 

Configure fabric settings

Navigate to the Analysis > Cross DC Network > Configuration > Fabrics page. The system automatically refreshes the fabric information for a newly added analyzer. If the fabric configuration of an incorporated analyzer changes, you must manually refresh the fabric list for consistency with the analyzer. You can view the devices and ports incorporated by the fabric on the fabric page.

Figure 148 Configuring fabric settings

 

Configure egress link settings

Navigate to the Analysis > Cross DC Network > Configuration > Egress Links page, and select inter-site egress devices and ports to complete the egress link settings for the fabric of the configured site and incorporated analyzer. The data source of an egress link can be NetStream or network-wide data. NetStream data requires NetStream configuration on either end of the egress link. Network-wide data requires installing NPA on the analyzer and configuring collection links.

Figure 149 Configuring egress link settings

 

Configure application settings

Navigate to the Analysis > Cross DC Network > Configuration > Applications page to add an application of the specified type. Optional application types are NetStream and network-wide data. NetStream application data requires configuring application IP, port, and protocol for application identification. Network-wide data requires configuring an egress link first to obtain the associated application.

Figure 150 Configuring application settings

 

Verify the configuration

Navigate to the Analysis > Cross-DC Network > Health Overview page to view the cross-site traffic and application analysis result. The page displays the site, fabric, application, and egress link statistics information, as well as the top 5 cross-site applications and cross-site traffic distribution. You can click the details link for the top 5 cross-site application list to view details about cross-site applications. You can click the traffic distribution chart to view cross-site traffic details.

Figure 151 Health summary

 

Restrictions and guidelines

None.


Configure link analysis

The flow link page displays associated indexes for all flow links, helping you obtain the overall network status and ranking of certain link indexes. The link composition analysis displays trend graphs of total flow rate and total packet rate for selected links, as well as the application distribution statistics. You can perform comparison analysis and composition analysis for different links based on the same KPI.

Configuration workflow

Figure 152 Configuration workflow

 

Network configuration

See “Network configuration.”

Procedure

Configure basic network settings

See “Procedure.” Complete the network device configuration, network asset addition, protocol template settings, and protocol settings.

Configure device settings

To view link data, you can perform this configuration on either end of the link.

NetStream configuration:

[Device] ip netstream export version 9 origin-as                           //Configure NetStream version

[Device] ip netstream export host 191.168.10.10 9996 vpn-instance mgmt    //191.168.10.10 is the northbound service VIP

[Device] ip netstream export source interface M-GigabitEthernet0/0/1        //Configure the source port for outputting packets

[Device] ip netstream timeout active 1   //Set the aging timer for active flows. As a best practice, set the aging timer to one minute.

[Device] sampler net mode random packet-interval n-power 10   //Configure the sampling rate (n-power is 10, that is 2 to the power of 10)

As a best practice, configure the sampling rate as 1024 (n-power is 10, that is, 2 to the power of 10). To edit the device sampling rate, you need to edit the sampling rate parameter in the SUA flow processing task on the analyzer task management page. Otherwise, the application identification accuracy will be affected.

Enable NetStream sampling in both the inbound and outbound directions.

[Device] interface Twenty-FiveGigE1/0/19

[Device-Twenty-FiveGigE1/0/19] ip netstream inbound sampler net

[Device-Twenty-FiveGigE1/0/19] ip netstream outbound sampler net
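Because the device exports only 1 of every 2 to the n-power packets, the analyzer must scale sampled counters back to wire-level estimates; the scaling is plain multiplication. This sketch is for intuition only, not SUA's actual processing code.

```python
def estimate_packets(sampled_packets, n_power=10):
    """Scale a sampled NetStream packet count back to an estimated true count.
    The device samples 1 of every 2**n_power packets (1 in 1024 for n-power 10)."""
    return sampled_packets * 2 ** n_power

# 150 sampled packets at n-power 10 imply roughly 153600 packets on the wire.
print(estimate_packets(150))   # 153600
```

This is why a mismatch between the device's n-power and the sampling-rate parameter in the SUA flow processing task skews traffic statistics and application identification.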

Configure link settings

Navigate to the Analysis Options > Global Configure > Link Configuration page to create a flow link and configure the following parameters:

·     Link Name: Specify a link name, which cannot be modified after creation. The name is a case-sensitive string of up to 50 characters that can contain only Chinese characters, letters, digits, underscores (_), hyphens (-), dots (.), at signs (@), parentheses (()), and brackets ([]).

·     Device: Select a device from the drop-down box.

·     Select Interface: Select an interface from the interface list.

Figure 153 Configure link settings

 

Start parsing tasks for the analyzer

Navigate to the Analysis Options > Task Management page, and enable the NetStream flow processing task in the Analysis Tasks area.

Figure 154 Enabling the NetStream parsing task

 

Verify the configuration

Navigate to the Health Analysis > Link analysis > Link Traffic page to view the flow link data. The page displays the flow link list, as well as the link traffic, traffic rate, packet rate, and data packet information.

Click the name of a link to view its details, including the inbound and outbound traffic rate, packet rate trend, and application distribution.

Figure 155 Flow link list

 

Figure 156 Flow link overview

 

Restrictions and guidelines

After the analyzer receives NetStream data from a NetStream-enabled device, you can view the device and the NetStream interfaces configured on it only after you add a flow link.


Configure vSwitch health monitoring (OVS)

vSwitch health monitoring displays the following information:

·     CPU and memory usage trend of vSwitches.

·     Byte sending and receiving rate, packet sending and receiving rate, incoming and outgoing packet loss rate, and incoming and outgoing error packet rate for vSwitch interfaces.

·     vSwitch health trend, vSwitch device list, and vSwitch device status.

Configuration workflow

Figure 157 Configuration workflow

 

Network configuration

You must import vSwitch assets from the controller, and make sure the network configuration is the same as that on the controller.

Procedure

Configure a data source for the controller

Navigate to the Analysis Options > Resources > Assets > Data sources page to add a data source for the DC controller.

Add vSwitch assets

Navigate to the Analysis Options > Resources > Assets > Asset List page to import vSwitches from the controller.

Verify the configuration

·     Navigate to the Health Analysis > Network Analysis > Network Health > Overview page to view the vSwitch network health status, including vSwitch health trend, vSwitch health status by category, and vSwitch list.

Figure 158 vSwitch health status

 

Figure 159 vSwitch list

 

·     Navigate to the Health Analysis > Health Overview > Topo > Physical Topo page to view the vSwitch topology information.

Figure 160 vSwitch topology

 

·     In the vSwitch list, click a device name to view its details.

Figure 161 vSwitch details-1

 

Figure 162 vSwitch details-2

 

Figure 163 vSwitch details-3

 

·     Navigate to the Health Analysis > Network Analysis > Network Health > vSwitch page to view detailed vSwitch information.

Figure 164 vSwitch information

 

Figure 165 vSwitch interface information

 

Restrictions and guidelines

None.


FAQ

Remote mirroring configuration cannot be deployed to 12500X switches through the controller, and must be manually configured. How can I configure remote mirroring?

To configure remote mirroring:      

1.     Create a service loopback group.

service-loopback group 1 type tunnel

2.     Assign an interface to the service loopback group. When you assign an interface to a service loopback group, the system removes the configuration on the interface.

interface FortyGigE 1/4/0/1

port service-loopback group 1

All configurations on the interface will be lost. Continue?[Y/N]:y

3.     Create GRE tunnel interface Tunnel 1. Specify the IP address of interface Loopback 0 or an in-band IP address reachable at Layer 3 as the source address of the tunnel interface. Specify the destination IP address as the IP address of the collection NIC of the collector or floating IP address of the collector.

interface Tunnel1 mode gre

source loopback0

destination 192.8.0.1   # (IP address of the collection NIC of the collector or floating IP address of the collector)

4.     Create a monitoring group. Assign the tunnel interface created above to the monitoring group.

monitoring-group 1

monitoring-port Tunnel 1

5.     Create an ACL. Configure rules in the ACL as needed.

acl advanced name erspan_global_acl

rule 0 permit tcp syn 1

rule 5 permit tcp fin 1

rule 10 permit tcp rst 1

6.     Create a traffic class, and specify an ACL as the match criterion. Specify match criteria as needed.

traffic classifier cls_erspan

if-match acl name erspan_global_acl inner

if-match vxlan any

7.     Configure an action of mirroring traffic to the specified monitoring group.

traffic behavior be_erspan

mirror-to monitoring-group 1

8.     Create a QoS policy. Associate the traffic class with the traffic behavior in the QoS policy.

qos policy erspan

classifier cls_erspan behavior be_erspan

9.     Apply the QoS policy.

qos apply policy erspan global inbound

10.     Display information about QoS policies.

dis qos policy global

Direction: Inbound

Policy: erspan

Classifier: cls_erspan

Operator: AND

Rule(s) :

If-match acl name erspan_global_acl

Behavior: be_erspan

Mirroring:

Mirror to monitoring group 1

 
