AD-NET Solution Hardware Configuration Guide-5W106

  • Released At: 20-06-2023

Contents

Overview
Introduction
Applicable component versions
Hardware requirements for Unified Platform
Required application installation packages
Hardware configuration requirements for physical servers and VMs
General requirements for Unified Platform deployment
Hardware requirements for Unified Platform (E0711 and later)
Hardware requirements for Unified Platform (versions earlier than E0709, including E06xx)
Hardware requirements for AD-Campus
Deployment on physical servers (x86-64: Intel64/AMD64)
Standalone deployment of controller
Standalone deployment of SeerAnalyzer
Converged deployment of the controller and analyzer
DHCP server deployment
EPS deployment
SMP deployment
EIA deployment
EAD deployment
WSM deployment
Oasis deployment
Deployment on physical servers (x86-64 Hygon servers)
Standalone deployment of controller
Standalone deployment of SeerAnalyzer
Converged deployment of the controller and analyzer
DHCP server deployment
EPS deployment
EIA deployment
EAD deployment
WSM deployment
Oasis deployment
Deployment on physical servers (ARM Kunpeng/Phytium servers)
Standalone deployment of controller
Standalone deployment of SeerAnalyzer
Converged deployment of the controller and analyzer
DHCP server deployment
EPS deployment
EIA deployment
EAD deployment
WSM deployment
Oasis deployment
Hardware requirements for deployment on VMs
Requirements for test and demo deployment
Hardware requirements for AD-DC
Deployment on physical servers (x86-64: Intel64/AMD64)
General drive requirements
Standalone deployment of controller
Hardware requirements for standalone deployment of analyzer
Hardware requirements for quorum node deployment
Hardware requirements for DTN physical server deployment
Hardware requirements for converged deployment of the controller and analyzer
Hardware requirements for Super Controller deployment
Hardware requirements for Super Controller deployment in a converged manner
Hardware requirements for Super Analyzer deployment
Hardware requirements for optional application packages
Deployment on physical servers (domestic servers)
General drive requirements
Standalone deployment of controller
Hardware requirements for standalone deployment of analyzer
Hardware requirements for quorum node deployment
Hardware requirements for DTN physical server deployment
Hardware requirements for converged deployment of the controller and analyzer
Hardware requirements for Super Controller deployment
Hardware requirements for Super Controller deployment in a converged manner
Hardware requirements for Super Analyzer deployment
Hardware requirements for optional application packages
Hardware requirements for deployment on VMs
Standalone deployment of controller
Hardware requirements for standalone deployment of analyzer
Requirements for test and demo deployment
Standalone deployment of controller
Hardware requirements for standalone deployment of analyzer
Hardware requirements for AD-WAN
Hardware requirements for AD-WAN carrier network deployment
Deployment on physical servers
Hardware requirements for deployment on VMs
Hardware requirements for small-scale testing and demo deployment
Hardware requirements for SD-WAN branch access network deployment
Deployment on physical servers
Hardware requirements for deployment on VMs
Hardware requirements for SeerAnalyzer (NPA/TRA/LGA)
Hardware requirements for SeerAnalyzer-NPA
Deployment on physical servers
Hardware requirements for deployment on VMs
Requirements for test and demo deployment
Hardware requirements for SeerAnalyzer-LGA
Deployment on physical servers
Hardware requirements for deployment on VMs
Hardware requirements for SeerAnalyzer-TRA
Deployment on physical servers
Hardware requirements for deployment on VMs
Hardware requirements for license server deployment
Hardware requirements for multi-scenario converged deployment
Separate hardware requirements for each component
Hardware resource calculation rules for multi-scenario converged deployment
Hardware resource calculation rules for controller converged deployment
Hardware resource calculation rules for analyzer converged deployment
Converged deployment of the controller and analyzer
Hardware requirements for AD-NET appliance
Hardware requirements and applicable scenarios
Appendix
Hardware requirements for SeerAnalyzer history versions
Hardware requirements for SeerAnalyzer E61xx
Hardware requirements for SeerAnalyzer E23xx

 


Overview

Introduction

This document provides hardware requirements for AD-NET products, including hardware requirements for single-controller deployment, converged deployment of controller and analyzer, and multi-controller deployment.

·     For hardware requirements of Unified Platform, see "Hardware requirements for Unified Platform."

·     For hardware requirements of AD-Campus scenarios, see "Hardware requirements for AD-Campus."

·     For hardware requirements of AD-DC scenarios, see "Hardware requirements for AD-DC."

·     For hardware requirements of AD-WAN scenarios, see "Hardware requirements for AD-WAN."

·     For hardware requirements of SeerAnalyzer scenarios including NPA, TRA, and LGA, see "Hardware requirements for SeerAnalyzer (NPA/TRA/LGA)."

·     For hardware requirements of the license server, see "Hardware requirements for license server deployment."

·     For hardware requirements of converged deployment, see "Hardware requirements for multi-scenario converged deployment."

Applicable component versions

This document is applicable to AD-NET 5.3 and later.

This document is applicable to SeerAnalyzer E62xx and later. For hardware requirements of SeerAnalyzer history versions, see "Hardware requirements for SeerAnalyzer history versions".


Hardware requirements for Unified Platform

Unified Platform supports single-node and cluster deployment modes. In single-node deployment mode, Unified Platform is deployed on a single master node and offers all its functions on this master node. In cluster mode, Unified Platform is deployed on a cluster that contains three master nodes and N (≥ 0) worker nodes, delivering high availability and service continuity. You can add worker nodes to the cluster for service expansion. Unified Platform can be deployed on physical servers or virtual machines (VMs).
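The two deployment modes can be checked mechanically when you plan node counts. The following Python sketch is purely illustrative (it is not part of Unified Platform or Matrix); it simply encodes the rule that single-node mode uses one master node and cluster mode uses exactly three master nodes plus N (≥ 0) worker nodes.

```python
def deployment_mode(masters: int, workers: int = 0) -> str:
    """Classify a Unified Platform node plan according to the supported modes.

    Single-node mode: one master node, no workers.
    Cluster mode: exactly three master nodes plus N (>= 0) worker nodes.
    """
    if masters == 1 and workers == 0:
        return "single-node deployment"
    if masters == 3 and workers >= 0:
        return f"cluster deployment (3 masters + {workers} workers)"
    raise ValueError("Unsupported plan: use 1 master, or 3 masters plus N (>= 0) workers")


print(deployment_mode(1))              # single-node deployment
print(deployment_mode(3, workers=2))   # cluster deployment (3 masters + 2 workers)
```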

Required application installation packages

Table 1 Required application installation packages

Software package name

Feature description

Remarks

common_H3Linux.iso

H3Linux operating system installation package.

Required.

general_PLAT_Middle

Middleware image

Required.

Supported in D0712 and later

common_PLAT_glusterfs

Provides local shared storage functionalities.

Required.

general_PLAT_portal

Provides portal, unified authentication, user management, service gateway, and help center functionalities.

Required.

general_PLAT_kernel

Provides the permission, resource identity, license, configuration center, resource group, and logging functionalities.

Required.

general_PLAT_kernel-base

Provides the alarm, access parameter template, monitor template, report, and forwarding via mail or SMS functionalities.

Required in basic network scenarios.

general_PLAT_network

Provides the basic management functionalities (including network resources, network performance, network topology, and iCC)

Required in basic network scenarios.

 

For the recommended hardware configuration of the required software packages, see "Hardware requirements for Unified Platform (E0711 and later)," and "Hardware requirements for Unified Platform (versions earlier than E0709, including E06xx)." For the hardware configuration of optional software packages, see Table 2.
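Each optional package in Table 2 below adds its own CPU and memory figures on top of the base Unified Platform requirements, and any packages listed in its Remarks column must also be installed. The following sketch shows the bookkeeping with a few representative values copied from Table 2; the dictionary and function names are illustrative and not part of any H3C tool.

```python
# CPU (cores) and memory (GB) per optional package, copied from Table 2.
# "cpu_x86" applies to x86-64 (Intel64/AMD64); "cpu_alt" applies to x86-64 (Hygon)
# and ARM (Kunpeng 920/Phytium).
OPTIONAL_PACKAGES = {
    "general_PLAT_Dashboard": {"cpu_x86": 0.5, "cpu_alt": 1, "mem_gb": 4},
    "general_PLAT_widget":    {"cpu_x86": 0.5, "cpu_alt": 1, "mem_gb": 2},
    "general_PLAT_netconf":   {"cpu_x86": 3,   "cpu_alt": 6, "mem_gb": 10},
    "Analyzer-AIOPS":         {"cpu_x86": 4,   "cpu_alt": 8, "mem_gb": 12},
}


def additional_resources(selected, arch="x86"):
    """Return the extra (CPU cores, memory GB) required by the selected packages."""
    cpu_key = "cpu_x86" if arch == "x86" else "cpu_alt"
    cpu = sum(OPTIONAL_PACKAGES[name][cpu_key] for name in selected)
    mem = sum(OPTIONAL_PACKAGES[name]["mem_gb"] for name in selected)
    return cpu, mem


cpu, mem = additional_resources(["general_PLAT_Dashboard", "general_PLAT_widget"])
print(f"Add {cpu} CPU cores and {mem} GB of memory to the base configuration.")
```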

Table 2 Optional application installation packages

Software package name

Feature description

CPU (cores), x86-64 (Intel64/AMD64)

CPU (cores), x86-64 (Hygon) + ARM (Kunpeng 920/Phytium)

Memory (GB)

Remarks

Each row below lists its cells in this order: package name, feature description, CPU for x86-64 (Intel64/AMD64), CPU for x86-64 (Hygon) and ARM (Kunpeng 920/Phytium), memory, and remarks.

general_PLAT_kernel-region

Provides the hierarchical management function

0.5

1

6

Dependent on general_PLAT_kernel-base

general_PLAT_Dashboard

Provides the dashboard framework

0.5

1

4

Dependent on general_PLAT_kernel-base

general_PLAT_widget

Provides the widgets on the dashboard of the platform

0.5

1

2

Dependent on general_PLAT_Dashboard

general_Websocket

Provides the southbound WebSocket feature.

1

2

8

 

general_PLAT_suspension

Provides the maintenance tag feature.

0.2

0.4

1

 

ITOA-Syslog

Provides the syslog feature.

1.5

3

8

 

CMDB

Provides database configuration and management.

4

8

16

Dependent on general_PLAT_kernel-base

general_PLAT_aggregation

Provides the alarm aggregation feature.

1.5

3

8

Dependent on general_PLAT_kernel-base

general_PLAT_netconf

Provides the NETCONF invalidity check and NETCONF channel features.

3

6

10

 

general_PLAT_network-ext

Provides the network tool service.

0.5

1

1

Dependent on general_PLAT_network

general_PLAT_oneclickcheck

Provides one-click check.

0.5

1

2

Dependent on general_PLAT_kernel-base

nsm-webdm

Provides NE panels.

2

4

4

Dependent on general_PLAT_network

Analyzer-Collector

Provides data collection through SNMP/NETCONF/GRPC/Syslog/Netstream/Sflow/SNMP-Trap/Flow.

8

16

30

This installation package is required for the analyzer, and you can install the package only on the master nodes. To deploy the analyzer on Unified Platform, first deploy this package. The hardware requirements for deploying this application depend on the services running on the analyzer (device quantity, entry quantity, tunnel quantity, user quantity, and application quantity). For more information, see the hardware requirements in H3C SeerAnalyzer Deployment Guide.

Analyzer-AIOPS

Provides open prediction and anomaly detection APIs.

4

8

12

 

general_PLAT_imonitor

Provides the monitoring and alarm feature.

2.5

5

8

Dependent on portal

This component can be installed only on the Matrix webpage (port 8443). It cannot be installed on the Unified Platform webpage (port 30000).

general_PLAT_autodeploy

Provides automated device deployment.

2

4

16

Dependent on network

This component can be installed only on the Unified Platform webpage (port 30000). It cannot be installed on the Matrix webpage (port 8443).

general_PLAT_snapshot

Provides configuration rollback through a snapshot

1

2

8

portal, kernel

general_PLAT_quickreport

Provides the fast report feature.

2.5

5

8

Dependent on gfs, portal, kernel, and base

 

Hardware configuration requirements for physical servers and VMs

This section provides the hardware requirements for Unified Platform deployment. For information about a specific scenario, see the hardware requirements of the scenario.

General requirements for Unified Platform deployment

Unified Platform can be deployed on physical servers or VMs. To deploy Unified Platform on a VM, make sure the VM runs on VMware ESXi 6.7.0 or H3C CAS E0706 or later.

Unified Platform supports single-node deployment and cluster deployment.

Table 3 shows the general requirements for Unified Platform deployment.

Table 3 General requirements

Item

Requirements

Drive:

·     The drives must be configured in RAID 1, 5, or 10.

·     Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

·     The capacity requirement refers to the capacity after RAID setup. (A rough sizing sketch follows this table.)

Drive configuration option 1

·     System drive: SSDs configured in RAID, with an IOPS of 5000 or higher.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID, with an IOPS of 5000 or higher.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

Network ports

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.
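Because the capacity figures in this guide are usable capacities after RAID setup, the raw drives you install must be larger. The helper below estimates post-RAID usable capacity for the supported RAID levels; it is a generic sketch (the function name and behavior are not from any H3C tool), and real usable space also depends on the storage controller and file system overhead.

```python
def usable_capacity_gb(raid_level: int, drive_count: int, drive_size_gb: float) -> float:
    """Estimate usable capacity after RAID setup for RAID 1, 5, or 10."""
    if raid_level == 1:
        if drive_count < 2:
            raise ValueError("RAID 1 requires at least 2 drives")
        return drive_size_gb                      # mirrored: one drive's worth of space
    if raid_level == 5:
        if drive_count < 3:
            raise ValueError("RAID 5 requires at least 3 drives")
        return (drive_count - 1) * drive_size_gb  # one drive's worth used for parity
    if raid_level == 10:
        if drive_count < 4 or drive_count % 2:
            raise ValueError("RAID 10 requires an even number of drives (at least 4)")
        return drive_count / 2 * drive_size_gb    # striped mirrors: half the raw space
    raise ValueError("This guide requires RAID 1, 5, or 10")


# Example: three 1200 GB drives in RAID 5 yield roughly 2400 GB (2.4 TB) usable.
print(usable_capacity_gb(5, 3, 1200))
```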

 

 

NOTE:

For Unified Platform E0706 and later versions, the etcd partition can share a physical drive with other partitions. As a best practice, use a separate physical drive for the etcd partition. For information about whether etcd can use a separate physical disk, see the configuration guide for the solution.
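On an installed node, you can quickly check whether the etcd installation path is mounted on its own file system rather than sharing the system drive. This is a minimal illustrative sketch for a Linux node with the default path /var/lib/etcd; it only detects a separate partition or mount, so confirming that the partition sits on a separate physical drive still requires checking the RAID and drive layout.

```python
import os


def etcd_on_separate_filesystem(etcd_path: str = "/var/lib/etcd") -> bool:
    """Return True if etcd_path is on a different block device than the root file system."""
    # st_dev differs when the path lives on another mounted file system.
    return os.stat(etcd_path).st_dev != os.stat("/").st_dev


if __name__ == "__main__":
    print("etcd on a separate file system:", etcd_on_separate_filesystem())
```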

 

Hardware requirements for Unified Platform (E0711 and later)

This section provides the hardware requirements for Unified Platform deployment on physical servers and VMs. The CPU requirements are as follows:

·     The number of CPUs is the number of CPU cores when Unified Platform is deployed on physical servers and the number of vCPUs when Unified Platform is deployed on VMs. For example, converged deployment of Unified Platform and the basic network component requires eight CPUs. To deploy them on a physical server, prepare eight CPU cores. To deploy them on a VM, prepare eight vCPUs.

·     The CPUs must be exclusively used and cannot be overcommitted. For deployment on physical servers, if the CPUs support hyper-threading (two hardware threads per core), you can prepare half of the required CPU cores, as illustrated in the sketch below.
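A small helper makes the counting rule concrete: the figure in the tables is CPU cores on a physical server or vCPUs on a VM, and on physical servers whose CPUs provide two hardware threads per core, half of the listed cores are enough. The function is illustrative only.

```python
import math


def physical_cores_to_provision(required_cpus: int, threads_per_core: int = 1) -> int:
    """Translate a CPU figure from the tables into physical cores to provision.

    required_cpus: the CPU count listed in the hardware tables.
    threads_per_core: 2 if the CPUs provide two hardware threads per core, else 1.
    """
    if threads_per_core not in (1, 2):
        raise ValueError("expected 1 or 2 hardware threads per core")
    return math.ceil(required_cpus / threads_per_core)


# Converged Unified Platform + basic network component needs 8 CPUs:
print(physical_cores_to_provision(8))                       # 8 physical cores
print(physical_cores_to_provision(8, threads_per_core=2))   # 4 physical cores
```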

Table 4 Single-node deployment of Unified Platform

Components

Minimum node requirements

Maximum resources that can be managed

Unified Platform (OS+ glusterfs +portal+kernel)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 4 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 8 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 8 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 8 cores, 2.1 GHz

·     Memory: 20 GB.

·     System drive: 400 GB

·     ETCD drive: 50 GB

Lite mode:

·     Network devices: 200

·     Network performance collection instances: 10000

·     Online users: 5

·     Concurrent users: 1

·     Notification rate: ≤ 5/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 8 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 16 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 16 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 16 cores, 2.1 GHz

·     Memory: 32 GB.

·     System drive: 500 GB

·     ETCD drive: 50 GB

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 8 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 16 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 16 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 16 cores, 2.1 GHz

·     Memory: 56 GB.

·     System drive: 600 GB

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 1000

·     Network performance collection instances: 50000

·     Online users: 50

·     Concurrent users: 10

·     Notification rate: ≤ 40/sec

 

Table 5 Cluster deployment

Components

Minimum node requirements

Maximum resources that can be managed

Unified Platform (OS+ glusterfs +portal+kernel)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 6 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 12 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 12 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 12 cores, 2.1 GHz

·     Memory: 24 GB.

·     System drive: 500 GB. 300 GB is required for system and platform deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 200

·     Network performance collection instances: 10000

·     Online users: 5

·     Concurrent users: 1

·     Notification rate: ≤ 40/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 8 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 16 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 16 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 16 cores, 2.1 GHz

·     Memory: 48 GB.

·     System drive: 600 GB. 400 GB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 10 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 20 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 20 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 20 cores, 2.1 GHz

·     Memory: 80 GB.

·     System drive: 800 GB. 600 GB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 2000

·     Network performance collection instances: 200000

·     Online users: 100

·     Concurrent users: 20

·     Notification rate: ≤ 80/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 12 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 24 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 24 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 24 cores, 2.1 GHz

·     Memory: 96 GB.

·     System drive: 1 TB. 800 GB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 5000

·     Network performance collection instances: 500000

·     Online users: 200

·     Concurrent users: 25

·     Notification rate: ≤ 80/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 16 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 32 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 32 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 32 cores, 2.1 GHz

·     Memory: 112 GB.

·     System drive: 1.2 TB. 1 TB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 10000

·     Network performance collection instances: 1000000

·     Online users: 300

·     Concurrent users: 30

·     Notification rate: ≤ 100/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 18 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 36 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 36 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 36 cores, 2.1 GHz

·     Memory: 128 GB.

·     System drive: 1.5 TB. 1.3 TB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 15000

·     Network performance collection instances: 1500000

·     Online users: 500

·     Concurrent users: 40

·     Notification rate: ≤ 100/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 20 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 40 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 40 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 40 cores, 2.1 GHz

·     Memory: 160 GB.

·     System drive: 1.8 TB. 1.6 TB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 20000

·     Network performance collection instances: 2000000

·     Online users: 600

·     Concurrent users: 50

·     Notification rate: ≤ 100/sec

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 24 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 48 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 48 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 48 cores, 2.1 GHz

·     Memory: 192 GB.

·     System drive: 2.4 TB. 2.2 TB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Common mode (default):

·     Network devices: 30000

·     Network performance collection instances: 3000000

·     Online users: 800

·     Concurrent users: 75

·     Notification rate: ≤ 100/sec

 

Hardware requirements for Unified Platform (versions earlier than E0709, including E06xx)

Requirements for deployment on physical servers

Unified Platform supports single-node deployment and cluster deployment.

Table 6 Single-node deployment of Unified Platform

Components

Minimum node requirements

Drive and NIC requirements

Unified Platform (OS+ glusterfs +portal+kernel)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 4 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 8 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 8 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 8 cores, 2.1 GHz

·     Memory: 32 GB.

·     System drive: 600 GB

·     ETCD drive: 50 GB

Drives:

·     The drives must be configured in RAID 1, 5, or 10.

·     Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

·     Drive configuration option 1:

¡     System drive: SSDs configured in RAID, with an IOPS of 5000 or higher.

¡     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID, with an IOPS of 5000 or higher.

¡     ETCD drive: 7.2K RPM SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

Network ports:

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 8 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 16 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 16 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 16 cores, 2.1 GHz

·     Memory: 64 GB.

·     System drive: 1.7 TB

·     ETCD drive: 50 GB

 

To add other components, see Table 2 to select an installation package and add hardware resources.

Table 7 Cluster deployment

Components

Minimum node requirements

Drive and NIC requirements

Unified Platform (OS+ glusterfs +portal+kernel)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 4 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 8 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 8 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 8 cores, 2.1 GHz

·     Memory: 32 GB.

·     System drive: 500 GB. 300 GB is required for system and platform deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Drives:

·     The drives must be configured in RAID 1, 5, or 10.

·     Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

·     Drive configuration option 1:

¡     System drive: SSDs configured in RAID, with an IOPS of 5000 or higher.

¡     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID, with an IOPS of 5000 or higher.

¡     ETCD drive: 7.2K RPM SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

Network ports:

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     CPU: Select CPUs based on the server type.

¡     x86-64 (Intel64/AMD64): 8 cores, 2.0 GHz or above

¡     x86-64 (Hygon server): Hygon G5 7380 16 cores, 2.2 GHz

¡     ARM (Kunpeng processor): Kunpeng 920 16 cores, 2.6 GHz

¡     ARM (Phytium processor): Phytium S2500 16 cores, 2.1 GHz

·     Memory: 64 GB.

·     System drive: 1.5 TB. 1.3 TB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

 

 

NOTE:

For Unified Platform versions earlier than E0706 (including E06xx), the etcd partition requires a separate physical drive.

 

Hardware requirements for deployment on VMs

Table 8 Single-node deployment

Components

Minimum node requirements

Drive and NIC requirements

Unified Platform (OS+ glusterfs +portal+kernel)

·     vCPUs: 8 or more

·     Memory: 32 GB.

·     System drive: 600 GB

·     ETCD drive: 50 GB

Drives:

·     Two virtual drives for the VM, each corresponding to a physical drive on the server.

·     Install the ETCD drive on a different physical drive than any other drives.

·     Make sure etcd has exclusive use of the drive where it is installed.

·     IOPS: 5000 or higher.

Network ports:

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     vCPUs: 8 or more

·     Memory: 64 GB.

·     System drive: 1.7 TB

·     ETCD drive: 50 GB

 

To add other components, see Table 2 to select an installation package and add hardware resources.

Table 9 Hardware requirements for cluster deployment

Components

Minimum node requirements

Drive and NIC requirements

Unified Platform (OS+ glusterfs +portal+kernel)

·     vCPUs: 8 or more

·     Memory: 32 GB.

·     System drive: 500 GB. 300 GB is required for system and platform deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

Drives:

·     Two virtual drives for the VM, each corresponding to a physical drive on the server.

·     Install the ETCD drive on a different physical drive than any other drives.

·     Make sure etcd has exclusive use of the drive where it is installed.

·     IOPS: 5000 or higher

Network ports:

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.

Unified Platform (OS+ glusterfs +portal+kernel) + base network management (kernel-base+kernel-network)

·     vCPUs: 16 or more

·     Memory: 64 GB.

·     System drive: 1.5 TB. 1.3 TB is required for system, platform, and component deployment, and 200 GB is reserved for Gluster.

·     ETCD drive: 50 GB

 


IMPORTANT:

·     To ensure stability of Unified Platform, do not overcommit hardware resources such as CPU, memory, and drive.

·     For Unified Platform versions earlier than E0706 (including E06xx), the etcd partition requires a separate physical drive.

 


Hardware requirements for AD-Campus

AD-Campus can be deployed in single-node mode or cluster mode, and can be deployed on a physical server or VM.

You can deploy the controller and the analyzer separately or in a converged manner.

If you deploy only SeerAnalyzer, or if you deploy the controller and SeerAnalyzer separately in a cluster, the SeerAnalyzer nodes operate in load balancing mode. Reducing the number of SeerAnalyzer nodes reduces analytics performance.

 


IMPORTANT:

·     For Unified Platform versions earlier than E0706 (including E06xx), the etcd partition requires a separate physical drive.

·     For Unified Platform E0706 and later versions, the etcd partition can share a physical drive with other partitions. As a best practice, use a separate physical drive for the etcd partition.

 

Deployment on physical servers (x86-64: Intel64/AMD64)

Table 10 General requirements

Item

Requirements

Drive:

·     The drives must be configured in RAID 1, 5, or 10.

·     Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

·     The capacity requirement refers to the capacity after RAID setup.

System drive

SSDs or SATA/SAS HDDs configured in RAID that provides a minimum total drive size of 2.4 TB.

For HDDs, the rotation speed must be 7.2K RPM or higher.

ETCD drive

SSDs or 7.2K RPM or above SATA/SAS HDDs configured in RAID that provides a minimum total drive size of 50 GB.

Installation path: /var/lib/etcd.

Data drive

SSDs or SATA/SAS HDDs. As a best practice, configure RAID 5 by using three or more data drives.

Network ports

·     Non-bonding mode: One NIC that has one 1Gbps network port. If the node also acts as an analyzer, one more network port is required. As a best practice, use a 10 Gbps network port.

·     Bonding mode (mode 2 and mode 4 are recommended): Two NICs, each NIC having two 1Gbps network ports. If the node also acts as an analyzer, two more network ports are required. As a best practice, use two 10 Gbps network ports to set up a Linux bonding interface.

Upon converged deployment of the analyzer, controller, and Unified Platform, configure the controller and Unified Platform to share a network port, and configure the analyzer to use a separate northbound network port. If only one northbound network port is available, configure the controller and the northbound network of the analyzer to share a network port, and configure Unified Platform to use a separate network port. If the analyzer uses a unified northbound and southbound network, configure the analyzer and controller to share a network port.

 

 

NOTE:

If the NIC of the server provides 10-GE and GE ports, configure Unified Platform and the analyzer to share the 10-GE port and the controller to use the GE port alone.
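The port-sharing rules above can be summarized as a small decision helper for converged deployment of the analyzer, controller, and Unified Platform. The function and the port labels are illustrative only; they simply restate the rules in this section.

```python
def assign_network_ports(northbound_ports: int, unified_nb_sb: bool = False) -> dict:
    """Suggest which components share a network port in converged deployment.

    northbound_ports: number of ports available for northbound traffic.
    unified_nb_sb: True if the analyzer uses one network for both northbound
    and southbound traffic.
    """
    if unified_nb_sb:
        # Analyzer and controller share a port.
        return {"port 1": ["controller", "analyzer"],
                "port 2": ["Unified Platform"]}
    if northbound_ports >= 2:
        # Default rule: controller and Unified Platform share, analyzer is separate.
        return {"port 1": ["controller", "Unified Platform"],
                "port 2": ["analyzer northbound"]}
    # Only one northbound port: controller shares with the analyzer northbound network.
    return {"port 1": ["controller", "analyzer northbound"],
            "port 2": ["Unified Platform"]}


print(assign_network_ports(northbound_ports=1))
```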

 

In the following tables, the ratio of switches to ACs/APs is 1:3. For example, 400 managed devices correspond to 100 switches and 300 ACs and APs.

Standalone deployment of controller

Table 11 Standalone deployment of controller on a single node (Unified Platform + vDHCP + SE + EIA + WSM (Oasis excluded))

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 24 cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     2000 online users

·     400 switches, ACs, and APs in total

Controller node

1

·     CPU: 24 cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     5000 online users

·     1000 switches, ACs, and APs in total

 

Table 12 Standalone deployment of controller in a cluster (Unified Platform + vDHCP + SE + EIA + WSM (Oasis excluded))

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 22 core, 2.0 GHz.

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     2000 online users

·     400 switches, ACs, and APs in total

Controller node

3

·     CPU: 22 core, 2.0 GHz.

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     5000 online users

·     1000 switches, ACs, and APs in total

Controller node

3

·     CPU: 24 cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     10000 online users

·     2000 switches, ACs, and APs in total

Controller node

3

·     CPU: 24 cores, 2.0 GHz

·     Memory: 144 GB.

·     System drive: 2.7 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     20000 online users

·     4000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 26 cores, 2.0 GHz

·     Memory: 144 GB.

·     System drive: 3.0 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     40000 online users

·     8000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 36 cores, 2.0 GHz

·     Memory: 160 GB.

·     System drive: 3.2 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     60000 online users

·     12000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 36 cores, 2.0 GHz

·     Memory: 176 GB.

·     System drive: 3.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     100000 online users

·     20000 devices, including switches, ACs, and APs.

 

Standalone deployment of SeerAnalyzer

The hardware requirements for analyzer deployment vary by analyzer version. This document describes only the hardware requirements for SeerAnalyzer E62xx and later. For hardware requirements of SeerAnalyzer history versions, see "Hardware requirements for SeerAnalyzer history versions".
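For planning, the single-node tiers in Table 13 below can be encoded as a simple lookup keyed on the number of online users. The values in this sketch are copied from the first rows of Table 13; the code itself is illustrative and not an H3C tool.

```python
# (max online users, CPU cores, memory GB, system drive TB, data drive TB),
# taken from the first four rows of Table 13.
ANALYZER_SINGLE_NODE_TIERS = [
    (2000,  20, 192, 2.4, 2),
    (5000,  20, 192, 2.4, 2),
    (10000, 20, 192, 2.4, 3),
    (20000, 24, 224, 3.0, 4),
]


def analyzer_tier(online_users: int) -> dict:
    """Return the smallest Table 13 tier that covers the given number of online users."""
    for max_users, cpu, mem_gb, sys_tb, data_tb in ANALYZER_SINGLE_NODE_TIERS:
        if online_users <= max_users:
            return {"cpu_cores": cpu, "memory_gb": mem_gb,
                    "system_drive_tb": sys_tb, "data_drive_tb": data_tb}
    raise ValueError("For more than 20000 online users, see the larger tiers in Table 13")


print(analyzer_tier(8000))   # the 10000-user tier: 20 cores, 192 GB, 2.4 TB + 3 TB
```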

Table 13 Independent deployment of the analyzer in single-node mode (Unified Platform + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     2000 online users

·     400 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     5000 online users

·     1000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     10000 online users

·     2000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 224 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     20000 online users

·     4000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 28 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 5 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     40000 online users

·     8000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 32 cores (total physical cores), 2.0 GHz

·     Memory: 288 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 7 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     60000 online users

·     12000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 40 cores (total physical cores), 2.0 GHz

·     Memory: 384 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 11 TB after RAID setup. 8 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Table 14 Standalone deployment of SeerAnalyzer in a cluster (Unified Platform + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     2000 online users

·     400 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     5000 online users

·     1000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     10000 online users

·     2000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 160 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     20000 online users

·     4000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 192 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     40000 online users

·     8000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 28 cores (total physical cores), 2.0 GHz

·     Memory: 224 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 5 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     60000 online users

·     12000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 32 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Converged deployment of the controller and analyzer

Converged deployment of the controller and analyzer supports the following modes:

·     Converged deployment of the controller and analyzer in single-node mode. For the hardware requirements, see Table 15.

·     Converged deployment of the controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. For hardware requirements, see the hardware requirements for standalone controller deployment and standalone analyzer deployment.

 

 

NOTE:

For converged deployment of the controller and analyzer in 3+N cluster mode, the Oasis and Analyzer-Collector services required by the analyzer will be deployed on the master nodes. Therefore, in addition to the hardware resources required by the controller, you must also reserve the hardware resources required by the Oasis and Analyzer-Collector services on the master nodes. For the hardware requirements of the Oasis service, see Table 31. For the hardware requirements of the Analyzer-Collector service, see Table 2.

 

·     Converged deployment of the controller and analyzer in a three-node cluster mode. In this mode, deploy the controller and analyzer on three master nodes. For hardware requirements, see Table 16.

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller.
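For the 3+N cluster mode, the note above means each master node must carry the controller plus the Oasis and Analyzer-Collector services. The arithmetic below uses the smallest controller tier from Table 12, the smallest Oasis tier from Table 31, and the Analyzer-Collector figures from Table 2 (x86-64 Intel64/AMD64); it is a worked sketch, and the variable names are illustrative.

```python
# Per-master-node requirements (CPU cores, memory GB) for a small 3+N deployment.
controller = {"cpu": 22, "mem_gb": 128}   # Table 12, 2000 users / 400 devices tier
oasis = {"cpu": 4, "mem_gb": 16}          # Table 31, up to 5000 online APs
collector = {"cpu": 8, "mem_gb": 30}      # Table 2, Analyzer-Collector (x86-64)

total_cpu = controller["cpu"] + oasis["cpu"] + collector["cpu"]
total_mem_gb = controller["mem_gb"] + oasis["mem_gb"] + collector["mem_gb"]

# In this example, reserve 34 cores and 174 GB of memory on each master node.
print(f"Reserve {total_cpu} CPU cores and {total_mem_gb} GB of memory per master node.")
```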

Table 15 Converged deployment of the controller and analyzer in single-node mode (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

1

·     CPU: 38 cores, 2.0 GHz

·     Memory: 272 GB.

·     System drive: 2.4 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     2000 online users

·     400 devices, including switches, ACs, and APs.

Controller and analyzer

1

·     CPU: 38 cores, 2.0 GHz

·     Memory: 304 GB.

·     System drive: 2.4 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     5000 online users

·     1000 devices, including switches, ACs, and APs.

 

Table 16 Converged deployment of the controller and analyzer in three-node cluster mode (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

3

·     CPU: 36 cores, 2.0 GHz

·     Memory: 224 GB.

·     System drive: 3 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     2000 online users

·     400 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 36 cores, 2.0 GHz

·     Memory: 224 GB.

·     System drive: 3 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     5000 online users

·     1000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 40 cores, 2.0 GHz

·     Memory: 240 GB.

·     System drive: 3.2 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     10000 online users

·     2000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 38 cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 3.8 TB (after RAID setup)

·     Data drive: 3 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     20000 online users

·     4000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 44 cores, 2.0 GHz

·     Memory: 288 GB.

·     System drive: 4.1 TB (after RAID setup)

·     Data drive: 4 TB or above after RAID setup. If HDDs are used, three HDDs of the same model are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     40000 online users

·     8000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 56 cores, 2.0 GHz

·     Memory: 336 GB.

·     System drive: 4.3 TB (after RAID setup)

·     Data drive: 5 TB or above after RAID setup. If HDDs are used, four HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     60000 online users

·     12000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 58 cores, 2.0 GHz

·     Memory: 400 GB.

·     System drive: 4.5 TB (after RAID setup)

·     Data drive: 8 TB or above after RAID setup. If HDDs are used, six HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     100000 online users

·     20000 devices, including switches, ACs, and APs.

 

DHCP server deployment

The following deployment modes are available:

·     H3C vDHCP Server—vDHCP Server can assign IP addresses to devices during onboarding of the devices and assign IP addresses to users and endpoints that access the network. You can use vDHCP Server to assign IP addresses to users if the number of online endpoints does not exceed 50000. vDHCP Server and Unified Platform are installed on the same server or group of servers.

Table 17 Hardware requirements for vDHCP deployment in single-node or cluster mode

Node configuration

Allocatable IP addresses

Node name

Node quantity

Minimum single-node requirements

vDHCP node

1 or 3

·     CPU: 1 core, 2.0 GHz.

·     Memory: 2 GB.

·     ≤ 15000

vDHCP node

1 or 3

·     CPU: 1 core, 2.0 GHz.

·     Memory: 3 GB.

·     ≤ 50000

 

·     Windows DHCP Server—If you do not use vDHCP Server to assign IP addresses to endpoints and no other DHCP server is deployed, you can use the DHCP server service provided with the Windows operating system to assign IP addresses to endpoints. Make sure the operating system is Windows Server 2012 R2 or a higher version. As a best practice, use Windows Server 2016. If the number of online endpoints is greater than 15000, the Windows DHCP Server must be deployed separately. As a best practice, deploy two Windows DHCP servers for HA.

Table 18 Hardware requirements for Windows DHCP server deployed on a physical server

Allocatable IP addresses

Independent Windows DHCP server

≤ 10000

·     CPU: 8 cores, 2.0 GHz

·     Memory: 16 GB

·     Drive: 2 × 300 GB drives in RAID 1

·     RAID controller: 256 MB cache

10000 to 40000

·     CPU: 16 cores, 2.0 GHz

·     Memory: 16 GB

·     Drive: 2 × 300 GB drives in RAID 1

·     RAID controller: 256 MB cache

40000 to 60000

·     CPU: 24 cores, 2.0 GHz

·     Memory: 32 GB

·     Drive: 2 × 500 GB drives in RAID 1

·     RAID controller: 1 GB cache

60000 to 100000

·     CPU: 24 cores, 2.0 GHz

·     Memory: 64 GB

·     Drive: 2 × 500 GB drives in RAID 1

·     RAID controller: 1 GB cache

 


IMPORTANT:

As a best practice, configure a fail-permit DHCP server for the AD-Campus solution unless the user clearly states that it is not required. As a best practice, use a Windows DHCP server deployed in standalone mode as the fail-permit DHCP server.

 

·     Other third-party DHCP server (for example, InfoBlox, BlueCat, and WRD)—User supplied. Such a server cannot be connected to the AD-Campus controller and does not support name-IP binding.
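The selection guidance above can be summarized as a simple planning helper. The thresholds come from Table 17 and the Windows DHCP Server notes in this section; the function itself is illustrative only.

```python
def dhcp_server_recommendation(online_endpoints: int) -> str:
    """Suggest a DHCP server option based on the number of online endpoints."""
    if online_endpoints <= 15000:
        return "vDHCP Server (1 core, 2 GB memory per node) or a Windows DHCP server"
    if online_endpoints <= 50000:
        # Above 15000 endpoints, a Windows DHCP server must be deployed separately.
        return ("vDHCP Server (1 core, 3 GB memory per node), "
                "or a separately deployed Windows DHCP server")
    # vDHCP Server is recommended only up to 50000 online endpoints.
    return "Separately deployed Windows DHCP server (size it with Table 18)"


print(dhcp_server_recommendation(30000))
```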

EPS deployment

EPS supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EPS. The hardware requirements for converged deployment of Unified Platform, controller, and EPS are the sum of the hardware requirements for them.

Table 19 Hardware requirements of EPS single-node deployment

Node configuration

Maximum endpoints that can be managed

Node name

Node quantity

Minimum single-node requirements

EPS nodes

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 12 GB.

·     System drive: 100 GB after RAID setup

≤ 10000

EPS nodes

1

·     CPU: 6 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

≤ 20000

 

Table 20 Hardware requirements for EPS cluster deployment

Node configuration

Maximum endpoints that can be managed

Node name

Node quantity

Minimum single-node requirements

EPS nodes

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 6 GB.

·     System drive: 100 GB after RAID setup

≤ 10000

EPS nodes

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

≤ 20000

EPS nodes

3

·     CPU: 6 core, 2.0 GHz.

·     Memory: 12 GB.

·     System drive: 300 GB after RAID setup

≤ 50000

EPS nodes

3

·     CPU: 8 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 500 GB after RAID setup

≤ 100000

 

SMP deployment

In the AD-Campus solution, unified network security management is supported, and SMP can be installed on top of Unified Platform. Security logging is not supported in converged deployment. The following tables describe hardware requirements only for SMP. The hardware requirements for converged deployment of Unified Platform, controller, and SMP are the sum of the hardware requirements for them.

In cluster deployment mode, SMP uses only the hardware resources on the master node.

Table 21 Hardware requirements for deploying SMP on physical servers

Security devices

Requirements

1 to 10

·     CPU: 4 core, 2.0 GHz.

·     Memory: 10 GB.

·     Drive: 100 GB

10 to 50

·     CPU: 8 core, 2.0 GHz.

·     Memory: 16 GB.

·     Drive: 200 GB

50 to 100

·     CPU: 16 core, 2.0 GHz.

·     Memory: 32 GB.

·     Drive: 500 GB

 

EIA deployment

EIA supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EIA. The hardware requirements for converged deployment of Unified Platform, controller, and EIA are the sum of the hardware requirements for them.

Table 22 Hardware requirements of EIA single-node deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

10000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

20000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

40000 online users

Controller node

1

·     CPU: 8 core, 2.0 GHz.

·     Memory: 32 GB.

·     System drive: 500 GB after RAID setup

60000 online users

Controller node

1

·     CPU: 8 core, 2.0 GHz.

·     Memory: 32 GB.

·     System drive: 500 GB after RAID setup

100000 online users

 

Table 23 Hardware requirements for EIA cluster deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

10000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

20000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

40000 online users

Controller node

3

·     CPU: 8 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 500 GB after RAID setup

60000 online users

Controller node

3

·     CPU: 8 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 500 GB after RAID setup

100000 online users

 


IMPORTANT:

GlusterFS requires 50 GB more space.

 

EAD deployment

EAD supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EAD. The hardware requirements for converged deployment of Unified Platform, controller, EIA, and EAD are the sum of the hardware requirements for them.

Table 24 Hardware requirements for EAD single-node deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 2 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

1

·     CPU: 2 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 14 GB.

·     System drive: 200 GB (after RAID setup)

10000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 14 GB.

·     System drive: 200 GB (after RAID setup)

20000 online users

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 18 GB.

·     System drive: 200 GB (after RAID setup)

40000 online users

 

Table 25 Hardware requirements for EAD cluster deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 2 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB or above after RAID setup.

2000 online users

Controller node

3

·     CPU: 2 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB or above after RAID setup.

5000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 12 GB.

·     System drive: 200 GB or above after RAID setup.

10000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 12 GB.

·     System drive: 200 GB or above after RAID setup.

20000 online users

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 200 GB or above after RAID setup.

40000 online users

Controller node

3

·     CPU: 6 cores, 2.0 GHz

·     Memory: 24 GB.

·     System drive: 500 GB or above after RAID setup.

100000 online users

 

WSM deployment

Table 26 Software package description

Software package name

Feature description

Remarks

WSM

Basic WLAN management, including wireless device monitoring and configuration.

Required.

Oasis

Intelligent WLAN analysis, including AP statistics, endpoint statistics, issue analysis, one-click diagnosis, one-click optimization, wireless security, Doctor AP, issue resolving, gradual optimization, and application analysis.

Optional

 

WSM supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for WSM. The hardware requirements for converged deployment of Unified Platform, controller, and WSM are the sum of the hardware requirements for them.

Table 27 Hardware requirements for WSM deployment in single-node mode (including only WSM)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online APs

Controller node

1

·     CPU: 4 core, 2.0 GHz.

·     Memory: 12 GB.

·     System drive: 400 GB after RAID setup

5000 online APs

 

Table 28 Hardware requirements for WSM deployment in cluster mode (including only WSM)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online APs

Controller node

3

·     CPU: 4 core, 2.0 GHz.

·     Memory: 12 GB.

·     System drive: 400 GB after RAID setup

5000 online APs

Controller node

3

·     CPU: 6 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 600 GB after RAID setup

10000 online APs

Controller node

3

·     CPU: 6 core, 2.0 GHz.

·     Memory: 20 GB.

·     System drive: 800 GB after RAID setup

20000 online APs

 

Table 29 Hardware requirements for WSM deployment in single-node mode (including WSM and Oasis)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 8 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

2000 online APs

Controller node

1

·     CPU: 8 core, 2.0 GHz.

·     Memory: 28 GB.

·     System drive: 600 GB after RAID setup

5000 online APs

 

Table 30 Hardware requirements for WSM deployment in cluster mode (including WSM and Oasis)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 8 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

2000 online APs

Controller node

3

·     CPU: 8 core, 2.0 GHz.

·     Memory: 28 GB.

·     System drive: 600 GB after RAID setup

5000 online APs

Controller node

3

·     CPU: 12 core, 2.0 GHz.

·     Memory: 40 GB.

·     System drive: 1 TB after RAID setup

10000 online APs

Controller node

3

·     CPU: 12 core, 2.0 GHz.

·     Memory: 52 GB.

·     System drive: 1.5 TB after RAID setup

20000 online APs

 

Oasis deployment

The following tables describe hardware requirements only for Oasis. The hardware requirements for converged deployment of Unified Platform, controller, and Oasis are the sum of the hardware requirements for them.

Table 31 Hardware requirements for Oasis deployment (applicable to both single-node and cluster modes)

Node name

Minimum single-node requirements

Maximum resources that can be managed

Controller node

·     CPU: 4 core, 2.0 GHz.

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

5000 online APs

Controller node

·     CPU: 6 core, 2.0 GHz.

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

10000 online APs

Controller node

·     CPU: 6 core, 2.0 GHz.

·     Memory: 32 GB.

·     System drive: 900 GB after RAID setup

20000 online APs

 

Deployment on physical servers (x86-64 Hygon servers)

IMPORTANT:

For deployment on domestic servers, purchase domestic commercial operating systems as needed.

 

Table 32 General requirements

Item

Requirements

Drive:

·     The drives must be configured in RAID 1, 5, or 10.

·     Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

·     The capacity requirement refers to the capacity after RAID setup.

System drive

SSDs or SATA/SAS HDDs configured in RAID that provides a minimum total drive size of 2.4 TB.

For HDDs, the rotation speed must be 7.2K RPM or higher.

ETCD drive

SSDs or 7.2K RPM or above SATA/SAS HDDs configured in RAID that provides a minimum total drive size of 50 GB.

Installation path: /var/lib/etcd.

Data drive

SSDs or SATA/SAS HDDs. As a best practice, configure RAID 5 by using three or more data drives.

Network ports:

·     Non-bonding mode: One NIC that has one 1 Gbps network port. If the node also acts as an analyzer, one more network port is required. As a best practice, use a 10 Gbps network port.

·     Bonding mode (mode 2 and mode 4 are recommended): Two NICs, each NIC having two 1 Gbps network ports. If the node also acts as an analyzer, two more network ports are required. As a best practice, use two 10 Gbps network ports to set up a Linux bonding interface.

Upon converged deployment of the analyzer, controller, and Unified Platform, configure the controller and Unified Platform to share a network port, and configure the analyzer to use a separate northbound network port. If only one northbound network port is available, configure the controller and the northbound network of the analyzer to share a network port, and configure Unified Platform to use a separate network port. If the analyzer uses a unified northbound and southbound network, configure the analyzer and controller to share a network port.

CPU

Hygon C86 7265, 2.2 GHz

 

 

NOTE:

If the NIC of the server provides 10-GE and GE ports, configure Unified Platform and the analyzer to share the 10-GE port and the controller to use the GE port alone.

 

In the following tables, the ratio of switches to ACs/APs is 1:3.
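For example, a row that lists 400 switches, ACs, and APs in total corresponds to roughly 100 switches and 300 ACs and APs. The minimal Python sketch below only illustrates this 1:3 split; the function name split_by_ratio is illustrative and not part of any product.

```python
# Minimal illustration of the 1:3 switch-to-AC/AP ratio assumed in the
# following tables. The totals below match rows in this section.

def split_by_ratio(total_devices: int, switch_share: int = 1, ac_ap_share: int = 3):
    """Split a total managed-device count into switches and ACs/APs."""
    parts = switch_share + ac_ap_share
    switches = total_devices * switch_share // parts
    acs_and_aps = total_devices - switches
    return switches, acs_and_aps

for total in (400, 1000, 2000, 4000):
    switches, acs_and_aps = split_by_ratio(total)
    print(f"{total} devices -> {switches} switches, {acs_and_aps} ACs/APs")
# 400 devices -> 100 switches, 300 ACs/APs
# 1000 devices -> 250 switches, 750 ACs/APs
# 2000 devices -> 500 switches, 1500 ACs/APs
# 4000 devices -> 1000 switches, 3000 ACs/APs
```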

Standalone deployment of controller

Table 33 Standalone deployment of controller on a single node (Unified Platform + vDHCP + SE + EIA + WSM (Oasis excluded) )

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 36 cores

·     Memory: 144 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     2000 online users

·     400 switches, ACs, and APs in total

Controller node

1

·     CPU: 36 cores

·     Memory: 144 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     5000 online users

·     1000 switches, ACs, and APs in total

 

Table 34 Standalone deployment of controller in a cluster (Unified Platform + vDHCP + SE + EIA + WSM (Oasis excluded) )

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 32 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     2000 online users

·     400 switches, ACs, and APs in total

Controller node

3

·     CPU: 32 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     5000 online users

·     1000 switches, ACs, and APs in total

Controller node

3

·     CPU: 36 cores

·     Memory: 144 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     10000 online users

·     2000 switches, ACs, and APs in total

Controller node

3

·     CPU: 40 cores

·     Memory: 144 GB.

·     System drive: 2.7 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     20000 online users

·     4000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 42 cores

·     Memory: 144 GB.

·     System drive: 3.0 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     40000 online users

·     8000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 54 cores

·     Memory: 176 GB.

·     System drive: 3.2 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     60000 online users

·     12000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 58 cores

·     Memory: 192 GB.

·     System drive: 3.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     100000 online users

·     20000 devices, including switches, ACs, and APs.

 

Standalone deployment of SeerAnalyzer

The hardware requirements for analyzer deployment vary by analyzer version. This document describes only the hardware requirements for SeerAnalyzer E62xx and later. For hardware requirements of SeerAnalyzer history versions, see "Hardware requirements for SeerAnalyzer history versions".

Table 35 Independent deployment of the analyzer in single-node mode (Unified Platform + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 30 cores

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     2000 online users

·     400 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 30 cores

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     5000 online users

·     1000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 30 cores

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     10000 online users

·     2000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 36 cores

·     Memory: 224 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     20000 online users

·     4000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 42 cores

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 5 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     40000 online users

·     8000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 48 cores

·     Memory: 288 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 7 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     60000 online users

·     12000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 60 cores

·     Memory: 384 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 11 TB after RAID setup. 8 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Table 36 Standalone deployment of SeerAnalyzer in a cluster (Unified Platform + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 30 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     2000 online users

·     400 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 30 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     5000 online users

·     1000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 30 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     10000 online users

·     2000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 30 cores

·     Memory: 160 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     20000 online users

·     4000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 36 cores

·     Memory: 192 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     40000 online users

·     8000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 42 cores

·     Memory: 224 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 5 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     60000 online users

·     12000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 48 cores

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Converged deployment of the controller and analyzer

Converged deployment of the controller and analyzer supports the following modes:

·     Converged deployment of the controller and analyzer in single-node mode. For the hardware requirements, see Table 15.

·     Converged deployment of the controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. For hardware requirements, see the hardware requirements for standalone controller deployment and standalone analyzer deployment.

 

 

NOTE:

For converged deployment of the controller and analyzer in 3+N cluster mode, the Oasis and Analyzer-Collector services required by the analyzer will be deployed on the master nodes. Therefore, in addition to the hardware resources required by the controller, you must also reserve the hardware resources required by the Oasis and Analyzer-Collector services on the master nodes. For the hardware requirements of the Oasis service, see Table 52. For the hardware requirements of the Analyzer-Collector service, see Table 2.

 

·     Converged deployment of the controller and analyzer in a three-node cluster mode. In this mode, deploy the controller and analyzer on three master nodes. For hardware requirements, see Table 16.

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller.

Table 37 Converged deployment of the controller and analyzer in single-node mode (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

1

·     CPU: 56 cores

·     Memory: 288 GB.

·     System drive: 2.4 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     2000 online users

·     400 devices, including switches, ACs, and APs.

Controller and analyzer

1

·     CPU: 56 cores

·     Memory: 320 GB.

·     System drive: 2.4 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     5000 online users

·     1000 devices, including switches, ACs, and APs.

 

Table 38 Converged deployment of the controller and analyzer in three-node cluster mode (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

3

·     CPU: 54 cores

·     Memory: 224 GB.

·     System drive: 3 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     2000 online users

·     400 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 54 cores

·     Memory: 224 GB.

·     System drive: 3 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     5000 online users

·     1000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 60 cores

·     Memory: 256 GB.

·     System drive: 3.2 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     10000 online users

·     2000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 60 cores

·     Memory: 266 GB.

·     System drive: 3.8 TB (after RAID setup)

·     Data drive: 3 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     20000 online users

·     4000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 70 cores

·     Memory: 288 GB.

·     System drive: 4.1 TB (after RAID setup)

·     Data drive: 4 TB or above after RAID setup. If HDDs are used, three HDDs of the same model are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     40000 online users

·     8000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 84 cores

·     Memory: 352 GB.

·     System drive: 4.3 TB (after RAID setup)

·     Data drive: 5 TB or above after RAID setup. If HDDs are used, four HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     60000 online users

·     12000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 90 cores

·     Memory: 416 GB.

·     System drive: 4.5 TB (after RAID setup)

·     Data drive: 8 TB or above after RAID setup. If HDDs are used, six HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     100000 online users

·     20000 devices, including switches, ACs, and APs.

 

DHCP server deployment

The following deployment modes are available:

·     UNIS vDHCP Server—vDHCP Server can assign IP addresses to devices during onboarding of the devices and assign IP addresses to users and endpoints that access the network. You can use vDHCP Server to assign IP addresses to users if the number of online endpoints does not exceed 50000. vDHCP Server and Unified Platform are installed on the same server or group of servers.

Table 39 Hardware requirements for vDHCP deployment in single-node or cluster mode

Node configuration

Allocatable IP addresses

Node name

Node quantity

Minimum single-node requirements

vDHCP node

1 or 3

·     CPU: 1 core

·     Memory: 2 GB.

≤ 15000

vDHCP node

1 or 3

·     CPU: 1 core

·     Memory: 3 GB.

≤ 50000

 

·     Windows DHCP Server—If you do not use vDHCP Server to assign IP addresses to endpoints and no other DHCP server is deployed, you can use the DHCP server service provided with the Windows operating system to assign IP addresses to endpoints. Make sure the operating system is Windows Server 2012 R2 or a higher version. As a best practice, use Windows Server 2016. If the number of online endpoints is greater than 15000, the Windows DHCP Server must be deployed separately. As a best practice, deploy two Windows DHCP servers for HA.

Table 40 Hardware requirements for Windows DHCP server deployed on a physical server

Allocatable IP addresses

Independent Windows DHCP server

≤ 10000

·     CPU: 8 cores, 2.0 GHz

·     Memory: 16 GB

·     Drive: 2 × 300 GB drives in RAID 1

·     RAID controller: 256 MB cache

10000 to 40000

·     CPU: 16 cores, 2.0 GHz

·     Memory: 16 GB

·     Drive: 2 × 300 GB drives in RAID 1

·     RAID controller: 256 MB cache

40000 to 60000

·     CPU: 24 cores, 2.0 GHz

·     Memory: 32 GB

·     Drive: 2 × 500 GB drives in RAID 1

·     RAID controller: 1 GB cache

60000 to 100000

·     CPU: 24 cores, 2.0 GHz

·     Memory: 64 GB

·     Drive: 2 × 500 GB drives in RAID 1

·     RAID controller: 1 GB cache

 

IMPORTANT:

As a best practice, configure a fail-permit DHCP server for the SDN-Campus solution unless the user clearly declares that it is not required, and use a Windows DHCP server deployed in standalone mode as the fail-permit DHCP server.

 

·     Other third-party DHCP Server (for example, InfoBlox, BlueCat, and WRD)—User supplied. It cannot be connected to the SDN-Campus controller and does not support name-IP binding. For a quick way to map endpoint counts to the preceding DHCP options, see the sketch below.
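The minimal Python sketch below only restates the endpoint-count guidance in this section (vDHCP Server for up to 50000 online endpoints; a separately deployed Windows DHCP server, preferably two for HA, above 15000 endpoints). The function name choose_dhcp_deployment and the prefer_vdhcp parameter are illustrative assumptions, not part of any product.

```python
# Illustration of the DHCP deployment guidance above; not a product tool.
# Thresholds restated from this section:
#   - UNIS vDHCP Server is suitable when online endpoints do not exceed 50000.
#   - A Windows DHCP server must be deployed separately when online endpoints
#     exceed 15000; deploy two servers for HA as a best practice.

def choose_dhcp_deployment(online_endpoints: int, prefer_vdhcp: bool = True) -> str:
    if prefer_vdhcp and online_endpoints <= 50000:
        return "UNIS vDHCP Server co-deployed with Unified Platform"
    if online_endpoints <= 15000:
        return "Windows DHCP Server (separate deployment not mandatory)"
    return "Windows DHCP Server deployed separately (two servers recommended for HA)"

print(choose_dhcp_deployment(12000))                      # vDHCP co-deployed
print(choose_dhcp_deployment(80000))                      # separate Windows DHCP servers
print(choose_dhcp_deployment(30000, prefer_vdhcp=False))  # separate Windows DHCP servers
```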

EPS deployment

EPS supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EPS. The hardware requirements for converged deployment of Unified Platform, controller, and EPS are the sum of the hardware requirements for them.

Table 41 Hardware requirements of EPS single-node deployment

Node configuration

Maximum endpoints that can be managed

Node name

Node quantity

Minimum single-node requirements

EPS nodes

1

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 100 GB after RAID setup

≤ 10000

EPS nodes

1

·     CPU: 10 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

≤ 20000

 

Table 42 Hardware requirements for EPS cluster deployment

Node configuration

Maximum endpoints that can be managed

Node name

Node quantity

Minimum single-node requirements

EPS nodes

3

·     CPU: 6 cores

·     Memory: 6 GB.

·     System drive: 100 GB after RAID setup

≤ 10000

EPS nodes

3

·     CPU: 6 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

≤ 20000

EPS nodes

3

·     CPU: 10 cores

·     Memory: 12 GB.

·     System drive: 300 GB after RAID setup

≤ 50000

EPS nodes

3

·     CPU: 12 cores

·     Memory: 16 GB.

·     System drive: 500 GB after RAID setup

≤ 100000

 

EIA deployment

EIA supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EIA. The hardware requirements for converged deployment of Unified Platform, controller, and EIA are the sum of the hardware requirements for them.

Table 43 Hardware requirements of EIA single-node deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 6 cores

·     Memory: 24 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 24 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

10000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

20000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

40000 online users

Controller node

1

·     CPU: 12 cores

·     Memory: 32 GB.

·     System drive: 500 GB after RAID setup

60000 online users

Controller node

1

·     CPU: 12 cores

·     Memory: 32 GB.

·     System drive: 500 GB after RAID setup

100000 online users

 

Table 44 Hardware requirements for EIA cluster deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

10000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

20000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

40000 online users

Controller node

3

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 500 GB after RAID setup

60000 online users

Controller node

3

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 500 GB after RAID setup

100000 online users

 

IMPORTANT:

GlusterFS requires 50 GB more space.

 

EAD deployment

EAD supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EAD. The hardware requirements for converged deployment of Unified Platform, controller, EIA, and EAD are the sum of the hardware requirements for them.
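For example, the additional resources that EIA and EAD together require on top of Unified Platform and the controller can be obtained by adding their per-component figures at the same scale. The minimal Python sketch below performs this arithmetic with the single-node figures for 2000 online users from Table 43 (EIA) and Table 45 (EAD); it only illustrates the summation rule, and the function name sum_requirements is illustrative.

```python
# Illustrative sizing arithmetic for converged deployment: the total node
# requirement is the sum of the per-component requirements at the same scale.

def sum_requirements(*components):
    """Add CPU cores, memory (GB), and system drive (GB) across components."""
    totals = {"cpu_cores": 0, "memory_gb": 0, "system_drive_gb": 0}
    for component in components:
        for key in totals:
            totals[key] += component.get(key, 0)
    return totals

# Single-node figures for 2000 online users, taken from Table 43 (EIA)
# and Table 45 (EAD). Add the Unified Platform and controller figures
# from their own tables in the same way.
eia = {"cpu_cores": 6, "memory_gb": 24, "system_drive_gb": 200}
ead = {"cpu_cores": 4, "memory_gb": 8, "system_drive_gb": 200}

print(sum_requirements(eia, ead))
# {'cpu_cores': 10, 'memory_gb': 32, 'system_drive_gb': 400}
```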

Table 45 Hardware requirements for EAD single-node deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

1

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 14 GB.

·     System drive: 200 GB (after RAID setup)

10000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 14 GB.

·     System drive: 200 GB (after RAID setup)

20000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 18 GB.

·     System drive: 200 GB (after RAID setup)

40000 online users

 

Table 46 Hardware requirements for EAD cluster deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB or above after RAID setup.

2000 online users

Controller node

3

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB or above after RAID setup.

5000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 200 GB or above after RAID setup.

10000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 200 GB or above after RAID setup.

20000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB or above after RAID setup.

40000 online users

Controller node

3

·     CPU: 10 cores

·     Memory: 24 GB.

·     System drive: 500 GB or above after RAID setup.

100000 online users

 

WSM deployment

Table 47 Software package description

Software package name

Feature description

Remarks

WSM

Basic WLAN management, including wireless device monitoring and configuration.

Required.

Oasis

Intelligent WLAN analysis, including AP statistics, endpoint statistics, issue analysis, one-click diagnosis, one-click optimization, wireless security, Doctor AP, issue resolving, gradual optimization, and application analysis.

Optional.

 

WSM supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for WSM. The hardware requirements for converged deployment of Unified Platform, controller, and WSM are the sum of the hardware requirements for them.

Table 48 Hardware requirements for WSM deployment in single-node mode (including only WSM)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 6 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online APs

Controller node

1

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 400 GB after RAID setup

5000 online APs

 

Table 49 Hardware requirements for WSM deployment in cluster mode (including only WSM)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 6 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online APs

Controller node

3

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 400 GB after RAID setup

5000 online APs

Controller node

3

·     CPU: 10 cores

·     Memory: 16 GB.

·     System drive: 600 GB after RAID setup

10000 online APs

Controller node

3

·     CPU: 10 cores

·     Memory: 20 GB.

·     System drive: 800 GB after RAID setup

20000 online APs

 

Table 50 Hardware requirements for WSM deployment in single-node mode (including WSM and Oasis)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

2000 online APs

Controller node

1

·     CPU: 12 cores

·     Memory: 28 GB.

·     System drive: 600 GB after RAID setup

5000 online APs

 

Table 51 Hardware requirements for WSM deployment in cluster mode (including WSM and Oasis)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

2000 online APs

Controller node

3

·     CPU: 12 cores

·     Memory: 28 GB.

·     System drive: 600 GB after RAID setup

5000 online APs

Controller node

3

·     CPU: 18 cores

·     Memory: 40 GB.

·     System drive: 1 TB after RAID setup

10000 online APs

Controller node

3

·     CPU: 18 cores

·     Memory: 52 GB.

·     System drive: 1.5 TB after RAID setup

20000 online APs

 

Oasis deployment

The following tables describe hardware requirements only for Oasis. The hardware requirements for converged deployment of Unified Platform, controller, and Oasis are the sum of the hardware requirements for them.

Table 52 Hardware requirements for Oasis deployment (applicable to both single-node and cluster modes)

Node name

Minimum single-node requirements

Maximum resources that can be managed

Controller node

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

5000 online APs

Controller node

·     CPU: 9 cores

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

10000 online APs

Controller node

·     CPU: 9 cores

·     Memory: 32 GB.

·     System drive: 900 GB after RAID setup

20000 online APs

 

Deployment on physical servers (ARM Kunpeng/Phytium servers)

IMPORTANT:

For deployment on domestic servers, purchase domestic commercial operating systems as needed.

 

Table 53 General requirements

Item

Requirements

Drive:

·     The drives must be configured in RAID 1, 5, or 10.

·     Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

·     The capacity requirement refers to the capacity after RAID setup.

System drive

SSDs or SATA/SAS HDDs configured in RAID that provides a minimum total drive size of 2.4 TB.

For HDDs, the rotation speed must be 7.2K RPM or higher.

ETCD drive

SSDs or 7.2K RPM or above SATA/SAS HDDs configured in RAID that provides a minimum total drive size of 50 GB.

Installation path: /var/lib/etcd.

Data drive

SSDs or SATA/SAS HDDs. As a best practice, configure RAID 5 by using three or more data drives.

Network ports:

·     Non-bonding mode: One NIC that has one 1 Gbps network port. If the node also acts as an analyzer, one more network port is required. As a best practice, use a 10 Gbps network port.

·     Bonding mode (mode 2 and mode 4 are recommended): Two NICs, each NIC having two 1 Gbps network ports. If the node also acts as an analyzer, two more network ports are required. As a best practice, use two 10 Gbps network ports to set up a Linux bonding interface.

Upon converged deployment of the analyzer, controller, and Unified Platform, configure the controller and Unified Platform to share a network port, and configure the analyzer to use a separate northbound network port. If only one northbound network port is available, configure the controller and the northbound network of the analyzer to share a network port, and configure Unified Platform to use a separate network port. If the analyzer uses a unified northbound and southbound network, configure the analyzer and controller to share a network port.

CPU

·     Kunpeng 920, 2.6GHz

·     Phytium S2500, 2.1 GHz

 

 

NOTE:

If the NIC of the server provides 10-GE and GE ports, configure Unified Platform and the analyzer to share the 10-GE port and the controller to use the GE port alone.

 

In the following tables, the ratio of switches to ACs/APs is 1:3.

Standalone deployment of controller

Table 54 Standalone deployment of controller on a single node (Unified Platform + vDHCP + SE + EIA + WSM (Oasis excluded) )

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 36 cores

·     Memory: 144 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     2000 online users

·     400 switches, ACs, and APs in total

Controller node

1

·     CPU: 36 cores

·     Memory: 144 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     5000 online users

·     1000 switches, ACs, and APs in total

 

Table 55 Standalone deployment of controller in a cluster (Unified Platform + vDHCP + SE + EIA + WSM (Oasis excluded) )

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 32 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     2000 online users

·     400 switches, ACs, and APs in total

Controller node

3

·     CPU: 32 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     5000 online users

·     1000 switches, ACs, and APs in total

Controller node

3

·     CPU: 36 cores

·     Memory: 144 GB.

·     System drive: 2.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     10000 online users

·     2000 switches, ACs, and APs in total

Controller node

3

·     CPU: 40 cores

·     Memory: 144 GB.

·     System drive: 2.7 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     20000 online users

·     4000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 42 cores

·     Memory: 144 GB.

·     System drive: 3.0 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     40000 online users

·     8000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 54 cores

·     Memory: 176 GB.

·     System drive: 3.2 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     60000 online users

·     12000 devices, including switches, ACs, and APs.

Controller node

3

·     CPU: 58 cores

·     Memory: 192 GB.

·     System drive: 3.4 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     100000 online users

·     20000 devices, including switches, ACs, and APs.

 

Standalone deployment of SeerAnalyzer

The analyzer cannot be deployed on Phytium servers.

Table 56 Independent deployment of the analyzer in single-node mode (Unified Platform + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 30 cores

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     2000 online users

·     400 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 30 cores

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     5000 online users

·     1000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 30 cores

·     Memory: 192 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     10000 online users

·     2000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 36 cores

·     Memory: 224 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     20000 online users

·     4000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 42 cores

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 5 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     40000 online users

·     8000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 48 cores

·     Memory: 288 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 7 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     60000 online users

·     12000 switches, ACs, and APs in total

Analyzer node

1

·     CPU: 60 cores

·     Memory: 384 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 11 TB after RAID setup. 8 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Table 57 Standalone deployment of SeerAnalyzer in a cluster (Unified Platform + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 30 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     2000 online users

·     400 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 30 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     5000 online users

·     1000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 30 cores

·     Memory: 128 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     10000 online users

·     2000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 30 cores

·     Memory: 160 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     20000 online users

·     4000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 36 cores

·     Memory: 192 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     40000 online users

·     8000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 42 cores

·     Memory: 224 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 5 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     60000 online users

·     12000 switches, ACs, and APs in total

Analyzer node

3

·     CPU: 48 cores

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Converged deployment of the controller and analyzer

Converged deployment of the controller and analyzer supports the following modes:

·     Converged deployment of the controller and analyzer in single-node mode. For the hardware requirements, see Table 15.

·     Converged deployment of the controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. For hardware requirements, see the hardware requirements for standalone controller deployment and standalone analyzer deployment.

 

 

NOTE:

For converged deployment of the controller and analyzer in 3+N cluster mode, the Oasis and Analyzer-Collector services required by the analyzer will be deployed on the master nodes. Therefore, in addition to the hardware resources required by the controller, you must also reserve the hardware resources required by the Oasis and Analyzer-Collector services on the master nodes. For the hardware requirements of the Oasis service, see Table 73. For the hardware requirements of the Analyzer-Collector service, see Table 2.

 

·     Converged deployment of the controller and analyzer in a three-node cluster mode. In this mode, deploy the controller and analyzer on three master nodes. For hardware requirements, see Table 16.

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller. The analyzer cannot be deployed on Phytium servers.

Table 58 Converged deployment of the controller and analyzer in single-node mode (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

1

·     CPU: 56 cores

·     Memory: 288 GB.

·     System drive: 2.4 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     2000 online users

·     400 devices, including switches, ACs, and APs.

Controller and analyzer

1

·     CPU: 56 cores

·     Memory: 320 GB.

·     System drive: 2.4 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     5000 online users

·     1000 devices, including switches, ACs, and APs.

 

Table 59 Converged deployment of the controller and analyzer in three-node cluster mode (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

3

·     CPU: 54 cores

·     Memory: 224 GB.

·     System drive: 3 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     2000 online users

·     400 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 54 cores

·     Memory: 224 GB.

·     System drive: 3 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     5000 online users

·     1000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 60 cores

·     Memory: 256 GB.

·     System drive: 3.2 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     10000 online users

·     2000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 60 cores

·     Memory: 266 GB.

·     System drive: 3.8 TB (after RAID setup)

·     Data drive: 3 TB or above after RAID setup. If HDDs are used, two HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     20000 online users

·     4000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 70 cores

·     Memory: 288 GB.

·     System drive: 4.1 TB (after RAID setup)

·     Data drive: 4 TB or above after RAID setup. If HDDs are used, three HDDs of the same model are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     40000 online users

·     8000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 84 cores

·     Memory: 352 GB.

·     System drive: 4.3 TB (after RAID setup)

·     Data drive: 5 TB or above after RAID setup. If HDDs are used, four HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     60000 online users

·     12000 devices, including switches, ACs, and APs.

Controller and analyzer

3

·     CPU: 90 cores

·     Memory: 416 GB.

·     System drive: 4.5 TB (after RAID setup)

·     Data drive: 8 TB or above after RAID setup. If HDDs are used, six HDDs of the same type are required.

·     ETCD drive: 50 GB or above after RAID setup.

·     100000 online users

·     20000 devices, including switches, ACs, and APs.

 

DHCP server deployment

The following deployment modes are available:

·     UNIS vDHCP Server—vDHCP Server can assign IP addresses to devices during onboarding of the devices and assign IP addresses to users and endpoints that access the network. You can use vDHCP Server to assign IP addresses to users if the number of online endpoints does not exceed 50000. vDHCP Server and Unified Platform are installed on the same server or group of servers.

Table 60 Hardware requirements for vDHCP deployment in single-node or cluster mode

Node configuration

Allocatable IP addresses

Node name

Node quantity

Minimum single-node requirements

vDHCP node

1 or 3

·     CPU: 1 core

·     Memory: 2 GB.

≤ 15000

vDHCP node

1 or 3

·     CPU: 1 core

·     Memory: 3 GB.

≤ 50000

 

·     Windows DHCP Server—If you do not use vDHCP Server to assign IP addresses to endpoints and no other DHCP server is deployed, you can use the DHCP server service provided with the Windows operating system to assign IP addresses to endpoints. Make sure the operating system is Windows Server 2012 R2 or a higher version. As a best practice, use Windows Server 2016. If the number of online endpoints is greater than 15000, the Windows DHCP Server must be deployed separately. As a best practice, deploy two Windows DHCP servers for HA.

Table 61 Hardware requirements for Windows DHCP server deployed on a physical server

Allocatable IP addresses

Independent Windows DHCP server

≤ 10000

·     CPU: 8 cores, 2.0 GHz

·     Memory: 16 GB

·     Drive: 2 × 300 GB drives in RAID 1

·     RAID controller: 256 MB cache

10000 to 40000

·     CPU: 16 cores, 2.0 GHz

·     Memory: 16 GB

·     Drive: 2 × 300 GB drives in RAID 1

·     RAID controller: 256 MB cache

40000 to 60000

·     CPU: 24 cores, 2.0 GHz

·     Memory: 32 GB

·     Drive: 2 × 500 GB drives in RAID 1

·     RAID controller: 1 GB cache

60000 to 100000

·     CPU: 24 cores, 2.0 GHz

·     Memory: 64 GB

·     Drive: 2 × 500 GB drives in RAID 1

·     RAID controller: 1 GB cache

 

IMPORTANT:

As a best practice, configure a fail-permit DHCP server for the SDN-Campus solution unless the user clearly declares that it is not required, and use a Windows DHCP server deployed in standalone mode as the fail-permit DHCP server.

 

·     Other third-party DHCP Server (for example, InfoBlox, BlueCat, and WRD)—User supplied. It cannot be connected to the SDN-Campus controller and does not support name-IP binding.

EPS deployment

EPS supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EPS. The hardware requirements for converged deployment of Unified Platform, controller, and EPS are the sum of the hardware requirements for them.

Table 62 Hardware requirements of EPS single-node deployment

Node configuration

Maximum endpoints that can be managed

Node name

Node quantity

Minimum single-node requirements

EPS nodes

1

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 100 GB after RAID setup

≤ 10000

EPS nodes

1

·     CPU: 10 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

≤ 20000

 

Table 63 Hardware requirements for EPS cluster deployment

Node configuration

Maximum endpoints that can be managed

Node name

Node quantity

Minimum single-node requirements

EPS nodes

3

·     CPU: 6 cores

·     Memory: 6 GB.

·     System drive: 100 GB after RAID setup

≤ 10000

EPS nodes

3

·     CPU: 6 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

≤ 20000

EPS nodes

3

·     CPU: 10 cores

·     Memory: 12 GB.

·     System drive: 300 GB after RAID setup

≤ 50000

EPS nodes

3

·     CPU: 12 cores

·     Memory: 16 GB.

·     System drive: 500 GB after RAID setup

≤ 100000

 

EIA deployment

EIA supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EIA. The hardware requirements for converged deployment of Unified Platform, controller, and EIA are the sum of the hardware requirements for them.

Table 64 Hardware requirements of EIA single-node deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 6 cores

·     Memory: 24 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 24 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

10000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

20000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 28 GB.

·     System drive: 300 GB after RAID setup

40000 online users

Controller node

1

·     CPU: 12 cores

·     Memory: 32 GB.

·     System drive: 500 GB after RAID setup

60000 online users

Controller node

1

·     CPU: 12 cores

·     Memory: 32 GB.

·     System drive: 500 GB after RAID setup

100000 online users

 

Table 65 Hardware requirements for EIA cluster deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

10000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

20000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 20 GB.

·     System drive: 300 GB after RAID setup

40000 online users

Controller node

3

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 500 GB after RAID setup

60000 online users

Controller node

3

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 500 GB after RAID setup

100000 online users

 

IMPORTANT:

GlusterFS requires 50 GB more space.

 

EAD deployment

EAD supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for EAD. The hardware requirements for converged deployment of Unified Platform, controller, EIA, and EAD are the sum of the hardware requirements for them.

Table 66 Hardware requirements for EAD single-node deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online users

Controller node

1

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

5000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 14 GB.

·     System drive: 200 GB (after RAID setup)

10000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 14 GB.

·     System drive: 200 GB (after RAID setup)

20000 online users

Controller node

1

·     CPU: 6 cores

·     Memory: 18 GB.

·     System drive: 200 GB (after RAID setup)

40000 online users

 

Table 67 Hardware requirements for EAD cluster deployment

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB or above after RAID setup.

2000 online users

Controller node

3

·     CPU: 4 cores

·     Memory: 8 GB.

·     System drive: 200 GB or above after RAID setup.

5000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 200 GB or above after RAID setup.

10000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 200 GB or above after RAID setup.

20000 online users

Controller node

3

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB or above after RAID setup.

40000 online users

Controller node

3

·     CPU: 10 cores

·     Memory: 24 GB.

·     System drive: 500 GB or above after RAID setup.

100000 online users

 

WSM deployment

Table 68 Software package description

Software package name

Feature description

Remarks

WSM

Basic WLAN management, including wireless device monitoring and configuration.

Required.

Oasis

Intelligent WLAN analysis, including AP statistics, endpoint statistics, issue analysis, one-click diagnosis, one-click optimization, wireless security, Doctor AP, issue resolving, gradual optimization, and application analysis.

Optional.

 

WSM supports single-node deployment and cluster deployment. The following tables describe hardware requirements only for WSM. The hardware requirements for converged deployment of Unified Platform, controller, and WSM are the sum of the hardware requirements for them.

Table 69 Hardware requirements for WSM deployment in single-node mode (including only WSM)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 6 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online APs

Controller node

1

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 400 GB after RAID setup

5000 online APs

 

Table 70 Hardware requirements for WSM deployment in cluster mode (including only WSM)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 6 cores

·     Memory: 8 GB.

·     System drive: 200 GB after RAID setup

2000 online APs

Controller node

3

·     CPU: 6 cores

·     Memory: 12 GB.

·     System drive: 400 GB after RAID setup

5000 online APs

Controller node

3

·     CPU: 10 cores

·     Memory: 16 GB.

·     System drive: 600 GB after RAID setup

10000 online APs

Controller node

3

·     CPU: 10 cores

·     Memory: 20 GB.

·     System drive: 800 GB after RAID setup

20000 online APs

 

Table 71 Hardware requirements for WSM deployment in single-node mode (including WSM and Oasis)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

2000 online APs

Controller node

1

·     CPU: 12 cores

·     Memory: 28 GB.

·     System drive: 600 GB after RAID setup

5000 online APs

 

Table 72 Hardware requirements for WSM deployment in cluster mode (including WSM and Oasis)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: 12 cores

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

2000 online APs

Controller node

3

·     CPU: 12 cores

·     Memory: 28 GB.

·     System drive: 600 GB after RAID setup

5000 online APs

Controller node

3

·     CPU: 18 cores

·     Memory: 40 GB.

·     System drive: 1 TB after RAID setup

10000 online APs

Controller node

3

·     CPU: 18 cores

·     Memory: 52 GB.

·     System drive: 1.5 TB after RAID setup

20000 online APs

 

Oasis deployment

The following tables describe hardware requirements only for Oasis. The hardware requirements for converged deployment of Unified Platform, controller, and Oasis are the sum of the hardware requirements for them.

Table 73 Hardware requirements for Oasis deployment (applicable to both single-node and cluster modes)

Node name

Minimum single-node requirements

Maximum resources that can be managed

Controller node

·     CPU: 6 cores

·     Memory: 16 GB.

·     System drive: 200 GB after RAID setup

5000 online APs

Controller node

·     CPU: 9 cores

·     Memory: 24 GB.

·     System drive: 400 GB after RAID setup

10000 online APs

Controller node

·     CPU: 9 cores

·     Memory: 32 GB.

·     System drive: 900 GB after RAID setup

20000 online APs

 

Hardware requirements for deployment on VMs

The AD-Campus 6.0 solution supports deployment of the controller and analyzer on VMs. The supported hypervisors and hypervisor versions are the same as those supported by Unified Platform.

For deployment on a VM, if the host where the VM resides is enabled with hyperthreading, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If the host is not enabled with hyperthreading, the number of required vCPUs is the same as the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server. As a best practice, enable hyperthreading.
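The vCPU rule in the previous paragraph can be written as a small helper. The sketch below is only an illustration: the physical core counts come from the physical-server tables in this guide, and hyperthreading doubles the vCPU allocation.

```python
def required_vcpus(physical_cores: int, hyperthreading_enabled: bool) -> int:
    """Return the number of vCPUs to allocate to the VM.

    physical_cores is the CPU core count required for deployment on a
    physical server, taken from the hardware requirement tables.
    """
    return physical_cores * 2 if hyperthreading_enabled else physical_cores

# Example: a role that requires 16 physical cores on a physical server.
print(required_vcpus(16, hyperthreading_enabled=True))   # 32 vCPUs
print(required_vcpus(16, hyperthreading_enabled=False))  # 16 vCPUs
```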

 


IMPORTANT:

·     Allocate CPU, memory, and drive resources to VMs in the recommended sizes, and make sure sufficient physical resources are reserved for the allocation. To ensure system stability, do not overcommit hardware resources such as CPU, memory, and drives.

·     For Unified Platform versions earlier than E0706 (including E06xx), the etcd partition requires a separate physical drive. For Unified Platform E0706 and later versions, the etcd partition can share a physical drive with other partitions. As a best practice, use a separate physical drive for the etcd partition.

·     To deploy the controller on a VMware VM, you must enable promiscuous mode and forged transmits on the host where the VM resides.

 

Requirements for test and demo deployment

The hardware configuration in this section is intended only for testing and demonstration, and cannot be used in live (production) network scenarios.

 


IMPORTANT:

Performance testing is not supported with this resource configuration.

 

Table 74 Hardware requirements of controller deployment for small-scale testing (Unified Platform + vDHCP + SE + EIA + EAD + EPS)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 12 cores, 2.0 GHz

·     Memory: 88 GB.

·     System drive: 2 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     100 online users

·     20 switches

 

Table 75 Hardware requirements of controller deployment for small-scale testing (Unified Platform + vDHCP + SE + EIA + EAD + EPS + WSM (including Oasis))

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: 16 cores, 2.0 GHz

·     Memory: 104 GB.

·     System drive: 2 TB (after RAID setup)

·     ETCD drive: 50 GB (after RAID setup)

·     100 online users

·     20 switches, ACs, and APs in total

 

Table 76 Hardware requirements of analyzer deployment for small-scale testing (Unified Platform + vDHCP + SE + EIA + WSM + SA)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Controller and analyzer

1

·     CPU: 22 cores, 2.0 GHz

·     Memory: 224 GB.

·     System drive: 2 TB (after RAID setup)

·     Data drive: 2 TB or above after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     100 online users

·     20 switches, ACs, and APs in total

 


Hardware requirements for AD-DC

The AD-DC controller supports single-node and cluster deployment modes. As a best practice, use the three-node cluster deployment mode. The analyzer component contains management analyzers and collectors. The management analyzer supports multi-node cluster deployment. As a best practice, use three-node cluster deployment. The collector supports single-node and cluster deployment modes. As a best practice, use single-node deployment and deploy a collector for each fabric.

You can deploy two controllers in a disaster recovery system to provide redundancy. In this case, a quorum node is required for automatic switchover between the primary and backup controllers. A quorum node can be deployed only in single-node mode.

The controller supports the simulation feature. To use the simulation feature, you must deploy the DTN component and DTN physical hosts, which simulate the actual network environment through virtual switches.

The DTN component supports the following deployment modes:

·     Converged deployment of the DTN component and the controller. Deploy the DTN component on a master node, requiring no additional worker nodes. In this scenario, reserve hardware resources for the DTN component on the master node.

·     Standalone deployment of the DTN component. Deploy the DTN component on a worker node. In this scenario, an additional worker node is required.

 


IMPORTANT:

·     For AD-DC controller versions earlier than E6203, the Unified Platform version is earlier than E0706 (including E06xx), and the etcd partition requires an independent physical drive.

·     For AD-DC controller version E6203 and later, the Unified Platform version is E0706 or later, and the etcd partition can share a physical drive with other partitions. As a best practice, use an independent physical drive for the etcd partition.

 

Deployment on physical servers (x86-64: Intel64/AMD64)

General drive requirements

This section introduces the general drive configuration, including the recommended RAID mode, drive type, and performance requirements. For specific drive capacity requirements, see the configuration requirements of the corresponding scenarios.

Table 77 General requirements

Item

Requirements

General drive requirements:

·     The drives must be configured in RAID 1, 5, or 10.

·     Use SSDs or 7.2K RPM or above HDDs, with a minimum IOPS of 5000.

·     For HDDs, the RAID controller must have a 1 GB cache, support the powerfail safeguard module, and have a supercapacitor installed.

System drive

SSD

A minimum total capacity of 1920 GB after RAID setup. Recommended:

·     RAID 5: 3 × 960 GB or 5 × 480 GB drives

·     RAID 10: 4 × 960 GB or 8 × 480 GB drives

HDD

A minimum total capacity of 1920 GB after RAID setup. Recommended:

·     RAID 5: 3 × 1200 GB or 5 × 600 GB drives

·     RAID 10: 4 × 1200 GB or 8 × 600 GB drives

ETCD drive

SSD

A minimum total capacity of 50 GB after RAID setup. Recommended:

RAID 1: 2 × 480GB drives

HDD

A minimum total capacity of 50 GB after RAID setup. Recommended:

RAID 1: 2 × 600GB drives

Data drive

A minimum total capacity of 8 TB after RAID setup. Recommended: RAID 5 with 3 or more drives of the same model

A minimum total capacity of 12 TB after RAID setup. Recommended: RAID 5 with 5 or more drives of the same model

A minimum total capacity of 24 TB after RAID setup. Recommended: RAID 5 with 7 or more drives of the same model

 
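The recommended drive layouts in Table 77 can be sanity-checked by computing the approximate usable capacity of each RAID level. The sketch below is a rough planning aid that ignores formatting overhead; it is not part of any installation tooling.

```python
def usable_capacity_gb(raid_level: int, drive_count: int, drive_size_gb: int) -> int:
    """Approximate usable capacity after RAID setup (formatting overhead ignored)."""
    if raid_level == 1:
        return drive_size_gb                       # mirrored pair
    if raid_level == 5:
        return (drive_count - 1) * drive_size_gb   # one drive's worth of parity
    if raid_level == 10:
        return (drive_count // 2) * drive_size_gb  # striped mirrors
    raise ValueError("Only RAID 1, 5, and 10 are used in this guide")

# Recommended system-drive layouts from Table 77 (minimum 1920 GB after RAID):
print(usable_capacity_gb(5, 3, 960))   # 1920
print(usable_capacity_gb(5, 5, 480))   # 1920
print(usable_capacity_gb(10, 4, 960))  # 1920
```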

Standalone deployment of controller

Deployment of controller in cluster mode (x86-64: Intel64/AMD64)

Table 78 Deployment of controller in cluster mode (x86-64: Intel64/AMD64)  (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller node

3

CPU: 16 cores, 2.0 GHz

Memory: 128 GB.

·     To deploy optional application packages, add hardware resources according to Table 93.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

·     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required on each server deployed with the virtualized DTN host.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

¡     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

¡     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, two more 10Gbps network ports are required.

Device quantity:

·     300 (E63xx)

·     100 (E62xx and earlier versions)

Server quantity:

·     6000 (E63xx)

·     2000 (E62xx and earlier versions)

(Deployed with virtualized DTN host) simulated devices:

·     8 CPU cores, 64GB memory: Manage up to 30 simulated devices.

·     16 CPU cores, 128GB memory: Manage up to 60 simulated devices.

Standard configuration

Controller node

3

CPU: 20 cores, 2.2 GHz

Memory: 256 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 93.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

·     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required on each server deployed with the virtualized DTN host.

Device quantity:

·     1000 (E63xx)

·     300 (E62xx and earlier versions)

Server quantity:

·     20000 (E63xx)

·     6000 (E62xx and earlier versions)

(Deployed with virtualized DTN host) simulated devices:

·     8 CPU cores, 64GB memory: Manage up to 30 simulated devices.

·     16 CPU cores, 128GB memory: Manage up to 60 simulated devices.

High-spec configuration

Standalone deployment of the DTN component

1

CPU: 16 cores, 2.0 GHz

Memory: 128 GB.

Note:

To deploy the virtualized DTN host and the DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports.

¡     To deploy the DTN component and the virtualized DTN host in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

¡     To deploy the DTN component and the virtualized DTN host in a converged manner, two more 10Gbps network ports are required.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Single-node deployment of controller (x86-64: Intel64/AMD64)

Table 79 Single-node deployment of controller (x86-64: Intel64/AMD64)  (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller node

1

CPU: 16 cores, 2.0 GHz

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 93.

·     To use the simulation feature, another 100 GB or more memory resources are required.

·     To deploy the virtualized DTN host, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy DTN, one more network port is required.

¡     To deploy the virtualized DTN host, one more network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy DTN, two more network ports are required.

¡     To deploy the virtualized DTN host, two more network ports are required.

Device quantity: 36

Server quantity: 600

(Deployed with virtualized DTN host) simulated devices:

·     8 CPU cores, 64GB memory: Manage up to 30 simulated devices.

·     16 CPU cores, 128GB memory: Manage up to 60 simulated devices.

--

 


IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     For versions earlier than E6205, you cannot scale up the controller from single-node mode to cluster mode. For E6205 and later versions, you can scale up the controller from single-node mode to cluster mode.

·     In single-node mode, hybrid overlay, security groups, QoS, and interconnection with cloud platforms are not supported.

 

Hardware requirements for standalone deployment of analyzer

The hardware requirements for analyzer deployment vary by analyzer version. This document describes only the hardware requirements for SeerAnalyzer E62xx and later. For the hardware requirements of earlier SeerAnalyzer versions, see "Hardware requirements for SeerAnalyzer history versions".

Deployment of analyzer in cluster mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer)

Table 80 Deployment of analyzer in cluster mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 192 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 12 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

100

E62xx and earlier versions:

2000 VMs, 4000 TCP flows/sec.

E63xx and later versions:

3000 VMs, 6000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: 32 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 1.92 TB after RAID setup

·     Data drive: 24 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

E62xx and earlier versions:

5000 VMs, 10000 TCP flows/sec.

E63xx and later versions:

6000 VMs, 12000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Single-host deployment of analyzer (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer)

Table 81 Single-host deployment of analyzer (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Hardware requirements for SeerCollector deployment (x86-64: Intel64/AMD64) (collectors required in DC scenarios)

To use the flow analysis features, you must deploy collectors.

Table 82 Hardware requirements for SeerCollector deployment (x86-64: Intel64/AMD64) (collectors required in DC scenarios)

Item

Recommended configuration

Collector node

·     CPU: 20 cores, 2.0 GHz.

·     Memory: 128 GB.

·     System drive: 2 × 600 GB SSDs or SAS HDDs in RAID 1.

·     Network port: 1 × 10Gbps collector network port, 1 × 10Gbps management network port

¡     The collecting network port must support DPDK and cannot operate in bonding mode. The management network port can operate in bonding mode.

¡     For x86 servers, it is recommended to use an Intel 82599 network card to provide the collecting network port. Plan in advance which network card will be used for data collection, record the network card information (name and MAC), and configure IP address settings for the network card. After the configuration is deployed, the network card will be managed by DPDK, and you cannot view the network card through Linux kernel commands.

¡     The Mellanox4 [ConnectX-3] network card can be used to provide the collecting network port. As a best practice, use the Mellanox Technologies MT27710 Family or Mellanox Technologies MT27700. If the Mellanox4 [ConnectX-3] network card is used as the collecting network card, the management network must use other network card models. Only ARM versions support Mellanox network ports.

¡     Do not configure DPDK settings on a management network port.

 

 

NOTE:

·     CPU models supported by the analyzer vary by analyzer version. For more information, see the related documentation.

·     A server deployed with the collector requires a network port for the collecting service and a network port for the management service. The collecting network port receives mirrored packets sent by network devices, and the management network port exchanges data with the analyzer.

 

Table 83 Supported SeerCollector collecting network cards

Vendor

Chip

Model

Series

Compatible versions

Intel

JL82599

H3C UIS CNA 1322 FB2-RS3NXP2D-2-port 10Gbps optical port network card (SFP+)

CNA-10GE-2P-560F-B2

All versions

JL82599

H3C UIS CNA 1322 FB2-RS3NXP2DBY-2-port 10Gbps optical port network card (SFP+)

CNA-10GE-2P-560F-B2

All versions

X550

H3C UNIC CNA 560T B2-RS33NXT2A-2-port 10Gbps copper port network card-1*2

 

E6107 and later

X540

UN-NIC-X540-T2-T-10Gb-2P (copper port network card)

 

E6107 and later

X520

UN-NIC-X520DA2-F-B-10Gb-2P

 

E6107 and later

Mellanox

MT27710 Family [ConnectX-4 Lx]

NIC-ETH540F-LP-2P

Mellanox Technologies MT27710 Family

E6107 and later

MT27712A0-FDCF-AE[ConnectX-4Lx]

NIC-620F-B2-25Gb-2P

 

E6305 and later

 

Hardware requirements for quorum node deployment

Table 84 Hardware requirements for quorum node deployment

Node name

Node quantity

Minimum single-node requirements

Quorum node

1

CPU: 2 cores, 2.0 GHz

Memory: 16 GB.

Drives: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup and a minimum IOPS of 5000.

¡     ETCD drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd

·     Drive configuration option 2:

¡     System drive: 2 × 600 GB HDDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup, a minimum RPM of 7.2K, and a minimum IOPS of 5000.

¡     ETCD drive: 2 × 600 GB HDDs in RAID 1, with a minimum total capacity of 50 GB after RAID setup and a minimum RPM of 7.2K. Installation path: /var/lib/etcd

¡     RAID controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present

Network ports: 1 × 10Gbps network port

 

Hardware requirements for DTN physical server deployment

Table 85 Hardware requirements for DTN physical server deployment (x86-64: Intel64/AMD64)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

DTN physical server

Number of servers = number of simulated devices/number of simulated devices that can be managed by each server

CPU: x86-64 (Intel64/AMD64) 16-core 2.0 GHz CPUs that support VT-x/VT-d

Memory: 128 GB or above

Drive:

·     System drive: A total capacity of 600 GB or above after RAID setup.

Network ports:

·     Non-bonding mode: 3 × 10Gbps network ports.

·     Bonding mode: 6 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

Number of simulated devices that can be managed by each server (select one):

·     60 × 2GB simulated devices

·     30 × 4GB simulated devices

Standard configuration

You need to deploy DTN physical hosts only when the simulation feature is used.

DTN physical server

Number of servers = number of simulated devices/number of simulated devices that can be managed by each server

CPU: x86-64 (Intel64/AMD64) 20-core 2.2 GHz CPUs that support VT-x/VT-d

Memory: 256 GB or above

Number of simulated devices that can be managed by each server (select one):

·     160 × 2GB simulated devices

·     80 × 4GB simulated devices

High-spec configuration

You need to deploy DTN physical hosts only when the simulation feature is used.

 
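Table 85 sizes the DTN server pool as the number of simulated devices divided by the per-server capacity. The sketch below illustrates that calculation, rounding up to whole servers; the per-server capacities used are the standard-configuration figures from Table 85.

```python
import math

def dtn_servers_needed(simulated_devices: int, devices_per_server: int) -> int:
    """Number of DTN physical servers = simulated devices / per-server capacity,
    rounded up to a whole server."""
    return math.ceil(simulated_devices / devices_per_server)

# Standard configuration from Table 85: 60 x 2 GB or 30 x 4 GB simulated devices per server.
print(dtn_servers_needed(100, devices_per_server=60))  # 2 servers for 2 GB simulated devices
print(dtn_servers_needed(100, devices_per_server=30))  # 4 servers for 4 GB simulated devices
```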

Hardware requirements for converged deployment of the controller and analyzer

 

NOTE:

Converged deployment of the controller and analyzer is supported in AD-DC 6.2 and later.

 

Converged deployment of the controller and analyzer supports the following modes:

·     Converged deployment of the controller and analyzer in single-node mode. For the hardware requirements, see Table 87.

·     Converged deployment of the controller and analyzer in a three-node cluster mode. In this mode, deploy the controller and analyzer on three master nodes. For hardware requirements, see Table 86. This mode is available in E63xx and later.

·     Converged deployment of the controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. For hardware requirements, see the hardware requirements for standalone controller deployment and standalone analyzer deployment.

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller.

Converged deployment of the controller and analyzer in three-node cluster mode (x86-64: Intel64/AMD64)

Table 86 Converged deployment of the controller and analyzer in three-node cluster mode (x86-64: Intel64/AMD64) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

3

CPU: 36 cores, 2.1 GHz

Memory: 256 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 93.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 12 TB after RAID setup

Controller network ports

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer: 2000 VMs, 4000 TCP flows/sec.

Standard configuration

Controller and analyzer

3

CPU: 51 cores, 2.2 GHz

Memory: 456 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 93.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 24 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer: 5000 VMs, 10000 TCP flows/sec.

High-spec configuration

 

Converged deployment of the controller and analyzer in single-node mode (x86-64: Intel64/AMD64)

Table 87 Converged deployment of the controller and analyzer in single-node mode (x86-64: Intel64/AMD64) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

1

CPU: 35 cores, 2.1 GHz

Memory: 298 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 93.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 8 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy DTN, one more 10Gbps port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy DTN, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

Device quantity: 36

Server quantity: 600

Analyzer:

1000 VMs, 2000 TCP flows/sec

 

 


IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     In single-node deployment mode, the controller does not support hybrid overlay, security groups, QoS, or interconnection with cloud platforms.

·     If the controller and analyzer are deployed in a converged manner on a single node, you can scale up the system to a cluster only when the controller version is E6205 or later and the analyzer version is E6310 or later (non-CK version).

 

Hardware requirements for Super Controller deployment

 

NOTE:

Super Controller deployment is supported in AD-DC 6.2 and later.

 

Hardware requirements for Super Controller deployment in cluster mode (x86-64: Intel64/AMD64)

Table 88 Hardware requirements for Super Controller deployment in cluster mode (x86-64: Intel64/AMD64)

Node name

Node quantity

CPU

Memory

Drive

NIC

Super controller node

3

16 cores, 2.0 GHz

128 GB or more

System drive: 1920 GB (after RAID setup)

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Remarks:

·     To deploy general_PLAT_kernel-region and general_PLAT_network, add hardware resources according to Table 93.

·     For E6301 and later versions, the ETCD partition can share a physical drive with other partitions.

 

Hardware requirements for Super Controller deployment in single-node mode (x86-64: Intel64/AMD64)

Table 89 Hardware requirements for Super Controller deployment in single-node mode (x86-64: Intel64/AMD64)

Node name

Node quantity

CPU

Memory

Drive

NIC

Super controller node

1

16 cores, 2.0 GHz

128 GB or more

System drive: 1920 GB (after RAID setup)

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Remarks:

·     To deploy general_PLAT_kernel-region and general_PLAT_network, add hardware resources according to Table 93.

·     For E6301 and later versions, the ETCD partition can share a physical drive with other partitions.

 

Hardware requirements for Super Controller deployment in a converged manner

 

NOTE:

Super Controller deployment is supported in AD-DC 6.2 and later.

 

Super Controller can be deployed with SeerEngine-DC or SeerAnalyzer in a converged manner in single-node or cluster mode. In this scenario, add additional hardware resources for Super Controller according to Table 90 and Table 91.

Additional hardware resources required for Super Controller deployment in cluster mode

Table 90 Additional hardware resources required for Super Controller deployment in cluster mode

Node name

Node quantity

CPU

Memory

Drive

Super controller node

3

Eight x86-64 (Intel64/AMD64) CPU cores

64 GB or above

100 GB or above system drive capacity after RAID setup

 

Additional hardware resources required for Super Controller deployment in single-node mode

Table 91 Additional hardware resources required for Super Controller deployment in single-node mode (x86-64: Intel64/AMD64)

Node name

Node quantity

CPU

Memory

Drive

Super controller node

1

Eight x86-64 (Intel64/AMD64) CPU cores

64 GB or above

100 GB or above system drive capacity after RAID setup

 

Hardware requirements for Super Analyzer deployment

 

NOTE:

Super Analyzer deployment is supported in AD-DC 6.3 and later.

 

Table 92 Hardware requirements (x86-64: Intel64/AMD64)

Node configuration

Maximum number of analyzers

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: x86-64 (Intel64/AMD64), 20 physical cores in total, 2.0 GHz or above

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup. Use SSDs or SAS/SATA HDDs.

·     Data drive: 8 TB after RAID setup. 3 or more SSDs or SATA/SAS HDDs of the same model are required.

·     ETCD drive: 50 GB after RAID setup. Use SSDs. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps network port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

10

Analyzer node

3

·     CPU: x86-64 (Intel64/AMD64), 20 physical cores in total, 2.0 GHz or above

·     Memory: 192 GB.

·     System drive: 1.92 TB after RAID setup. Use SSDs or SAS/SATA HDDs.

·     Data drive: 8 TB after RAID setup. 3 or more SSDs or SATA/SAS HDDs of the same model are required.

·     ETCD drive: 50 GB after RAID setup. Use SSDs. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps network port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

50

 

 

NOTE:

·     The hardware configuration required by the Super Analyzer depends on the analyzed data type. The Super Analyzer supports NetStream data and full link data.

·     The Super Analyzer supports standalone deployment and converged deployment with the analyzer.

 

Hardware requirements for optional application packages

To deploy optional application packages, reserve additional hardware resources as required in Table 93.

Table 93 Hardware requirements for optional application packages

Component

Feature description

CPU (cores)

Memory (GB)

general_PLAT_network

Provides the basic management functions (including network resources, network performance, network topology, and iCC)

2.5

16

general_PLAT_kernel-region

Provides the hierarchical management function

0.5

6

Syslog

Provides the syslog feature.

1.5

8

general_PLAT_netconf

Provides the NETCONF invalidity check and NETCONF channel features.

3

10

webdm

Provides device panels.

2

4

SeerEngine_DC_DTN_VIRTUALIZATION_HOST-version.zip

Virtualized DTN host, which is deployed with the controller/DTN component in a converged manner. A virtualized DTN host can manage up to 30 simulated devices.

8

64

Virtualized DTN host, which is deployed with the controller/DTN component in a converged manner. A virtualized DTN host can manage up to 60 simulated devices.

16

128

 
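When optional application packages are added to a node, the reservations in Table 93 are added on top of the base node requirements. The sketch below illustrates that addition for a hypothetical package selection; the per-package figures are the CPU and memory values from Table 93.

```python
# Additional CPU/memory reservations per optional package (from Table 93).
OPTIONAL_PACKAGES = {
    "general_PLAT_network":       (2.5, 16),
    "general_PLAT_kernel-region": (0.5, 6),
    "Syslog":                     (1.5, 8),
    "general_PLAT_netconf":       (3.0, 10),
    "webdm":                      (2.0, 4),
}

def extra_resources(selected: list[str]) -> tuple[float, int]:
    """Total extra (CPU cores, memory GB) to reserve for the selected packages."""
    cpu = sum(OPTIONAL_PACKAGES[name][0] for name in selected)
    mem = sum(OPTIONAL_PACKAGES[name][1] for name in selected)
    return cpu, mem

# Hypothetical selection, for illustration only.
cpu, mem = extra_resources(["general_PLAT_network", "Syslog", "webdm"])
print(f"Reserve an extra {cpu} CPU cores and {mem} GB of memory per node")  # 6.0 cores, 28 GB
```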

Deployment on physical servers (domestic servers)


IMPORTANT:

For deployment on domestic servers, purchase domestic commercial operating systems as needed.

 

General drive requirements

This section introduces the general drive configuration, including the recommended RAID mode, drive type, and performance requirements. For specific drive capacity requirements, see the configuration requirements of the corresponding scenarios.

Table 94 General requirements

Item

Requirements

General drive requirements:

·     The drives must be configured in RAID 1, 5, or 10.

·     Use SSDs or 7.2K RPM or above HDDs, with a minimum IOPS of 5000.

·     For HDDs, the RAID controller must have a 1 GB cache, support the powerfail safeguard module, and have a supercapacitor installed.

System drive

SSD

A minimum total capacity of 1920 GB after RAID setup. Recommended:

·     RAID 5: 3 × 960 GB or 5 × 480 GB drives

·     RAID 10: 4 × 960 GB or 8 × 480 GB drives

HDD

A minimum total capacity of 1920 GB after RAID setup. Recommended:

·     RAID 5: 3 × 1200 GB or 5 × 600 GB drives

·     RAID 10: 4 × 1200 GB or 8 × 600 GB drives

ETCD drive

SSD

A minimum total capacity of 50 GB after RAID setup. Recommended:

RAID 1: 2 × 480GB drives

HDD

A minimum total capacity of 50 GB after RAID setup. Recommended:

RAID 1: 2 × 600GB drives

Data drive

A minimum total capacity of 8 TB after RAID setup. Recommended: RAID 5, 3 or more drives of the same model

A minimum total capacity of 12 TB after RAID setup. Recommended: RAID 5, 5 or more drives of the same model

A minimum total capacity of 24 TB after RAID setup. Recommended: RAID 5, 7 or more drives of the same model

 

Standalone deployment of controller

Deployment of controller in cluster mode (x86-64 Hygon servers)

Table 95 Deployment of controller in cluster mode (x86-64 Hygon servers)  (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller node

3

CPU: 32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

·     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required on each server deployed with the virtualized DTN host.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

¡     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

¡     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, two more 10Gbps network ports are required.

Device quantity:

·     300 (E63xx)

·     100 (E62xx and earlier versions)

Server quantity:

·     6000 (E63xx)

·     2000 (E62xx and earlier versions)

(Deployed with virtualized DTN host) simulated devices:

·     8 CPU cores, 64GB memory: Manage up to 30 simulated devices.

·     16 CPU cores, 128GB memory: Manage up to 60 simulated devices.

Standard configuration

Controller node

3

CPU: 32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

Memory: 256 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

·     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required on each server deployed with the virtualized DTN host.

Device quantity:

·     1000 (E63xx)

·     300 (E62xx and earlier versions)

Server quantity:

·     20000 (E63xx)

·     6000 (E62xx and earlier versions)

(Deployed with virtualized DTN host) simulated devices:

·     8 CPU cores, 64GB memory: Manage up to 30 simulated devices.

·     16 CPU cores, 128GB memory: Manage up to 60 simulated devices.

High-spec configuration

Standalone deployment of the DTN component

1

CPU: 32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

Memory: 128 GB.

Note:

To deploy the virtualized DTN host and the DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required.

Drive:

·     System drive: 1920 GB or above after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports.

¡     To deploy the DTN component and the virtualized DTN host in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

¡     To deploy the DTN component and the virtualized DTN host in a converged manner, two more 10Gbps network ports are required.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Deployment of controller in standalone mode (x86-64 Hygon servers)

Table 96 Deployment of controller in standalone mode (x86-64 Hygon servers) (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller node

1

CPU: 32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

·     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, another 8 CPU cores and 64 GB memory/16 CPU cores and 128 GB memory resources are required on each server deployed with the virtualized DTN host.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

¡     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

¡     To deploy the virtualized DTN host and the controller/DTN component in a converged manner, two more 10Gbps network ports are required.

Device quantity: 36

Server quantity: 600

(Deployed with virtualized DTN host) simulated devices:

·     8 CPU cores, 64GB memory: Manage up to 30 simulated devices.

·     16 CPU cores, 128GB memory: Manage up to 60 simulated devices.

 

 


IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     For versions earlier than E6205, you cannot scale up the controller from single-node mode to cluster mode. For E6205 and later versions, you can scale up the controller from single-node mode to cluster mode.

·     In single-node mode, hybrid overlay, security groups, QoS, and interconnection with cloud platforms are not supported.

 

Deployment of controller in cluster mode (ARM Kunpeng servers)

 

NOTE:

AD-DC 5.3 and later versions support deployment on ARM Kunpeng servers.

 

Table 97 Deployment of controller in cluster mode (ARM Kunpeng servers)  (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

Minimum single-node requirements

Controller node

3

CPU: 48 cores, 2 × Kunpeng 920 24-core 2.6 GHz CPUs

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Device quantity:

·     300 (E63xx)

·     100 (E62xx and earlier versions)

Server quantity:

·     6000 (E63xx)

·     2000 (E62xx and earlier versions)

Standard configuration

Controller node

3

CPU: 48 cores, 2 × Kunpeng 920 24-core 2.6 GHz CPUs

Memory: 256 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Device quantity:

·     1000 (E63xx)

·     300 (E62xx and earlier versions)

Server quantity:

·     20000 (E63xx)

·     6000 (E62xx and earlier versions)

High-spec configuration

Standalone deployment of the DTN component

1

CPU: 48 cores, 2 × Kunpeng 920 24-core 2.6 GHz CPUs

Memory: 128 GB.

Drive:

·     System drive: 1920 GB or above after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports

·     Bonding mode: 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Deployment of controller in standalone mode (ARM Kunpeng servers)

Table 98 Deployment of controller in standalone mode (ARM Kunpeng servers) (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

Minimum single-node requirements

Controller node

1

CPU: 48 cores, 2 × Kunpeng 920 24-core 2.6 GHz CPUs

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Device quantity: 36

Server quantity: 600

 

 


IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     For versions earlier than E6205, you cannot scale up the controller from single-node mode to cluster mode. For E6205 and later versions, you can scale up the controller from single-node mode to cluster mode.

·     In single-node mode, hybrid overlay, security groups, QoS, and interconnection with cloud platforms are not supported.

 

Deployment of controller in cluster mode (ARM Phytium servers)

 

NOTE:

AD-DC 6.0 and later versions support deployment on ARM Phytium servers.

 

Table 99 Deployment of controller in cluster mode (ARM Phytium servers)  (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

Minimum single-node requirements

Controller node

3

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Device quantity:

·     300 (E63xx)

·     100 (E62xx and earlier versions)

Server quantity:

·     6000 (E63xx)

·     2000 (E62xx and earlier versions)

Standard configuration

Controller node

3

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 256 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Device quantity:

·     1000 (E63xx)

·     300 (E62xx and earlier versions)

Server quantity:

·     20000 (E63xx)

·     6000 (E62xx and earlier versions)

High-spec configuration

Standalone deployment of the DTN component

1

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 128 GB.

Drive:

·     System drive: 1920 GB or above after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode: 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Deployment of controller in single-node mode (ARM Phytium servers)

 

NOTE:

AD-DC 6.0 and later versions support deployment on ARM Phytium servers.

 

Table 100 Deployment of controller in standalone mode (ARM Phytium servers) (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

Minimum single-node requirements

Controller node

1

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 128 GB.

Note:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Device quantity: 36

Server quantity: 600

 

 

Hardware requirements for standalone deployment of analyzer

The hardware requirements for analyzer deployment vary by analyzer version. This document describes only the hardware requirements for SeerAnalyzer E62xx and later. For the hardware requirements of earlier SeerAnalyzer versions, see "Hardware requirements for SeerAnalyzer history versions".

Deployment of analyzer in cluster mode (x86-64 Hygon servers) (Unified Platform + SeerAnalyzer)

Table 101 Deployment of analyzer in cluster mode (x86-64 Hygon servers) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 48 physical cores in total, 2.2 GHz. Recommended: 2 × Hygon G5 7360 24-core, 2.2 GHz CPUs

·     Memory: 256 GB

·     System drive: 1.92 TB after RAID setup

·     Data drive: 24 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

5000 VMs, 10000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Deployment of analyzer in single-node mode (x86-64 Hygon servers) (Unified Platform + SeerAnalyzer)

Table 102 Deployment of analyzer in single-node mode (x86-64 Hygon servers) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 48 physical cores in total, 2.2 GHz. Recommended: 2 × Hygon G5 7360 24-core, 2.2 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Deployment of analyzer in cluster mode (ARM Kunpeng servers) (Unified Platform + SeerAnalyzer)

Table 103 Deployment of analyzer in cluster mode (ARM Kunpeng servers) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 64 physical cores in total, 2.6 GHz. Recommended: 2 × Kunpeng 920 5232 32-core, 2.6 GHz CPUs

·     Memory: 256 GB

·     System drive: 1.92 TB after RAID setup

·     Data drive: 24 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

5000 VMs, 10000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Deployment of analyzer in single-node mode (ARM Kunpeng servers) (Unified Platform + SeerAnalyzer)

Table 104 Deployment of analyzer in single-node mode (ARM Kunpeng servers) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 64 physical cores in total, 2.6 GHz. Recommended: 2 × Kunpeng 920 5232 32-core, 2.6 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB after RAID setup

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Deployment of analyzer in cluster mode (ARM Phytium servers) (Unified Platform + SeerAnalyzer)

Table 105 Deployment of analyzer in cluster mode (ARM Phytium servers) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 128 physical cores in total, 2.1 GHz. Recommended: 2 × Phytium S2500 64-core, 2.1 GHz CPUs

·     Memory: 256 GB

·     System drive: 1.92 TB after RAID setup

·     Data drive: 12 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

100

2000 VMs, 4000 TCP flows/sec.

2 TCP flows for each VM per second.

Analyzer node

3

·     CPU: 128 physical cores in total, 2.1 GHz. Recommended: 2 × Phytium S2500 64-core, 2.1 GHz CPUs

·     Memory: 256 GB

·     System drive: 1.92 TB after RAID setup

·     Data drive: 24 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB after RAID setup

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

5000 VMs, 10000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Deployment of analyzer in single-node mode (ARM Phytium servers) (Unified Platform + SeerAnalyzer)

Table 106 Deployment of analyzer in single-node mode (ARM Phytium servers) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 128 physical cores in total, 2.1GHz. Recommended: 2 × Phytium S2500 64-core, 2.1 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB after RAID setup

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

 

 

NOTE:

You can calculate the overall TCP flow size based on the total number of VMs in the data center, and calculate the required hardware configuration on the basis of 2 TCP flows per second for each VM.
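
The following minimal sketch (illustrative only, not part of any product software) applies this rule; the capacity tiers are example VM and flow limits taken from the analyzer tables above and vary by server type, so substitute the figures from the table that matches your hardware.

```python
# Illustrative sketch only: apply the sizing rule of 2 TCP flows per VM per
# second and compare the result against the capacity tiers listed in the
# analyzer tables above (substitute the figures for your server type).
FLOWS_PER_VM_PER_SEC = 2

# (max VMs, max TCP flows/sec, label) -- example values from the tables above.
CAPACITY_TIERS = [
    (1000, 2000, "single-node analyzer"),
    (2000, 4000, "three-node cluster, 12 TB data drives"),
    (5000, 10000, "three-node cluster, 24 TB data drives"),
]

def required_flows_per_sec(total_vms: int) -> int:
    """Overall TCP flow rate generated by all VMs in the data center."""
    return total_vms * FLOWS_PER_VM_PER_SEC

def smallest_sufficient_tier(total_vms: int):
    """Return the first tier whose VM and flow limits cover the calculated load."""
    flows = required_flows_per_sec(total_vms)
    for max_vms, max_flows, label in CAPACITY_TIERS:
        if total_vms <= max_vms and flows <= max_flows:
            return label, flows
    return None, flows

if __name__ == "__main__":
    tier, flows = smallest_sufficient_tier(1800)
    print(f"1800 VMs -> {flows} TCP flows/sec -> {tier}")
    # 1800 VMs -> 3600 TCP flows/sec -> three-node cluster, 12 TB data drives
```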

 

Hardware requirements for SeerCollector deployment (x86-64 Hygon servers) (collectors required in DC scenarios)

To use the flow analysis features, you must deploy collectors.

Table 107 Hardware requirements for SeerCollector deployment (x86-64 Hygon servers) (collectors required in DC scenarios)

Item

Recommended configuration

Collector node

·     CPU: 48 physical cores in total, 2.2GHz. Recommended: 2 × Hygon G5 7360 24-core, 2.2 GHz CPUs

·     Memory: 128 GB.

·     System drive: 2 × 600 GB SSDs or SAS HDDs in RAID 1.

·     Network port: 1 × 10Gbps collector network port, 1 × 10Gbps management network port

¡     The collecting network port must support DPDK and cannot operate in bonding mode. The management network port can operate in bonding mode.

¡     For x86 servers, an Intel 82599 network card is recommended for the collecting network port. Plan in advance which network card will be used for data collection, record the network card information (name and MAC), and configure IP address settings for it (see the interface inventory sketch after this list). After the configuration is deployed, the network card is managed by DPDK and can no longer be viewed through Linux kernel commands.

¡     The Mellanox4 [ConnectX-3] network card can be used to provide the collecting network port. As a best practice, use the Mellanox Technologies MT27710 Family or Mellanox Technologies MT27700. If the Mellanox4 [ConnectX-3] network card is used as the collecting network card, the management network must use other network card models. Only ARM versions support Mellanox network ports.

¡     Do not configure DPDK settings on a management network port.
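
Because the collecting network card is taken over by DPDK after deployment and is no longer visible to Linux kernel tools, record its name and MAC address beforehand. The following is a minimal sketch (an illustration, not part of the SeerCollector software) that lists interface names and MAC addresses from /sys/class/net on a Linux server so you can note down the planned collecting NIC.

```python
# Minimal sketch (not part of the SeerCollector software): list Linux network
# interfaces with their MAC addresses so that the name and MAC of the planned
# collecting NIC can be recorded before DPDK takes ownership of it and hides
# it from kernel tools.
import os

SYSFS_NET = "/sys/class/net"

def list_interfaces():
    """Return (interface name, MAC address) pairs read from sysfs."""
    nics = []
    for name in sorted(os.listdir(SYSFS_NET)):
        try:
            with open(os.path.join(SYSFS_NET, name, "address")) as f:
                mac = f.read().strip()
        except OSError:
            mac = "unknown"
        nics.append((name, mac))
    return nics

if __name__ == "__main__":
    for name, mac in list_interfaces():
        print(f"{name}\t{mac}")
```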

 

Hardware requirements for SeerCollector deployment (ARM Kunpeng servers) (collectors required in DC scenarios)

To use the flow analysis features, you must deploy collectors.

Table 108 Hardware requirements for SeerCollector deployment (ARM Kunpeng servers) (collectors required in DC scenarios)

Item

Recommended configuration

Collector node

·     CPU: 64 physical cores in total, 2.6GHz. Recommended: 2 × Kunpeng 920 5232 32-core, 2.6 GHz CPUs

·     Memory: 128 GB.

·     System drive: 2 × 600 GB SSDs or SAS HDDs in RAID 1.

·     Network port: 1 × 10Gbps collector network port, 1 × 10Gbps management network port

¡     The collecting network port must support DPDK and cannot operate in bonding mode. The management network port can operate in bonding mode.

¡     The Mellanox4 [ConnectX-3] network card can be used to provide the collecting network port. As a best practice, use the Mellanox Technologies MT27710 Family or Mellanox Technologies MT27700. If the Mellanox4 [ConnectX-3] network card is used as the collecting network card, the management network must use other network card models. Only ARM versions support Mellanox network ports.

¡     Do not configure DPDK settings on a management network port.

 

Hardware requirements for SeerCollector deployment (ARM Phytium servers) (collectors required in DC scenarios)

To use the flow analysis features, you must deploy collectors.

Table 109 Hardware requirements for SeerCollector deployment (ARM Phytium servers) (collectors required in DC scenarios)

Item

Recommended configuration

Collector node

·     CPU: 128 physical cores in total, 2.1GHz. Recommended: 2 × Phytium S2500 64-core, 2.1 GHz CPUs

·     Memory: 128 GB.

·     System drive: 2 × 600 GB SSDs or SAS HDDs in RAID 1.

·     Network port: 1 × 10Gbps collector network port, 1 × 10Gbps management network port

¡     The collecting network port must support DPDK and cannot operate in bonding mode. The management network port can operate in bonding mode.

¡     The Mellanox4 [ConnectX-3] network card can be used to provide the collecting network port. As a best practice, use the Mellanox Technologies MT27710 Family or Mellanox Technologies MT27700. If the Mellanox4 [ConnectX-3] network card is used as the collecting network card, the management network must use other network card models. Only ARM versions support Mellanox network ports.

¡     Do not configure DPDK settings on a management network port.

 

 

NOTE:

·     CPU models supported by the analyzer vary by analyzer version. For more information, see the related documentation.

·     A server deployed with the collector requires a network port for the collecting service and a network port for the management service. The collecting network port receives mirrored packets sent by network devices, and the management network port exchanges data with the analyzer.

 

Table 110 Supported SeerCollector collecting network cards

Vendor

Chip

Model

Series

Compatible versions

Intel

JL82599

H3C UIS CNA 1322 FB2-RS3NXP2D-2-port 10Gbps optical port network card (SFP+)

CNA-10GE-2P-560F-B2

All versions

JL82599

H3C UIS CNA 1322 FB2-RS3NXP2DBY-2-port 10Gbps optical port network card (SFP+)

CNA-10GE-2P-560F-B2

All versions

X550

H3C UNIC CNA 560T B2-RS33NXT2A-2-port 10Gbps copper port network card-1*2

 

E6107 and later

X540

UN-NIC-X540-T2-T-10Gb-2P (copper port network card)

 

E6107 and later

X520

UN-NIC-X520DA2-F-B-10Gb-2P

 

E6107 and later

Mellanox

MT27710 Family [ConnectX-4 Lx]

NIC-ETH540F-LP-2P

Mellanox Technologies MT27710 Family

E6107 and later

MT27712A0-FDCF-AE[ConnectX-4Lx]

NIC-620F-B2-25Gb-2P

 

E6305 and later

 

Table 111 SeerCollector system drive partitioning

RAID plan

Partition name

Mount point

Minimum capacity

Remarks

2 × 600GB drives in RAID1

/dev/sda1

/boot/efi

200MB

EFI System Partition. Only the UEFI mode requires this partition.

/dev/sda2

/boot

1024MB

-

/dev/sda3

/

590GB

 

/dev/sda4

swap

4GB

Swap partition

 

 

NOTE:

·     SeerCollector does not require data drives.

·     When the system drive size is larger than 1.5TB, SeerCollector supports automatic system drive partitioning. If the system drive size is not larger than 1.5TB, manually partition the system drive based on Table 111.
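
A trivial sketch of the partitioning rule in the note above; the 1.5 TB threshold and the manual layout come from Table 111, and the decimal convention (1 TB = 1000 GB) is an assumption for illustration.

```python
# Trivial sketch of the rule above: SeerCollector auto-partitions system drives
# larger than 1.5 TB; smaller drives must be partitioned manually per Table 111.
# Assumes decimal units (1 TB = 1000 GB).

# Manual layout from Table 111 (2 x 600 GB drives in RAID 1).
MANUAL_LAYOUT = [
    ("/dev/sda1", "/boot/efi", "200MB"),   # EFI System Partition, UEFI mode only
    ("/dev/sda2", "/boot", "1024MB"),
    ("/dev/sda3", "/", "590GB"),
    ("/dev/sda4", "swap", "4GB"),
]

def partitioning_plan(system_drive_gb: float):
    """Return the partitioning approach for a given system drive size."""
    if system_drive_gb > 1500:
        return "automatic partitioning supported"
    return MANUAL_LAYOUT

if __name__ == "__main__":
    print(partitioning_plan(600))    # manual layout from Table 111
    print(partitioning_plan(1920))   # automatic partitioning supported
```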

 

Hardware requirements for quorum node deployment

Table 112 Hardware requirements for quorum node deployment

Node name

Node quantity

Minimum single-node requirements

Quorum node

1

Optional CPU settings are as follows:

·     32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

·     48 cores, 2 × Kunpeng 920 24-core, 2.6 GHz CPUs

·     128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 16 GB.

Drives: The drives must be configured in RAID 1 or 10.

Drive configuration option 1:

·     System drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: 2 × 480 GB SSDs configured in RAID 1, with a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

Drive configuration option 2:

·     System drive: 2 × 600 GB 7.2K RPM or above HDDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: 2 × 600 GB 7.2K RPM or above HDDs in RAID, with a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

Network port: 1 × 10Gbps network port

 

Hardware requirements for DTN physical server deployment

Table 113 DTN physical server deployment (domestic servers)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

DTN physical server

Number of servers = number of simulated devices/number of simulated devices that can be managed by each server

Optional CPU settings are as follows:

·     32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

·     48 cores, 2 × Kunpeng 920 24-core, 2.6 GHz CPUs

·     128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 256 GB or above

Drive:

·     System drive: A total capacity of 600 GB or above after RAID setup.

Network ports:

·     Non-bonding mode: 3 × 10Gbps network ports.

·     Bonding mode: 6 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

Number of simulated devices that can be managed by each server (select one):

·     160 × 2GB simulated devices

·     80 × 4GB simulated devices

High-spec configuration

You need to deploy DTN physical hosts only when the simulation feature is used.

DTN physical server

Number of servers = number of simulated devices/number of simulated devices that can be managed by each server

Optional CPU settings are as follows:

·     32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

·     48 cores, 2 × Kunpeng 920 24-core, 2.6 GHz CPUs

·     128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 128 GB or above

Number of simulated devices that can be managed by each server (select one):

·     60 × 2GB simulated devices

·     30 × 4GB simulated devices

Standard configuration

You need to deploy DTN physical hosts only when the simulation feature is used.
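
The server-count formula in Table 113 is a simple division; the sketch below assumes the result is rounded up (a fractional result requires one more server) and uses the per-server capacities from the table.

```python
# Worked sketch of the formula in Table 113: number of DTN servers equals the
# number of simulated devices divided by the per-server capacity, rounded up
# (assumption: a fractional result requires one more server).
import math

# Per-server capacities from Table 113.
CAPACITY = {
    ("high-spec", "2GB"): 160,
    ("high-spec", "4GB"): 80,
    ("standard", "2GB"): 60,
    ("standard", "4GB"): 30,
}

def dtn_servers_needed(simulated_devices: int, config: str, device_memory: str) -> int:
    """Number of DTN physical servers required for the simulation."""
    return math.ceil(simulated_devices / CAPACITY[(config, device_memory)])

if __name__ == "__main__":
    # Example: 200 simulated 4GB devices on standard-configuration servers.
    print(dtn_servers_needed(200, "standard", "4GB"))  # -> 7
```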

 

Hardware requirements for converged deployment of the controller and analyzer

 

NOTE:

Converged deployment of the controller and analyzer is supported in AD-DC 6.2 and later.

 

Converged deployment of the controller and analyzer supports the following modes:

·     Converged deployment of the controller and analyzer in single-node mode.

·     Converged deployment of the controller and analyzer in three-node cluster mode. In this mode, the controller and analyzer are deployed in a converged manner on the three master nodes in the cluster. This mode is available in E63xx and later.

·     Converged deployment of the controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. For hardware requirements, see the hardware requirements for standalone controller deployment and standalone analyzer deployment.

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller.

Converged deployment of the controller and analyzer in three-node cluster mode (x86-64 Hygon servers)

Table 114 Converged deployment of the controller and analyzer in three-node cluster mode (x86-64 Hygon servers) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

3

CPU: 48 cores, 2 × Hygon G5 7360 24-core 2.2 GHz CPUs

Memory: 256 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 12 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer: 2000 VMs, 4000 TCP flows/sec.

Standard configuration

Controller and analyzer

3

CPU: 64 cores, 2 × Hygon G5 7380 32-core 2.2 GHz CPUs

Memory: 456 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 24 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer: 5000 VMs, 10000 TCP flows/sec.

High-spec configuration

Standalone deployment of the DTN component

1

CPU: 32 cores, 2 × Hygon G5 5380 16-core 2.5 GHz CPUs

Memory: 128 GB.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode: 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Converged deployment of the controller and analyzer in single-node mode (x86-64 Hygon servers)

Table 115 Converged deployment of the controller and analyzer in single-node mode (x86-64 Hygon servers) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

1

CPU: 64 cores, 2 × Hygon G5 7380 32-core 2.2 GHz CPUs

Memory: 298 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 8 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy DTN, one more 10Gbps port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy DTN, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

Device quantity: 36

Server quantity: 600

Analyzer

1000 VMs, 2000 TCP flows/sec

 

 

IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     In single-node deployment mode, the controller does not support hybrid overlay, security groups, QoS, or interconnection with cloud platforms.

·     If the controller and analyzer are deployed in a converged manner on a single node, you can scale up the system to a cluster only when the controller version is E6205 or later and the analyzer version is E6310 or later (non-CK version).

 

Converged deployment of the controller and analyzer in three-node cluster mode (ARM Kunpeng servers)

Table 116 Converged deployment of the controller and analyzer in three-node cluster mode (ARM Kunpeng servers) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

3

CPU: 96 cores, 2 × Kunpeng 920 48-core 2.6 GHz CPUs

Memory: 256 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 12 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer: 2000 VMs, 4000 TCP flows/sec.

Standard configuration

Controller and analyzer

3

CPU: 128 cores, 2 × Kunpeng 920 64-core 2.6 GHz CPUs

Memory: 456 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 24 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer: 5000 VMs, 10000 TCP flows/sec.

High-spec configuration

Standalone deployment of the DTN component

1

CPU: 48 cores, 2 × Kunpeng 920 24-core 2.6 GHz CPUs

Memory: 128 GB.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports

·     Bonding mode: 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Converged deployment of the controller and analyzer in single-node mode (ARM Kunpeng servers)

Table 117 Converged deployment of the controller and analyzer in single-node mode (ARM Kunpeng servers) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

1

CPU: 96 cores, 2 × Kunpeng 920 48-core 2.6 GHz CPUs

Memory: 298 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 8 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy DTN, one more 10Gbps port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy DTN, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

Device quantity: 36

Server quantity: 600

Analyzer:

1000 VMs, 2000 TCP flows/sec

 

 

IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     In single-node deployment mode, the controller does not support hybrid overlay, security groups, QoS, or interconnection with cloud platforms.

·     If the controller and analyzer are deployed in a converged manner on a single node, you can scale up the system to a cluster only when the controller version is E6205 or later and the analyzer version is E6310 or later (non-CK version).

 

Converged deployment of the controller and analyzer in three-node cluster mode (ARM Phytium servers)

Table 118 Converged deployment of the controller and analyzer in three-node cluster mode (ARM Phytium servers) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

3

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 256 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 12 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy vBGP, one more 10Gbps network port is required.

¡     To deploy a 3+3 disaster recovery cluster, one more 10Gbps network port is required.

¡     To deploy DTN and the controller in a converged manner, one more 10Gbps network port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy vBGP, two more 10Gbps network ports are required.

¡     To deploy a 3+3 disaster recovery cluster, two more 10Gbps network ports are required.

¡     To deploy DTN and the controller in a converged manner, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

300 devices (E63xx)

100 devices (E62xx and earlier versions)

6000 servers (E63xx)

2000 servers (E62xx and earlier versions)

Analyzer:

2000 VMs, 4000 TCP flows/sec.

Standard configuration

Standalone deployment of the DTN component

1

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 128 GB.

Drive:

·     System drive: 1920 GB or above after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

Network ports:

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode: 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

---

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

 

Converged deployment of the controller and analyzer in single-node mode (ARM Phytium servers)

Table 119 Converged deployment of the controller and analyzer in single-node mode (ARM Phytium servers) (Unified Platform + controller + analyzer)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller and analyzer

1

CPU: 128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

Memory: 298 GB.

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

Drive:

·     System drive: 1920 GB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     Data drive: 8 TB after RAID setup

Controller network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

¡     To deploy DTN, one more 10Gbps port is required.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

¡     To deploy DTN, two more 10Gbps network ports are required.

Analyzer network ports:

·     Non-bonding mode: 1 × 10Gbps network port.

·     Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Controller:

Device quantity: 36

Server quantity: 600

Analyzer:

1000 VMs, 2000 TCP flows/sec

 

 

IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     In single-node deployment mode, the controller does not support hybrid overlay, security groups, QoS, or interconnection with cloud platforms.

·     If the controller and analyzer are deployed in a converged manner on a single node, you can scale up the system to a cluster only when the controller version is E6205 or later and the analyzer version is E6310 or later (non-CK version).

 

Hardware requirements for Super Controller deployment

 

NOTE:

Super Controller deployment is supported in AD-DC 6.2 and later.

 

Super Controller deployment in cluster mode

Table 120 Super Controller deployment in cluster mode

Node name

Node quantity

CPU

Memory

Drive

NIC

Super controller node

3

Optional CPU settings are as follows:

·     32 cores: 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

·     48 cores: 2 × Kunpeng 920 24-core, 2.6 GHz CPUs

·     128 cores: 2 × Phytium S2500 64-core, 2.1 GHz CPUs

128 GB or more

System drive: 1920 GB (after RAID setup)

Non-bonding mode: 1 × 10Gbps network port.

Bonding mode: 2 × 10 Gbps network ports, forming a Linux bonding interface.

Remarks:

To deploy general_PLAT_kernel-region and general_PLAT_network, add hardware resources according to Table 126.

For E6301 and later versions, the ETCD partition can share a physical drive with other partitions.

 

Super Controller deployment in single-node mode

Table 121 Super Controller deployment in single-node mode

Node name

Node quantity

CPU

Memory

Drive

NIC

Super controller node

1

Optional CPU settings are as follows:

·     32 cores, 2 × Hygon G5 5380 16-core, 2.5 GHz CPUs

·     48 cores, 2 × Kunpeng 920 24-core, 2.6 GHz CPUs

·     128 cores, 2 × Phytium S2500 64-core, 2.1 GHz CPUs

128 GB or more

System drive: 1920 GB (after RAID setup)

Non-bonding mode: 1 × 10Gbps network port.

Bonding mode: 2 × 10Gbps ports, forming a Linux bonding interface

Remarks:

·     To deploy general_PLAT_kernel-region and general_PLAT_network, add hardware resources according to Table 126.

·     For E6301 and later versions, the ETCD partition can share a physical drive with other partitions.

 

Hardware requirements for Super Controller deployment in a converged manner

 

NOTE:

Super Controller deployment is supported in AD-DC 6.2 and later.

 

Super Controller can be deployed with the controller/analyzer in a converged manner in single-node mode or three-node cluster mode. In this scenario, add additional hardware resources for Super Controller.

Additional hardware resources required for Super Controller deployment in cluster mode

Table 122 Additional hardware resources required for Super Controller deployment in cluster mode

Node name

Node quantity

CPU

Memory

Drive

Super controller node

3

·     X86-64 Hygon server, 10 more cores

·     ARM Kunpeng server, 20 more cores

·     ARM Phytium server, 40 more cores

64 GB or above

100 GB or above system drive capacity after RAID setup

 

Additional hardware resources required for Super Controller deployment in single-node mode

Table 123 Additional hardware resources required for Super Controller deployment in single-node mode

Node name

Node quantity

CPU

Memory

Drive

Super controller node

1

·     X86-64 Hygon server, 10 more cores

·     ARM Kunpeng server, 20 more cores

·     ARM Phytium server, 40 more cores

64 GB or above

100 GB or above system drive capacity after RAID setup

 

Hardware requirements for Super Analyzer deployment

 

NOTE:

Super Analyzer deployment is supported in AD-DC 6.3 and later.

 

Table 124 Hardware requirements (x86-64 Hygon servers)

Node configuration

Maximum number of analyzers

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 48 physical cores in total, 2.2GHz. Recommended: 2 × Hygon G5 7360 24-core, 2.2 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps network port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

10

Analyzer node

3

·     CPU: 48 physical cores in total, 2.2GHz. Recommended: 2 × Hygon G5 7360 24-core, 2.2 GHz CPUs

·     Memory: 192 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps network port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

50

 

Table 125 Hardware requirements (ARM Kunpeng servers)

Node configuration

Maximum number of analyzers

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 64 physical cores in total, 2.6GHz. Recommended: 2 × Kunpeng 920 5232 32-core, 2.6 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps network port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

10

Analyzer node

3

·     CPU: 64 physical cores in total, 2.6GHz. Recommended: 2 × Kunpeng 920 5232 32-core, 2.6 GHz CPUs

·     Memory: 192 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup

·     ETCD drive: 50 GB (after RAID setup)

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps network port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

50

 

 

NOTE:

·     The hardware configuration required by the Super Analyzer depends on the type of data analyzed. The Super Analyzer supports NetStream data and full link data.

·     The Super Analyzer supports standalone deployment and converged deployment with the analyzer.

 

Hardware requirements for optional application packages

To deploy optional application packages, reserve additional hardware resources as required in Table 126. A simple sizing sketch follows the table.

Table 126 Hardware requirements for optional application packages

Component

Feature description

CPU (cores)

(X86-64 Hygon servers/ARM Kunpeng servers/ARM Phytium servers)

Memory (GB)

general_PLAT_network

Provides the basic management functions (including network resources, network performance, network topology, and iCC)

5

16

general_PLAT_kernel-region

Provides the hierarchical management function

1

6

Syslog

Provides the syslog feature.

3

8

general_PLAT_netconf

Provides the NETCONF invalidity check and NETCONF channel features.

6

10

webdm

Provides device panels.

4

4

SeerEngine_DC_DTN_VIRTUALIZATION_HOST-version.zip

Virtualized DTN host, which is deployed with the controller/DTN component in a converged manner. A virtualized DTN host can manage up to 30 simulated devices.

8

In the current software version, only x86-64 Hygon servers are supported.

64

Virtualized DTN host, which is deployed with the controller/DTN component in a converged manner. A virtualized DTN host can manage up to 60 simulated devices.

16

In the current software version, only x86-64 Hygon servers are supported.

128
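
To budget the extra resources for optional application packages, you can simply sum the per-component figures in Table 126. The sketch below copies the CPU core and memory values from the table (the virtualized DTN host entries are omitted); it is an illustration, not a product sizing tool.

```python
# Illustrative sketch only: sum the additional CPU cores and memory from
# Table 126 for the optional components to be deployed. Values are copied from
# the table; the virtualized DTN host entries are omitted.
OPTIONAL_COMPONENTS = {
    "general_PLAT_network": (5, 16),        # (CPU cores, memory in GB)
    "general_PLAT_kernel-region": (1, 6),
    "Syslog": (3, 8),
    "general_PLAT_netconf": (6, 10),
    "webdm": (4, 4),
}

def extra_resources(selected):
    """Total additional (CPU cores, GB of memory) for the selected components."""
    cores = sum(OPTIONAL_COMPONENTS[name][0] for name in selected)
    memory = sum(OPTIONAL_COMPONENTS[name][1] for name in selected)
    return cores, memory

if __name__ == "__main__":
    print(extra_resources(["general_PLAT_network", "webdm"]))  # -> (9, 20)
```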

 

Hardware requirements for deployment on VMs

AD-DC supports deployment on VMs. Supported hypervisors:

·     VMware ESXi of version 6.7.0.

·     H3C CAS E0706 or higher.

For deployment on a VM, if hyperthreading is enabled on the host where the VM resides, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If hyperthreading is not enabled on the host, the number of required vCPUs equals the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server. As a best practice, enable hyperthreading. The requirements in this section assume that hyperthreading is enabled, as illustrated in the sketch below.
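
A minimal sketch of the vCPU rule above, using the Intel/AMD controller figures from Table 127 as the example input.

```python
# Minimal sketch of the vCPU rule above: with hyperthreading enabled on the
# host, a VM needs twice the physical-core count required on a bare-metal
# server; without hyperthreading, it needs the same number.
def required_vcpus(physical_cores_required: int, hyperthreading_enabled: bool) -> int:
    return physical_cores_required * 2 if hyperthreading_enabled else physical_cores_required

if __name__ == "__main__":
    # Example: a controller sized at 16 physical Intel/AMD cores on bare metal.
    print(required_vcpus(16, hyperthreading_enabled=True))   # -> 32 vCPUs
    print(required_vcpus(16, hyperthreading_enabled=False))  # -> 16 vCPUs
```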

 

IMPORTANT:

·     The configuration in this section is applicable to AD-DC 6.2 and later.

·     Deployment on VM is supported only in non-cloud interconnection scenarios.

·     Allocate CPU, memory, and drive resources in sizes as recommended to VMs and make sure sufficient physical resources are reserved for the allocation. To ensure stability of the system, do not overcommit hardware resources such as CPU, memory, and drive.

·     To deploy the controller on a VMware VM, you must enable promiscuous mode and forged transmits on the host where the VM resides.

·     In E63xx and later versions, the DTN physical host can be deployed on a VMware VM. For more information, see H3C SeerEngine-DC Simulation Network Configuration Guide.

·     The vBGP component cannot be deployed on VMs.

·     As a best practice to ensure high availability, deploy the three controller nodes to VMs on different hosts.

·     As a best practice, deploy the controller on physical servers if the network contains more than 30 leaf nodes.

 

Standalone deployment of controller

Table 127 Deployment of controller in cluster mode (Unified Platform + controller)

Node name

Node quantity

Minimum single-node requirements

Maximum resources that can be managed

Controller node

3

·     vCPUs:

¡     Intel/AMD CPU: 32 cores if hyperthreading is enabled, and 16 cores if hyperthreading is not enabled. 2.0GHz

¡     Hygon CPU: 48 cores if hyperthreading is enabled, and 24 cores if hyperthreading is not enabled. 2.0GHz

¡     Kunpeng CPU: 48 cores. 2.0GHz

¡     Phytium CPU: 96 cores. 2.0GHz

·     Memory: 128 GB. To deploy DTN and the controller in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

·     Drive:

¡     System drive: 1.92 TB, IOPS: 5000 or higher

¡     ETCD drive: 50 GB, IOPS: 5000 or higher

·     Network ports: 1 × 10Gbps network port

¡     To deploy a 3+3 disaster recovery cluster, one more network port is required.

¡     To deploy DTN and the controller in a converged manner, one more network port is required.

Device quantity: 36

Server quantity: 600

Standalone deployment of the DTN component

1

·     vCPUs:

¡     Intel/AMD CPU: 32 cores if hyperthreading is enabled, and 16 cores if hyperthreading is not enabled. 2.0GHz

¡     Hygon CPU: 48 cores if hyperthreading is enabled, and 24 cores if hyperthreading is not enabled. 2.0GHz

¡     Kunpeng CPU: 48 cores. 2.0GHz

¡     Phytium CPU: 96 cores. 2.0GHz

·     Memory: 128 GB.

·     Drive:

¡     System drive: 1.92 TB, IOPS: 5000 or higher

¡     ETCD drive: 50 GB, IOPS: 5000 or higher

·     Network ports: 2 × 10Gbps network ports

The configuration is applicable only to scenarios where the DTN component is deployed on a worker node.

DTN host

1

·     vCPUs:

¡     Intel/AMD CPU: 32 cores if hyperthreading is enabled, and 16 cores if hyperthreading is not enabled. 2.0GHz

¡     Hygon CPU: 48 cores if hyperthreading is enabled, and 24 cores if hyperthreading is not enabled. 2.0GHz

¡     Kunpeng CPU: 48 cores. 2.0GHz

¡     Phytium CPU: 96 cores. 2.0GHz

·     Memory: 128 GB.

·     System drive: 600 GB, IOPS: 5000 or higher

·     Network ports: 3 × 10Gbps network ports

Applicable to E63xx and later versions

Remarks:

To deploy optional application packages, add hardware resources according to Table 126.

 

Table 128 Deployment of controller in single-node mode (Unified Platform + controller)

Node name

Node quantity

Minimum single-node requirements

Maximum resources that can be managed

Controller node

1

·     vCPUs:

¡     Intel/AMD CPU: 32 cores if hyperthreading is enabled, and 16 cores if hyperthreading is not enabled. 2.0GHz

¡     Hygon CPU: 48 cores if hyperthreading is enabled, and 24 cores if hyperthreading is not enabled. 2.0GHz

¡     Kunpeng CPU: 48 cores. 2.0GHz

¡     Phytium CPU: 96 cores. 2.0GHz

·     Memory: 128 GB.

·     Drive:

¡     System drive: 1.92 TB, IOPS: 5000 or higher

¡     ETCD drive: 50 GB, IOPS: 5000 or higher

·     Network ports: 1 × 10Gbps network port. To deploy DTN, one more network port is required.

Device quantity: 36

Server quantity: 600

DTN host

1

·     vCPUs:

¡     Intel/AMD CPU: 32 cores if hyperthreading is enabled, and 16 cores if hyperthreading is not enabled. 2.0GHz

¡     Hygon CPU: 48 cores if hyperthreading is enabled, and 24 cores if hyperthreading is not enabled. 2.0GHz

¡     Kunpeng CPU: 48 cores. 2.0GHz

¡     Phytium CPU: 96 cores. 2.0GHz

·     Memory: 128 GB.

·     System drive: 600 GB, IOPS: 5000 or higher

·     Network ports: 3 × 10Gbps network ports

Applicable to E63xx and later versions

Remarks:

·     To deploy optional application packages, add hardware resources according to Table 126.

·     To use the simulation feature, another 100 GB or more memory resources are required on each server deployed with DTN.

 

IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     For versions earlier than E6205, you cannot scale up the controller from single-node mode to cluster mode. For E6205 and later versions, you can scale up the controller from single-node mode to cluster mode.

·     In single-node mode, hybrid overlay, security groups, QoS, and interconnection with cloud platforms are not supported.

 

Hardware requirements for standalone deployment of analyzer

 

NOTE:

The configuration in this section is applicable to AD-DC 6.2 and later.

 

Table 129 Standalone deployment of analyzer in cluster mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 40 vCPU cores, 2.0 GHz

·     Memory: 192 GB.

·     System drive: 1.92 TB

·     Data drive: 8 TB with a minimum random read/write rate of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

¡     Inter-cluster 10GE bandwidth

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: 48 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 12 TB with a minimum random read/write rate of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

¡     Inter-cluster 10GE bandwidth

100

2000 VMs, 4000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: 64 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 12 TB with a minimum random read/write rate of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

¡     Inter-cluster 10GE bandwidth

200

5000 VMs, 10000 TCP flows/sec.

2 TCP flows for each VM per second.

 

Table 130 Standalone deployment of analyzer in single-node mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 40 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 8 TB with a minimum random read/write rate of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

 

IMPORTANT:

·     You can calculate the overall TCP flow size based on the total number of VMs in the data center, and calculate the required hardware configuration on the basis of 2 TCP flows per second for each VM.

·     Allocate CPU, memory, and drive resources in sizes as recommended and make sure sufficient physical resources are available for the allocation. Do not overcommit hardware resources.

·     Only H3C CAS VMs are supported. CAS VMs require local storage and the drive capacity after RAID setup must meet the requirement. Use 3 or more drives of the same model to set up local RAID.

·     Install the ETCD drive on a different physical drive than any other drives. Make sure ETCD has exclusive use of the drive where it is installed.

·     DC collectors cannot be deployed on VMs.

·     If a single system drive does not meet the requirements, you can mount the system drive partitions to different drives.

 

Requirements for test and demo deployment

The hardware configuration in this section is intended only for test and demonstration purposes and cannot be used in live-network (production) scenarios.

 

IMPORTANT:

·     Performance testing is not supported with this resource configuration.

·     The configuration in this section is applicable to AD-DC 6.2 and later.

 

Standalone deployment of controller

Table 131 Single-node deployment of controller (x86-64: Intel64/AMD64)  (Unified Platform + controller)

Node configuration

Maximum resources that can be managed

Remarks

Node name

Node quantity

CPU/memory requirements

Drive/network port requirements

Controller node

1

CPU: 12 cores, 2.0 GHz

Memory: 96 GB.

Drive:

·     System drive: 1.0 TB

·     ETCD drive: 50 GB

Network ports: Non-bonding mode, 1 × 10Gbps network port.

Device quantity: 10

--

 

 

NOTE:

·     Only the Unified Platform GFS, portal, kernel, kernel-base, Dashboard, and widget components are installed. To install other components, additional hardware resources are required.

·     To use the simulation feature, add hardware resources as required.

 

Hardware requirements for standalone deployment of analyzer

Table 132 Single-host deployment of analyzer (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 16 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     Drive: 4 TB after RAID setup

·     Network ports: 2 × 10Gbps network ports

10

500 VMs, 1000 TCP flows/sec.

2 TCP flows for each VM per second.

 

 

NOTE:

·     The hardware configuration in this section is intended only for test and demonstration purposes and cannot be used in live-network (production) scenarios or for performance testing.

·     For recommended drive partitioning, see Table 133.

 

Table 133 Recommended drive partitioning

Mount point

Partition size

/sa_data

200GB

/sa_data/mpp_data

1100GB

/sa_data/kafka_data

700GB

/var/lib/docker

50GB

/var/lib/docker

400GB

/var/lib/ssdata

350GB

 

 


Hardware requirements for AD-WAN

Hardware requirements for AD-WAN carrier network deployment

The AD-WAN carrier network supports single-node deployment and cluster deployment, and can be deployed on physical servers or VMs.

You can deploy the controller and the analyzer separately or in a converged manner.

When the controller and analyzer are deployed in a converged manner, you must deploy a DTN physical host if device simulation is used.

 

 

NOTE:

·     For Unified Platform versions earlier than E0706 (including E06xx), the etcd partition requires a separate physical drive.

·     For Unified Platform E0706 and later versions, the etcd partition can share a physical drive with other partitions. As a best practice, use a separate physical drive for the etcd partition.

 

Deployment on physical servers

Standalone deployment of the AD-WAN carrier controller

Table 134 Hardware requirements for deployment of the AD-WAN carrier controller in cluster mode (Unified Platform + controller)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller

200

Intel Xeon V3 or higher, 12 or more cores

2.0 GHz or above

96 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 2 × 1Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 1 Gbps network ports, each two forming a Linux bonding interface.

Controller

2000

Intel Xeon V3 or higher, 16 or more cores

2.0 GHz or above

144 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

 

Table 135 Hardware requirements for deployment of the AD-WAN carrier controller in single-node mode (Unified Platform + controller)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller

200

Intel Xeon V3 or higher, 14 or more cores

2.0 GHz or above

102 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 2 × 1Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 1 Gbps network ports, each two forming a Linux bonding interface.

Controller

2000

Intel Xeon V3 or higher, 18 or more cores

2.0 GHz or above

150 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

 

Converged deployment of the AD-WAN carrier controller

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller.

The following converged deployment modes are available:

·     Converged deployment of the controller and analyzer in single-node mode. For the hardware requirements, see Table 136.

·     Converged deployment of the controller and analyzer in a three-node cluster mode. In this mode, deploy the controller and analyzer on three master nodes. For hardware requirements, see Table 137.

·     Converged deployment of the controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. Table 139 shows the hardware requirements for the carrier controller in this mode. Table 140 shows the hardware requirements for the analyzer in 3+1 cluster mode. Table 141 shows the hardware requirements for the analyzer in 3+3 cluster mode.

 

 

NOTE:

In 3+N cluster deployment mode, the analyzer is deployed on worker nodes. In this scenario, the hardware requirements vary by analyzer version. This document describes only the hardware requirements for SeerAnalyzer E62xx and later. For hardware requirements of SeerAnalyzer history versions, see "Hardware requirements for SeerAnalyzer history versions".

 

Table 136 Hardware requirements for converged deployment of the carrier controller and analyzer in single-node mode (Unified Platform + controller + analyzer)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller + analyzer

200

Intel Xeon V3 or higher, 30 or more cores

2.0 GHz or above

208 GB or more

Remarks:

To use the simulation feature and deploy DTN and the analyzer in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 2 TB or above. Use 2 or more drives of the same model

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Data drive: The capacity requirement refers to the capacity after RAID setup.

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Data drive: The capacity requirement refers to the capacity after RAID setup.

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

DTN physical server

A single server can manage up to 30 simulated devices (VSR)

Number of required servers = number of simulated devices/30

x86-64 (Intel64/AMD64) 16-core 2.0 GHz CPUs that support VT-x/VT-d

128 GB or more

System drive: 600 GB or above

3 × 10Gbps network ports

DTN physical server

A single server can manage up to 60 simulated devices (VSR)

Number of required servers = number of simulated devices/60

x86-64 (Intel64/AMD64) 20-core 2.2 GHz CPUs that support VT-x/VT-d

256 GB or more

Simulation analyzer

200

16 physical cores, 2.0 GHz

160GB or above

System drive: 1.92 TB (after RAID setup)

Data drive: 1 TB or above after RAID setup

ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

 

IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     If the controller and analyzer are deployed in a converged manner on a single node, you cannot scale up the system to a cluster.

·     The simulation analyzer requires an exclusive Unified Platform. For more information, see H3C SeerAnalyzer-WAN Simulation Network Configuration Guide.

 

Table 137 Converged deployment of the carrier controller and analyzer in three-node cluster mode (x86-64: Intel64/AMD64) (Unified Platform + controller + analyzer)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller + analyzer

200

Intel Xeon V3 or higher, 28 or more cores

2.0 GHz or above

176 GB or more

Remarks:

To use the simulation feature and deploy DTN and the analyzer in a converged manner, another 100 GB or more memory resources are required on each server deployed with DTN.

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Data drive: The capacity requirement refers to the capacity after RAID setup.

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Data drive: The capacity requirement refers to the capacity after RAID setup.

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

Controller + analyzer

2000

Intel Xeon V3 or higher, 32 or more cores

2.0 GHz or above

320 GB or more

Remarks:

To use the simulation feature and deploy DTN and the analyzer in a converged manner, an additional 100 GB or more of memory is required on each server deployed with DTN.

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 4 TB or above. Use 3 or more drives of the same model

DTN physical server

A single server can manage up to 30 simulated devices (VSR)

Number of required servers = number of simulated devices/30

x86-64 (Intel64/AMD64) 16-core 2.0 GHz CPUs that support VT-x/VT-d

128 GB or more

System drive: 600 GB (after RAID setup)

3 × 10Gbps network ports

DTN physical server

A single server can manage up to 60 simulated devices (VSR)

Number of required servers = number of simulated devices/60

x86-64 (Intel64/AMD64) 20-core 2.2 GHz CPUs that support VT-x/VT-d

256 GB or more

Simulation analyzer

200

16 physical cores, 2.0 GHz

160 GB or above

System drive: 1.92 TB (after RAID setup)

Data drive: 1 TB or above after RAID setup

ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

 

Table 138 Hardware requirements for converged deployment of the carrier controller and analyzer in three-node cluster mode (x86-64 Hygon servers) (Unified Platform + controller + analyzer)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller + analyzer

200

2 × Hygon C86 7265 CPUs, total cores ≥ 24

2.0 GHz or above

176 GB or more

Remarks:

To use the simulation feature and deploy DTN and the analyzer in a converged manner, an additional 100 GB or more of memory is required on each server deployed with DTN.

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Data drive: The capacity requirement refers to the capacity after RAID setup.

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Data drive: The capacity requirement refers to the capacity after RAID setup.

 

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

DTN physical server

A single server can manage up to 30 simulated devices (VSR)

Number of required servers = number of simulated devices/30

x86-64 (Intel64/AMD64) 16-core 2.0 GHz CPUs that support VT-x/VT-d

128 GB or more

System drive: 600 GB (after RAID setup)

3 × 10Gbps network ports

DTN physical server

A single server can manage up to 60 simulated devices (VSR)

Number of required servers = number of simulated devices/60

x86-64 (Intel64/AMD64) 20-core 2.2 GHz CPUs that support VT-x/VT-d

256 GB or more

Simulation analyzer

200

16 physical cores, 2.0 GHz

160 GB or above

System drive: 1.92 TB (after RAID setup)

Data drive: 1 TB or above after RAID setup

ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

 

Table 139 Hardware requirements for a master node in 3+N cluster deployment mode (Unified Platform including analyzer + controller)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller

200

Intel Xeon V3 or higher, 20 or more cores

2.0 GHz or above

126 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

Controller

2000

Intel Xeon V3 or higher, 24 or more cores

2.0 GHz or above

174 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

 

Table 140 Hardware requirements for a worker node in 3+1 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 160 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

 

Table 141 Hardware requirements for a worker node in 3+3 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 16 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

1000

Analyzer node

3

·     CPU: 18 cores (total physical cores), 2.0 GHz

·     Memory: 140 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

2000

 

Hardware requirements for network slicing deployment

 

NOTE:

The network slicing component is optional and can be deployed only on master nodes.

 

Table 142 Additional hardware resources required by the network slicing component

Node configuration

Maximum number of devices

Node name

Node quantity

Minimum single-node requirements

Network slicing node

1

·     CPU: Intel Xeon V3 or later, 1 core, 2.0GHz

·     Memory: 2.5 GB

2000

 

Hardware requirements for quorum node deployment

Table 143 Hardware requirements for quorum node deployment

Node name

Node quantity

Minimum single-node requirements

Quorum node

1

CPU: 2 cores, 2.0 GHz

Memory: 16 GB.

Drives: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup and a minimum IOPS of 5000.

¡     ETCD drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd

·     Drive configuration option 2:

¡     System drive: 2 × 600 GB HDDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup, a minimum RPM of 7.2K, and a minimum IOPS of 5000.

¡     ETCD drive: 2 × 600 GB HDDs in RAID 1, with a minimum total capacity of 50 GB after RAID setup and a minimum RPM of 7.2K. Installation path: /var/lib/etcd

¡     RAID controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present

Network port: 1 × 10Gbps network port

 

Hardware requirements for deployment on VMs

AD-WAN supports deployment on VMs. Supported hypervisors:

·     VMware ESXi 6.7.0.

·     H3C CAS E0706 or higher.

For deployment on a VM, if the host where the VM resides has hyperthreading enabled, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If the host does not have hyperthreading enabled, the number of required vCPUs is the same as the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server. As a best practice, enable hyperthreading. The requirements in this section assume that hyperthreading is enabled.
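
As a minimal illustration of the vCPU rule above (assumption: the physical-core figure is taken from the corresponding physical-server table):

```python
def required_vcpus(physical_cores_required: int, host_hyperthreading_enabled: bool) -> int:
    """vCPUs to allocate to the VM per the rule described above."""
    if host_hyperthreading_enabled:
        return physical_cores_required * 2
    return physical_cores_required

# Example: a role that requires 20 physical cores needs 40 vCPUs on a
# hyperthreaded host, or 20 vCPUs otherwise. Memory and drive sizes are
# unchanged from the physical-server requirements.
print(required_vcpus(20, True))   # 40
print(required_vcpus(20, False))  # 20
```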

 

IMPORTANT:

·     As a best practice, use physical servers for deployment if the network contains more than 200 devices.

·     Allocate CPU, memory, and drive resources to VMs in the recommended sizes and make sure sufficient physical resources are reserved for the allocation. To ensure system stability, do not overcommit hardware resources.

·     To deploy the controller on a VMware VM, you must enable promiscuous mode and forged transmits on the host where the VM resides.

·     DTN cannot be deployed on VMs. It must be deployed on physical servers.

 

Standalone deployment of the AD-WAN carrier controller

Table 144 Deployment of the AD-WAN carrier controller in a VM cluster (Unified Platform + controller)

Components

Device quantity

vCPUs

Memory

Drive

Network ports:

Controller

200

≥ 24 cores

2.0 GHz or above

96 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

2 × 1Gbps network ports

 

Table 145 Deployment of the AD-WAN carrier controller on a VM (Unified Platform + controller)

Components

Device quantity

vCPUs

Memory

Drive

Network ports:

Controller

200

≥ 28 cores

2.0 GHz or above

102 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

2 × 1Gbps network ports

 

Converged deployment of the AD-WAN carrier controller

The converged deployment of the controller and analyzer in 3+N cluster mode is available. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. Table 146 shows the hardware requirements for the controller in this mode. Table 147 shows the hardware requirements for the analyzer in 3+1 cluster mode. Table 148 shows the hardware requirements for the analyzer in 3+3 cluster mode.

Table 146 Hardware requirements for a master node in 3+N cluster deployment mode (Unified Platform including analyzer + controller)

Components

Device quantity

vCPUs

Memory

Drive

Network ports:

Controller

200

≥ 40 cores

2.0 GHz or above

126 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

2 × 10Gbps network ports

 

Table 147 Hardware requirements for a worker node in 3+1 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 48 vCPU cores, 2.0 GHz

·     Memory: 160 GB.

·     System drive: 1.92 TB

·     Data drive: 2 TB with a minimum random read/write speed of 100 MB/s

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

 

Table 148 Hardware requirements for a worker node in 3+3 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 32 vCPU cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB

·     Data drive: 2 TB with a minimum random read/write speed of 100 MB/s

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

1000

 

Hardware requirements for small-scale testing and demo deployment

The hardware configurations in this section are intended only for testing and demonstration, and cannot be used in live-network scenarios.

 

IMPORTANT:

·     Do not use the following configurations in a production environment.

·     For hardware requirements of converged deployment scenarios, see the physical server or VM deployment sections.

·     Performance testing is not supported with this resource configuration.

 

Table 149 Hardware requirements for carrier controller deployment for small-scale testing (including Unified Platform)

Components

Device quantity

vCPUs

Memory

Drive

Network ports

Controller

10

≥ 16 cores

2.0 GHz or above

64 GB or more

System drive: 1.0 TB or above

ETCD drive: 50 GB or above

·     Non-bonding mode: 1 × 1Gbps Ethernet port.

·     Bonding mode: 2 network ports, forming a Linux bonding interface.

 

Hardware requirements for SD-WAN branch access network deployment

Single-node deployment and cluster deployment are supported. The SD-WAN branch controller can be deployed on physical servers or VMs.

 

 

NOTE:

·     For Unified Platform versions earlier than E0706 (including E06xx), the etcd partition requires a separate physical drive.

·     For Unified Platform E0706 and later versions, the etcd partition can share a physical drive with other partitions. As a best practice, use a separate physical drive for the etcd partition.

 

Deployment on physical servers

Standalone deployment of the branch controller

Table 150 Hardware requirements for branch controller deployment in cluster mode (including Unified Platform)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller

200

Intel Xeon V3 or higher, 12 or more cores

2.0 GHz or above

128 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (mode 2 or mode 4 is recommended): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.

Controller

2000

Intel Xeon V3 or higher, 16 or more cores

2.0 GHz or above

128 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

 

Table 151 Hardware requirements for branch controller deployment in single-node mode (including Unified Platform)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller

200

Intel Xeon V3 or higher, 12 or more cores

2.0 GHz or above

128 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 1 × 1Gbps network port.

·     Bonding mode (mode 2 or mode 4 is recommended): 2 × 1 Gbps or above network ports, forming a Linux bonding interface.

Controller

2000

Intel Xeon V3 or higher, 16 or more cores

2.0 GHz or above

128 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

 

Converged deployment of the branch controller

This section describes the hardware requirements for converged deployment of SeerAnalyzer E62xx or later and the controller.

The following converged deployment modes are available:

·     Converged deployment of the branch controller and analyzer in single-node mode. For the hardware requirements, see Table 152. To deploy the security controller, add additional hardware resources as required in Table 157.

·     Converged deployment of the branch controller and analyzer in three-node cluster mode. In this mode, deploy the controller and analyzer on three master nodes. For hardware requirements, see Table 153. To deploy the security controller, add additional hardware resources for the 3 master nodes as required in Table 158.

·     Converged deployment of the branch controller and analyzer in 3+N cluster mode. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. Table 154 shows the hardware requirements for the branch controller in this mode. Table 155 shows the hardware requirements for the analyzer in 3+1 cluster mode. Table 156 shows the hardware requirements for the analyzer in 3+3 cluster mode. To deploy the security controller, add additional hardware resources for the 3 master nodes as required in Table 158. The security controller must be deployed on master nodes.

 

 

NOTE:

In 3+N cluster deployment mode, the analyzer is deployed on worker nodes. In this scenario, the hardware requirements vary by analyzer version. This document describes only the hardware requirements for SeerAnalyzer E62xx and later. For hardware requirements of SeerAnalyzer history versions, see "Hardware requirements for SeerAnalyzer history versions".

 

Table 152 Hardware requirements for converged deployment of the branch controller and analyzer in single-node mode (Unified Platform + controller + analyzer)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller + analyzer

200

Intel Xeon V3 or higher, 28 or more cores

2.0 GHz or above

192 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

 

IMPORTANT:

In single-node deployment mode, follow these restrictions and guidelines:

·     The single-node deployment mode cannot provide redundancy, and device failures will cause service interruption. As a best practice, configure a remote backup server.

·     If the controller and analyzer are deployed in a converged manner on a single node, you cannot scale up the system to a cluster.

 

Table 153 Hardware requirements for converged deployment of the branch controller and analyzer in three-node cluster mode (Unified Platform + controller + analyzer)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller + analyzer

200

Intel Xeon V3 or higher, 28 or more cores

2.0 GHz or above

160 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

Controller + analyzer

2000

Intel Xeon V3 or higher, 36 or more cores

2.0 GHz or above

256 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

 

Table 154 Hardware requirements for a master node in 3+N cluster deployment mode (Unified Platform including analyzer + controller)

Components

Device quantity

CPU

Memory

Drive

General drive requirements

Network ports

Controller

200

Intel Xeon V3 or higher, 20 or more cores

2.0 GHz or above

158 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10. Storage controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present.

Drive configuration option 1:

·     System drive: The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: SSDs configured in RAID. (Installation path: /var/lib/etcd.)

Drive configuration option 2:

·     System drive: 7.2K RPM SATA/SAS HDDs configured in RAID. The capacity requirement refers to the capacity after RAID setup.

·     ETCD drive: 7.2K RPM or above SATA/SAS HDDs configured in RAID. (Installation path: /var/lib/etcd.)

·     Non-bonding mode: 2 × 10Gbps network ports.

·     Bonding mode (mode 2 or mode 4 is recommended): 4 × 10 Gbps or above network ports, each two forming a Linux bonding interface.

Controller

2000

Intel Xeon V3 or higher, 24 or more cores

2.0 GHz or above

158 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

 

Table 155 Hardware requirements for a worker node in 3+1 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 160 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

 

Table 156 Hardware requirements for a worker node in 3+3 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 16 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

1000

Analyzer node

3

·     CPU: 18 cores (total physical cores), 2.0 GHz

·     Memory: 140 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

2000

 

In the branch scenario, the security controller must be deployed in a converged manner with the branch controller. It can provide security automation for the WAN branch network. To deploy the security controller, add additional hardware resources as described below.

Table 157 Additional hardware resources required for security controller deployment (converged deployment in single-node mode)

Node configuration

Maximum number of devices

Maximum number of policies

Node name

Node quantity

Minimum single-node requirements

Controller node

1

·     CPU: Intel Xeon V3 or later, 4 cores, 2.0 GHz

·     Memory: 32 GB

200

60000

 

Table 158 Additional hardware resources required for security controller deployment (converged deployment in cluster mode)

Node configuration

Maximum number of devices

Maximum number of policies

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     CPU: Intel Xeon V3 or later, 8 cores, 2.0 GHz

·     Memory: 32 GB

2000

240000
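
To illustrate how the add-on in Table 158 stacks on the base requirements, the following sketch (illustrative arithmetic only, not an official sizing tool) combines the Table 153 row for 2000 devices with the per-master-node add-on from Table 158. Table 158 lists only CPU and memory add-ons, so the drive and network requirements stay as given in Table 153:

```python
# Base per-node requirements from Table 153 (three-node converged deployment,
# 2000 devices) and the security controller add-on from Table 158.
base = {"cpu_cores": 36, "memory_gb": 256}
security_addon = {"cpu_cores": 8, "memory_gb": 32}

per_master_node = {
    resource: base[resource] + security_addon[resource]
    for resource in base
}
print(per_master_node)  # {'cpu_cores': 44, 'memory_gb': 288}
```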

 

Hardware requirements for quorum node deployment

Table 159 Hardware requirements for quorum node deployment

Node name

Node quantity

Minimum single-node requirements

Quorum node

1

CPU: 2 cores, 2.0 GHz

Memory: 16 GB.

Drives: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup and a minimum IOPS of 5000.

¡     ETCD drive: 2 × 480 GB SSDs in RAID 1, with a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd

·     Drive configuration option 2:

¡     System drive: 2 × 600 GB HDDs in RAID 1, with a minimum total capacity of 256 GB after RAID setup, a minimum RPM of 7.2K, and a minimum IOPS of 5000.

¡     ETCD drive: 2 × 600 GB HDDs in RAID 1, with a minimum total capacity of 50 GB after RAID setup and a minimum RPM of 7.2K. Installation path: /var/lib/etcd

¡     RAID controller: 1 GB cache, powerfail safeguard module supported, supercapacitor present

Network port: 1 × 10Gbps network port

 

Hardware requirements for deployment on VMs

SD-WAN supports deployment on VMs. Supported hypervisors:

·     VMware ESXi 6.7.0.

·     H3C CAS E0706 or higher.

For deployment on a VM, if the host where the VM resides has hyperthreading enabled, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If the host does not have hyperthreading enabled, the number of required vCPUs is the same as the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server. As a best practice, enable hyperthreading. The requirements in this section assume that hyperthreading is enabled.

 

IMPORTANT:

·     As a best practice, use physical servers for deployment if the network contains more than 200 devices.

·     Allocate CPU, memory, and drive resources to VMs in the recommended sizes and make sure sufficient physical resources are reserved for the allocation. To ensure system stability, do not overcommit hardware resources.

·     To deploy the controller on a VMware VM, you must enable promiscuous mode and forged transmits on the host where the VM resides.

 

Standalone deployment of the branch controller

Table 160 Hardware requirements for branch controller deployment in a VM cluster (including Unified Platform)

Components

Device quantity

vCPUs

Memory

Drive

Network ports:

Controller

200

≥ 24 cores

2.0 GHz or above

128 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

1 × 1Gbps network port

 

Table 161 Hardware requirements for branch controller deployment on a VM (including Unified Platform)

Components

Device quantity

vCPUs

Memory

Drive

Network ports:

Controller

200

≥ 24 cores

2.0 GHz or above

128 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

1 × 1Gbps network port

 

Converged deployment of the branch controller

The converged deployment of the branch controller and analyzer in 3+N cluster mode is available. In this mode, deploy the controller on three master nodes and deploy the analyzer on N (1 or 3) worker nodes. Table 162 shows the hardware requirements for the branch controller in this mode. Table 163 shows the hardware requirements for the analyzer in 3+1 cluster mode. Table 164 shows the hardware requirements for the analyzer in 3+3 cluster mode. To deploy the security controller, add additional hardware resources for the 3 master nodes as required in Table 165. The security controller must be deployed on master nodes.

Table 162 Hardware requirements for a master node in 3+N cluster deployment mode (Unified Platform including analyzer + controller)

Components

Device quantity

vCPUs

Memory

Drive

Network ports

Controller

200

≥ 40 cores

2.0 GHz or above

158 GB or more

System drive: 1.92 TB or above

ETCD drive: 50 GB or above

2 × 10Gbps network ports

 

Table 163 Hardware requirements for a worker node in 3+1 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 48 vCPU cores, 2.0 GHz

·     Memory: 160 GB.

·     System drive: 1.92 TB

·     Data drive: 2 TB with a minimum random read/write speed of 100 MB/s

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

 

Table 164 Hardware requirements for a worker node in 3+3 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Maximum number of devices

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 32 vCPU cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB

·     Data drive: 2 TB with a minimum random read/write speed of 100 MB/s

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

1000

 

In the branch scenario, the security controller must be deployed in a converged manner with the branch controller. It can provide security automation for the WAN branch network. To deploy the security controller, add additional hardware resources for the master nodes as required in Table 165. The security controller must be deployed on master nodes.

Table 165 Additional hardware resources required for security controller deployment (converged deployment in cluster mode)

Node configuration

Maximum number of devices

Maximum number of policies

Node name

Node quantity

Minimum single-node requirements

Controller node

3

·     vCPU: 16 cores, 2.0 GHz.

·     Memory: 32 GB

2000

240000

 

 


Hardware requirements for SeerAnalyzer (NPA/TRA/LGA)

Single-node deployment and cluster deployment are supported. SeerAnalyzer (NPA/TRA/LGA) can be deployed on physical servers or VMs.

 

 

NOTE:

If the Unified Platform version is earlier than E0706 (including E06xx), the ETCD partition cannot share a physical drive with other partitions. For Unified Platform E0706 and later versions, the ETCD partition can share a physical drive with other partitions. As a best practice, use a separate physical drive for the ETCD partition.

 

Hardware requirements for SeerAnalyzer-NPA

Deployment on physical servers

Standalone deployment in single-node mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer-NPA)

Table 166 Standalone deployment in single-node mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 32 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

 

Standalone deployment in single-node mode (x86-64 Hygon) (Unified Platform + SeerAnalyzer-NPA)

Table 167 Standalone deployment in single-node mode (x86-64 Hygon) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 64 physical cores, 2.0GHz. Recommended: 2 × Hygon C86 7280 32-core, 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

 

Standalone deployment in single-node mode (ARM Kunpeng) (Unified Platform + SeerAnalyzer-NPA)

Table 168 Standalone deployment in single-node mode (ARM Kunpeng) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 64 physical cores, 2.0GHz. Recommended: 2 × Kunpeng 920 32-core, 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

 

Standalone deployment in cluster mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer-NPA)

Table 169 Standalone deployment in cluster mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

≥3

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

Analyzer node

≥3

·     CPU: 32 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

20 collectors, 160 links

Number of collectors ≤20, number of links ≤160

 

Standalone deployment in cluster mode (x86-64 Hygon) (Unified Platform + SeerAnalyzer-NPA)

Table 170 Standalone deployment in cluster mode (x86-64 Hygon) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

≥3

·     CPU: 48 physical cores, 2.2GHz. Recommended: 2 × Hygon C86 7265 24-core, 2.2 GHz CPUs

·     Memory: 128 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

Analyzer node

≥3

·     CPU: 64 physical cores, 2.2 GHz. Recommended: 2 × Hygon C86 7280 32-core, 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

20 collectors, 160 links

Number of collectors ≤20, number of links ≤160

 

Standalone deployment in cluster mode (ARM Kunpeng) (Unified Platform + SeerAnalyzer-NPA)

Table 171 Standalone deployment in cluster mode (ARM Kunpeng) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

≥3

·     CPU: 48 physical cores, 2.6GHz. Recommended: 2 × Kunpeng 920 24-core, 2.6 GHz CPUs

·     Memory: 128 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

Analyzer node

≥3

·     CPU: 64 physical cores, 2.0GHz. Recommended: 2 × Kunpeng 920 32-core, 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 8 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

20 collectors, 160 links

Number of collectors ≤20, number of links ≤160

 

 

NOTE:

The number of links is a collector-level concept that depends on the applicable service load of the collector. By default, data stored on NPA data drives is overwritten after three months.

 

Hardware requirements for collector deployment (x86-64: Intel64/AMD64) (collectors required in NPA scenarios)

Table 172 NPA collector deployment

Item

Flow analysis unified server hardware configuration

Applicable service load

NPA collector node (2Gb)

·     Flow analysis unified server-2Gb

·     CPU: 12 cores, 2 × 6-core 1.9GHz CPUs

·     Memory: 32 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 2 × 4 TB drives in RAID 1

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card

·     Number of links: 4 (not including the default)

·     Processing performance: 2Gbps

·     Concurrent connections: 300000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

NPA collector node (10Gb-enhanced edition)

·     Flow analysis unified server-10Gb-enhanced edition

·     CPU: 16 cores, 2 × 8-core 2.1GHz CPUs

·     Memory: 64 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 10 × 6 TB drives in RAID 5

·     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card + 1 × 2-port 560FLR 10Gbps network card

·     Number of links: 4 (not including the default)

·     Processing performance: 10Gbps

·     Concurrent connections: 500000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

NPA collector node (10Gb-base edition)

·     Flow analysis unified server-10Gb-base edition

·     CPU: 16 cores, 2 × 8-core 2.1GHz CPUs

·     Memory: 64 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 6 × 2 TB drives in RAID 5

·     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card + 1 × 2-port 560FLR 10Gbps network card

·     Number of links: 4 (not including the default)

·     Processing performance: 10Gbps

·     Concurrent connections: 500000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

NPA collector node (20Gb-enhanced edition)

·     Flow analysis unified server-20Gb-enhanced edition

·     CPU: 20 cores, 2 × 10-core 2.2GHz CPUs

·     Memory: 64 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 10 × 8 TB drives in RAID 5

·     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card + 1 × 2-port 560FLR 10Gbps network card

·     Number of links: 8 (not including the default)

·     Processing performance: 20Gbps

·     Concurrent connections: 1000000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

NPA collector node (20Gb-base edition)

·     Flow analysis unified server-20Gb-base edition

·     CPU: 20 cores, 2 × 10-core 2.2GHz CPUs

·     Memory: 64 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 10 × 4 TB drives in RAID 5

·     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card + 1 × 2-port 560FLR 10Gbps network card

·     Number of links: 8 (not including the default)

·     Processing performance: 20Gbps

·     Concurrent connections: 1000000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

NPA collector node (40Gb-enhanced edition)

·     Flow analysis unified server-40Gb-enhanced edition

·     CPU: 32 cores, 2 × 16-core 2.3GHz CPUs

·     Memory: 192 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 10 × 12 TB drives in RAID 5

·     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card + 2 × 2-port 560F 10Gbps network cards

·     Number of links: 16 (not including the default)

·     Processing performance: 40Gbps

·     Concurrent connections: 1500000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

NPA collector node (40Gb-base edition)

·     Flow analysis unified server-40Gb-base edition

·     CPU: 32 cores, 2 × 16-core 2.3GHz CPUs

·     Memory: 192 GB

·     System drive: 2 × 300GB drives in RAID 1

·     Data drive: 10 × 6 TB drives in RAID 5

·     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network card: 1 × 4-port GE 360T network card + 2 × 2-port 560F 10Gbps network cards

·     Number of links: 16 (not including the default)

·     Processing performance: 40Gbps

·     Concurrent connections: 1500000

·     Custom applications: 500

·     Virtual links of the same type that each link supports: 16 (including the default)

 

 

NOTE:

For more hardware configuration, see the configuration list of the flow analysis unified server.

 

Table 173 Flow analysis unified server products

BOM code

Product code

Description

0150A1QS

BD-ND5200-G3-12LFF-2Gb-B-L

H3C NaviData Flow Analysis Unified Server-2Gb

0150A1H1

BD-ND5200-G3-12LFF-10Gb-B-L

H3C NaviData Flow Analysis Unified Server-10Gb(Enhanced Edition)

0150A1H2

BD-ND5200-G3-12LFF-10Gb-B-H

H3C NaviData Flow Analysis Unified Server-10Gb(Base Edition)

0150A1H4

BD-ND5200-G3-12LFF-20Gb-B-L

H3C NaviData Flow Analysis Unified Server-20Gb(Enhanced Edition)

0150A1GW

BD-ND5200-G3-12LFF-20Gb-B-H

H3C NaviData Flow Analysis Unified Server-20Gb(Base Edition)

0150A1GY

BD-ND5200-G3-12LFF-40Gb-B-L

H3C NaviData Flow Analysis Unified Server-40Gb(Enhanced Edition)

0150A1H5

BD-ND5200-G3-12LFF-40Gb-B-H

H3C NaviData Flow Analysis Unified Server-40Gb(Base Edition)

 

Table 174 Flow analysis unified server configuration

BOM code

Product code

Description

0235A2J4

BD-ND5200-G3-12LFF-B-7

H3C NaviData 5200 G3 12LFF Unified Server (2*3204/2*16GB/2*300GB+2*4TB/1*LSI 9361 8i(with Power Fail Safeguard)/1*4-port GE 360T/2*550W/Rail Kit/Security Bezel)(BTO)

0235A3TG

BD-ND5200-G3-12LFF-B-1

H3C NaviData 5200 G3 12LFF,(2*4208/4*16GB/2*300GB+10*6TB/1*RAID-2000(with Power Fail Safeguard)/1*4-port GE 360T+1*2-port 560FLR SFP+ Adapter/2*550W/Rail Kit/Security Bezel)BigData Unified Server(BTO)

0235A3TB

BD-ND5200-G3-12LFF-B-2

H3C NaviData 5200 G3 12LFF,(2*4208/4*16GB/2*300GB+6*2TB/1*RAID-2000(with Power Fail Safeguard)/1*4-port GE 360T+1*2-port 560FLR SFP+ Adapter/2*550W/Rail Kit/Security Bezel)BigData Unified Server(BTO)

0235A3TA

BD-ND5200-G3-12LFF-B-3

H3C NaviData 5200 G3 12LFF,(2*4210/6*16GB/2*300GB+10*8TB/1*RAID-2000(with Power Fail Safeguard)/1*4-port GE 360T+1*2-port 560FLR SFP+ Adapter/2*550W/Rail Kit/Security Bezel)BigData Unified Server(BTO)

0235A3TE

BD-ND5200-G3-12LFF-B-4

H3C NaviData 5200 G3 12LFF,(2*4114/6*16GB/2*300GB+10*4TB/1*RAID-2000(with Power Fail Safeguard)/1*4-port GE 360T+1*2-port 560FLR SFP+ Adapter/2*550W/Rail Kit/Security Bezel)BigData Unified Server(BTO)

0235A3T7

BD-ND5200-G3-12LFF-B-5

H3C NaviData 5200 G3 12LFF,(2*5218/6*32GB/2*300GB+10*12TB/1*RAID-2000(with Power Fail Safeguard)/1*4-port GE 360T+2*2-port 560F SFP+ Adapter/2*800W/Rail Kit/Security Bezel)BigData Unified Server(BTO)

0235A3TC

BD-ND5200-G3-12LFF-B-6

H3C NaviData 5200 G3 12LFF,(2*5218/6*32GB/2*300GB+10*6TB/1*RAID-2000(with Power Fail Safeguard)/1*4-port GE 360T+2*2-port 560F SFP+ Adapter/2*800W/Rail Kit/Security Bezel)BigData Unified Server(BTO)

 

 

NOTE:

The current collector version does not support x86-64 Hygon servers or ARM Kunpeng servers.

 

Table 175 Network cards supported by the collector

Vendor

Chip

Model

Series

Compatible versions

Intel

JL82599

H3C UIS CNA 1322 FB2-RS3NXP2D-2-port 10Gbps optical port network card (SFP+)

CNA-10GE-2P-560F-B2

 

JL82599

H3C UIS CNA 1322 FB2-RS3NXP2DBY-2-port 10Gbps optical port network card (SFP+)

CNA-10GE-2P-560F-B2

 

I350

 

UN-NIC-GE-4P-360T-B2-1

 

X710

 

UN-NIC-X710DA2-F-B-10Gb-2P

 

X710

 

UN-NIC-X710DA4-F-B-10Gb-4P

 

X710

 

1-port 40GE optical port network card (Intel XL710QSR1) (including factory modules) (FIO)

 

X550

H3C UNIC CNA 560T B2-RS33NXT2A-2-port 10Gbps copper port network card-1*2

 

NFA V2.1 (E0252), D20220718 and later versions

X540

UN-NIC-X540-T2-T-10Gb-2P (copper port network card)

 

NFA V2.1 (E0252), D20220718 and later versions

Intel® 82599 10 Gigabit Ethernet Controller

 

UN-NIC-X520DA2-F-B-10Gb-2P

 

 

 

NOTE:

For other compatible network cards, contact technical support.

 

Table 176 RAID controllers supported by the collector

RAID controller

Model

RAID controller

·     UN-RAID-2000-M2

·     UN-HBA-1000-M2

·     HPE Smart Array P440ar

·     HPE Smart Array P840ar

·     LSI 9361 8i

·     RAID-P460-M2

 

Hardware requirements for deployment on VMs

Table 177 Standalone deployment in single-node mode (Unified Platform + SeerAnalyzer-NPA)

Minimum VM node requirements

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 64 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 8 TB with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

 

Table 178 Standalone deployment in cluster mode (Unified Platform + SeerAnalyzer-NPA)

Minimum VM node requirements

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

≥3

·     CPU: 48 vCPU cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB

·     Data drive: 8 TB with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

¡     Inter-node interconnection network card: 10 GE or above bandwidth

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

Analyzer node

≥3

·     CPU: 64 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 8 TB with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     Inter-cluster 10GE bandwidth

20 collectors, 160 links

Number of collectors ≤20, number of links ≤160

 

 

NOTE:

·     NPA can be deployed on VMs. The supported hypervisors and versions are the same as those for Unified Platform. For deployment on a VM, if the host where the VM resides has hyperthreading enabled, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If the host does not have hyperthreading enabled, the number of required vCPUs is the same as the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server.

·     Allocate CPU, memory, and drive resources in sizes as recommended and make sure sufficient physical resources are available for the allocation. Do not overcommit hardware resources.

·     If a single system drive does not meet the requirements, you can mount the system drive partitions to different drives.

·     Only H3C CAS VMs are supported. CAS VMs require local storage and the drive capacity after RAID setup must meet the requirement. Use 3 or more drives of the same model to set up local RAID.

 
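For reference, the following minimal sketch (in Python; the helper name is hypothetical and not part of any product tooling) illustrates the vCPU sizing rule stated in the note above: when the host has hyperthreading enabled, allocate twice as many vCPUs as the physical cores required for deployment on a physical server.

def required_vcpus(physical_cores, host_hyperthreading):
    """Apply the VM sizing rule from this guide: double the core count
    when the host has hyperthreading enabled; otherwise keep it unchanged."""
    return physical_cores * 2 if host_hyperthreading else physical_cores

# Example: an analyzer sized at 32 physical cores on a physical server
print(required_vcpus(32, host_hyperthreading=True))   # 64 vCPUs
print(required_vcpus(32, host_hyperthreading=False))  # 32 vCPUs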

Requirements for test and demo deployment

Table 179 Standalone deployment in single-node mode (x86-64: Intel64/AMD64) (Unified Platform + SeerAnalyzer-NPA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 16 cores (total physical cores), 2.0 GHz

·     Memory: 128 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 2 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

2 collectors, 16 links

Number of collectors ≤2, number of links ≤16

 

Hardware requirements for SeerAnalyzer-LGA

Deployment on physical servers

Table 180 Standalone deployment in single-node mode (Unified Platform + SeerAnalyzer-LGA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 32 cores, 2 × 16-core 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 6 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 26 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Number of log sets < 10

·     Data access/data processing/ingestion into ES < 5000 eps

·     Average log size < 500 bytes

·     Log retention period ≤ 30 days

·     Determine the total drive size based on the log retention period. The longer the retention period, the larger the required drive size (see the sizing sketch after this table).

·     Responses in seconds for queries on 1 billion data entries, with an average response time within 10 seconds.

·     Responses in seconds for queries on 100 million data entries, with an average response time within 10 seconds for complex analysis scenarios.

 
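To make the retention-period guidance in the table above concrete, the following back-of-the-envelope sketch estimates on-disk log volume from the ingest rate, average log size, and retention period. It is illustrative only: the function name and the overhead factor (covering indexing, replication, and reserve space) are assumptions, and the drive sizes listed in these tables remain the authoritative values.

def estimated_log_volume_tb(eps, avg_log_bytes, retention_days, overhead_factor=3.0):
    """Rough on-disk volume estimate in TB.
    overhead_factor is an assumed multiplier for indexing, replication,
    and headroom; tune it for the actual deployment."""
    raw_bytes = eps * avg_log_bytes * 86400 * retention_days
    return raw_bytes * overhead_factor / 1e12

# Single-node LGA load from Table 180: 5000 eps, 500-byte logs, 30-day retention
print(round(estimated_log_volume_tb(5000, 500, 30), 1))  # about 19.4 TB with the assumed factor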

Table 181 Standalone deployment in cluster mode (Unified Platform + SeerAnalyzer-LGA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 32 cores, 2 × 16-core 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 6 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 22 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Number of log sets < 30

·     Data access/data processing/ingestion into ES < 13000 eps

·     Average log size < 500 bytes

·     Log retention period ≤ 30 days

·     Determine the total drive size based on the log retention period. The longer the retention period, the larger the required drive size.

·     Responses in seconds for queries on 1 billion data entries, with an average response time within 10 seconds.

·     Responses in seconds for queries on 100 million data entries, with an average response time within 10 seconds for complex analysis scenarios.

 

Hardware requirements for deployment on VMs

Table 182 Standalone deployment in single-node mode (Unified Platform + SeerAnalyzer-LGA)

Minimum VM node requirements

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 64 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 26 TB, with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports: 1 × 10Gbps network port

·     Number of log sets < 10

·     Data access/data processing/ingestion into ES < 5000 eps

·     Average log size < 500 bytes

·     Log retention period ≤ 30 days

·     Determine the total drive size based on the log retention period. The longer the retention period, the larger the required drive size.

·     Responses in seconds for queries on 1 billion data entries, with an average response time within 10 seconds.

·     Responses in seconds for queries on 100 million data entries, with an average response time within 10 seconds for complex analysis scenarios.

 

Table 183 Standalone deployment in cluster mode (Unified Platform + SeerAnalyzer-LGA)

Minimum VM node requirements

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 64 vCPU cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB

·     Data drive: 22 TB, with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     1 × 10Gbps network port

¡     Inter-node interconnection network card: 10 GE or above bandwidth

·     Number of log sets < 30

·     Data access/data processing/ingestion into ES < 13000 eps

·     Average log size < 500 bytes

·     Log retention period ≤ 30 days

·     Determine the total drive size based on the log retention period. The longer the retention period, the larger the required drive size.

·     Responses in seconds for queries on 1 billion data entries, with an average response time within 10 seconds.

·     Responses in seconds for queries on 100 million data entries, with an average response time within 10 seconds for complex analysis scenarios.

 

 

NOTE:

·     LGA can be deployed on VMs. The supported hypervisors and versions are the same as those for Unified Platform. For deployment on a VM, if the host where the VM resides is enabled with hyperthreading, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If the host is not enabled with hyperthreading, the number of required vCPUs is the same as the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server.

·     Allocate CPU, memory, and drive resources in sizes as recommended and make sure sufficient physical resources are available for the allocation. Do not overcommit hardware resources.

·     Only H3C CAS VMs are supported. CAS VMs require local storage and the drive capacity after RAID setup must meet the requirement. Use 6 or more drives of the same model to set up local RAID.

·     Install the ETCD drive on a different physical drive than any other drives. Make sure ETCD has exclusive use of the drive where it is installed.

·     If a single system drive does not meet the requirements, you can mount the system drive partitions to different drives.

 

Hardware requirements for SeerAnalyzer-TRA

Deployment on physical servers

Table 184 Standalone deployment in single-node mode (Unified Platform + SeerAnalyzer-TRA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 20 cores, 2.0 GHz.

·     Memory: 128 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 10 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Online endpoints: 50000

·     Low bit-rate image: 10000 endpoints

·     Retention period: 1 year

 

Table 185 Standalone deployment in cluster mode (Unified Platform + SeerAnalyzer-TRA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 20 cores, 2.0 GHz.

·     Memory: 128 GB.

·     System drive: SSDs or 7.2K RPM HDDs, with a minimum total capacity of 1.92 TB after RAID setup and a minimum IOPS of 5000.

·     Data drive: 3 SSDs or 7.2K RPM HDDs of the same model, providing a minimum total capacity of 10 TB after RAID setup and a minimum IOPS of 5000.

·     ETCD drive: SSDs or 7.2K RPM HDDs, providing a minimum total capacity of 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Online endpoints: 70000

·     Low bit-rate image: 10000 endpoints

·     Retention period: 1 year

·     HA supported

 

Hardware requirements for deployment on VMs

Table 186 Standalone deployment in single-node mode (Unified Platform + SeerAnalyzer-TRA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 40 vCPU cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB

·     Data drive: 10 TB, with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports: 1 × 10Gbps network port

·     Online endpoints: 30000

·     Low bit-rate image: 10000 endpoints

·     Retention period: 1 year

 

Table 187 Standalone deployment in cluster mode (Unified Platform + SeerAnalyzer-TRA)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 40 vCPU cores, 2.0 GHz

·     Memory: 128 GB.

·     System drive: 1.92 TB

·     Data drive: 10 TB, with a minimum random read/write speed of 200 MB/s and a minimum IOPS of 5000. Shared storage is not supported.

·     ETCD drive: 50 GB or above. Installation path: /var/lib/etcd.

·     Network ports:

¡     1 × 10Gbps network port

¡     Inter-node interconnection network card: 10 GE or above bandwidth

·     Online endpoints: 70000

·     Low bit-rate image: 10000 endpoints

·     Retention period: 1 year

·     HA supported

 

 

NOTE:

·     TRA can be deployed on VMs. The supported hypervisors and versions are the same as those for Unified Platform. For deployment on a VM, if the host where the VM resides is enabled with hyperthreading, the number of required vCPUs is twice the number of CPU cores required for deployment on a physical server. If the host is not enabled with hyperthreading, the number of required vCPUs is the same as the number of CPU cores required for deployment on a physical server. The memory and drive resources required for deployment on a VM are the same as those required for deployment on a physical server.

·     Allocate CPU, memory, and drive resources in sizes as recommended and make sure sufficient physical resources are available for the allocation. Do not overcommit hardware resources.

·     Only H3C CAS VMs are supported. CAS VMs require local storage and the drive capacity after RAID setup must meet the requirement. Use 6 or more drives of the same model to set up local RAID.

·     Install the ETCD drive on a different physical drive than any other drives. Make sure ETCD has exclusive use of the drive where it is installed.

·     If a single system drive does not meet the requirements, you can mount the system drive partitions to different drives.

 


Hardware requirements for license server deployment

Table 188 shows the hardware requirements for license server deployment.

Table 188 Hardware requirements

CPU architecture

CPU

Memory

Drive size

NIC

Remarks

x86-64 (Intel64/AMD64)

16 cores, 2.0 GHz or above

32 GB or above

64 GB or above (system partition where the root directory resides)

Supports 1 to 10 Gbps bandwidth

Recommended configuration

This configuration supports the maximum allowed number of clients.

x86-64 (Intel64/AMD64)

4 cores, 2.0 GHz or above

4 GB or above

64 GB or above (system partition where the root directory resides)

Supports 1 to 10 Gbps bandwidth

Minimum configuration

This configuration supports up to 20 clients.

 

 

NOTE:

·     To ensure stable operation of the license server, deploy the license server on a physical server instead of a VM.

·     When the license server is deployed with the controller in a converged manner, you do not need to reserve hardware resources for the license server.

 

 


Hardware requirements for multi-scenario converged deployment

Separate hardware requirements for each component

This section describes the hardware requirements for each component (excluding Unified Platform) separately. You can calculate hardware resources required by converged deployment of multiple components based on the information in this section. For the calculation rules, see "Hardware resource calculation rules for multi-scenario converged deployment."

This section describes only hardware requirements in cluster deployment mode.

For hardware requirements of single-scenario deployment, see the following:

·     Hardware requirements for AD-Campus

·     Hardware requirements for AD-DC

·     Hardware requirements for AD-WAN

·     Hardware requirements for SeerAnalyzer (NPA/TRA/LGA)

Table 189 Hardware requirements for each component in cluster deployment mode (x86-64: Intel64/AMD64)

Components

Single-node CPU (2.1 GHz or above)

Single-node memory

Single-node drive size (after RAID setup)

Remarks

Unified Platform (OS + glusterfs + portal + kernel)

4 cores

24 GB or above

System drive: 500 GB or above

ETCD drive: 50 GB or above

--

Unified Platform (OS + glusterfs + portal + kernel) + base network management (kernel-base + kernel-network)

8 cores

48 GB or above

System drive: 1.5 TB or above

ETCD drive: 50 GB or above

--

AD-Campus

Controller engine

4 cores

16 GB or above

System drive: 200 GB or above

Maximum number of devices: 300

6 cores

24 GB or above

System drive: 300 GB or above

Maximum number of devices: 2000

8 cores

32 GB or above

System drive: 500 GB or above

Maximum number of devices: 5000

vDHCP Server

1 core

2 GB or above

--

Maximum number of allocatable IP addresses: 15000

1 core

3 GB or above

--

Maximum number of allocatable IP addresses: 50000

EIA

4 cores

16 GB or above

System drive: 200 GB or above

Maximum number of online users: 5000

4 cores

20 GB or above

System drive: 300 GB or above

Maximum number of online users: 40000

8 cores

24 GB or above

System drive: 500 GB or above

Maximum number of online users: 100000

EAD

2 cores

8 GB or above

System drive: 200 GB or above

Maximum number of online users: 5000

4 cores

12 GB or above

System drive: 200 GB or above

Maximum number of online users: 40000

4 cores

16 GB or above

System drive: 200 GB or above

Maximum number of online users: 40000

6 cores

24 GB or above

System drive: 500 GB or above

Maximum number of online users: 100000

WSM (not including Oasis)

4 cores

8 GB or above

System drive: 200 GB or above

Maximum number of online APs: 2000

4 cores

12 GB or above

System drive: 400 GB or above

Maximum number of online APs: 5000

6 cores

16 GB or above

System drive: 600 GB or above

Maximum number of online APs: 10000

6 cores

20 GB or above

System drive: 800 GB or above

Maximum number of online APs: 20000

WSM (including Oasis)

8 cores

24 GB or above

System drive: 400 GB or above

Maximum number of online APs: 2000

8 cores

28 GB or above

System drive: 600 GB or above

Maximum number of online APs: 5000

12 cores

40 GB or above

System drive: 1 TB or above

Maximum number of online APs: 10000

12 cores

52 GB or above

System drive: 1.5 TB or above

Maximum number of online APs: 20000

EPS

4 cores

6 GB or above

System drive: 100 GB or above

Maximum number of endpoints:

10000

4 cores

8 GB or above

System drive: 200 GB or above

Maximum number of endpoints:

20000

6 cores

12 GB or above

System drive: 300 GB or above

Maximum number of endpoints:

50000

8 cores

16 GB or above

System drive: 500 GB or above

Maximum number of endpoints:

100000

SMP

4 cores

10 GB or above

System drive: 100 GB or above

Maximum number of devices:

10

8 cores

16 GB or above

System drive: 200 GB or above

Maximum number of devices:

50

16 cores

32 GB or above

System drive: 500 GB or above

Maximum number of devices:

100

AD-DC

High-spec configuration

16 cores

200 GB or above

System drive: 1.0 TB or above

Maximum number of devices:

1000 (E63xx)

300 (E62xx and earlier versions)

Maximum number of servers:

20000 (E63xx)

6000 (E62xx and earlier versions)

Low-spec configuration

12 cores

108 GB or above

System drive: 1.0 TB or above

Maximum number of devices:

300 (E63xx)

100 (E62xx and earlier versions)

Maximum number of servers:

6000 (E63xx)

2000 (E62xx and earlier versions)

DTN component

4 cores

100 GB or above

--

--

Super Controller

8 cores

64 GB or above

System drive: 100 GB or above

Maximum number of sites: 32

AD-WAN carrier

Low-spec configuration

6 cores

48 GB or above

System drive: 300 GB or above

Maximum number of devices: 200

High-spec configuration

9 cores

96 GB or above

System drive: 600 GB or above

Maximum number of devices: 2000

AD-WAN branch

Low-spec configuration

4 cores

32 GB or above

System drive: 300 GB or above

Maximum number of devices: 200

High-spec configuration

8 cores

64 GB or above

System drive: 600 GB or above

Maximum number of devices: 2000

Security controller

Low-spec configuration

4 cores

32 GB or above

System drive: 100 GB or above

Maximum number of devices: 200

Maximum number of policies:

60000

High-spec configuration

8 cores

32 GB or above

System drive: 100 GB or above

Maximum number of devices: 2000

Maximum number of policies:

240000

SA-Campus

14 cores

82 GB or above

System drive: 500 GB or above

Data drive: 2 TB or above. Use 2 or more drives of the same model

2000 online users

400 devices, including switches, ACs, and APs.

14 cores

86 GB or above

System drive: 500 GB or above

Data drive: 2 TB or above. Use 2 or more drives of the same model

5000 online users

1000 devices, including switches, ACs, and APs.

15 cores

93 GB or above

System drive: 500 GB or above

Data drive: 2 TB or above. Use 2 or more drives of the same model

10000 online users

2000 devices, including switches, ACs, and APs.

16 cores

106 GB or above

System drive: 500 GB or above

Data drive: 3 TB or above. Use 2 or more drives of the same model

20000 online users

4000 devices, including switches, ACs, and APs.

19 cores

132 GB or above

System drive: 500 GB or above

Data drive: 4 TB or above. Use 3 or more drives of the same model

40000 online users

8000 devices, including switches, ACs, and APs.

21 cores

158 GB or above

System drive: 500 GB or above

Data drive: 5 TB or above. Use 4 or more drives of the same model

60000 online users

12000 devices, including switches, ACs, and APs.

26 cores

210 GB or above

System drive: 500 GB or above

Data drive: 8 TB or above. Use 6 or more drives of the same model

100000 online users

20000 devices, including switches, ACs, and APs.

SA-WAN

Low-spec configuration

10 cores

64 GB or above

System drive: 300 GB or above

Data drive: 2 TB or above. Use 2 or more drives of the same model

Maximum number of devices: 1000

High-spec configuration

12 cores

76 GB or above

System drive: 500 GB or above

Data drive: 4 TB or above. Use 3 or more drives of the same model

Maximum number of devices: 2000

SA-DC

Low-spec configuration

17 cores

102 GB or above

System drive: 500 GB or above

Data drive: 8 TB or above. Use 3 or more drives of the same model

Number of devices: 50

Number of VMs: 1000

Medium-spec configuration

20 cores

124 GB or above

System drive: 500 GB or above

Data drive: 12 TB or above. Use 5 or more drives of the same model

Number of devices: 100

Number of VMs: 2000

High-spec configuration

30 cores

180 GB or above

System drive: 500 GB or above

Data drive: 24 TB or above. Use 7 or more drives of the same model

Number of devices: 200

Number of VMs: 5000

SA-NPA

Low-spec configuration

24 cores

96 GB or above

System drive: 500 GB or above

Data drive: 8 TB or above. Use 5 or more drives of the same model

10 collectors, 80 links

Number of collectors ≤10, number of links ≤80

High-spec configuration

32 cores

224 GB or above

System drive: 500 GB or above

Data drive: 8 TB or above. Use 5 or more drives of the same model

20 collectors, 160 links

Number of collectors ≤20, number of links ≤160

SA-LGA

28 cores

224 GB or above

System drive: 500 GB or above

Data drive: 26 TB or above. Use 6 or more drives of the same model

Number of log sets < 30

Data access/data processing/ingestion into ES < 18000 eps

Responses in seconds for queries on 1 billion data entries, with an average response time within 5 seconds.

Responses in seconds for queries on 100 million data entries, with an average response time within 10 seconds for complex analysis scenarios.

SA-TRA

12 cores

64 GB or above

System drive: 500 GB or above

Data drive: 10 TB or above. Use 3 or more drives of the same model

Online endpoints: 50000

Retention period: 1 year

 

Hardware resource calculation rules for multi-scenario converged deployment

Hardware resource calculation rules for controller converged deployment

When multiple components are deployed on the same server in a converged manner, the hardware resources can be shared. Use the calculation rules in this section to calculate the hardware requirements for multi-scenario converged deployment.

Table 190 Hardware requirements for each controller

Components

Single-node CPU

Single-node memory

Single-node drive

Unified Platform

A0

B0

C0

AD-Campus

A1

B1

C1

AD-DC

A2

B2

C2

AD-WAN

A3

B3

C3

 

As shown in Table 190, use A(num), B(num), and C(num) to represent the CPU, memory, and drive resources required by each component. When multiple components are deployed in a converged manner, the required hardware resources are calculated as follows:

·     CPU=A0+A1+A2+A3

·     Memory=B0+B1+B2+B3

·     Drive=C0+C1+C2+C3

In the above formulas, A0, B0, and C0 represent the resources required by Unified Platform and are mandatory. The other components are optional. To calculate hardware requirements for converged deployment of specific components, replace the arguments in the formulas with the data for those components. For example, to calculate the CPU resources required by converged deployment of AD-Campus and AD-DC, the calculation formula is CPU=A0+A1+A2.

To obtain the data for the components, see Table 189.
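
As a minimal worked example of these formulas, the sketch below sums per-node figures taken from Table 189 for an assumed converged deployment of Unified Platform, the AD-Campus controller engine (5000-device tier), and AD-DC (high-spec tier). The dictionary layout is illustrative only, and the ETCD drive (50 GB for Unified Platform) is tracked separately as in Table 189.

# Per-node requirements from Table 189: (CPU cores, memory in GB, system drive in GB)
components = {
    "Unified Platform": (4, 24, 500),     # A0, B0, C0
    "AD-Campus":        (8, 32, 500),     # A1, B1, C1 (5000-device tier)
    "AD-DC":            (16, 200, 1000),  # A2, B2, C2 (high-spec tier)
}

cpu    = sum(c for c, _, _ in components.values())  # CPU = A0 + A1 + A2
memory = sum(m for _, m, _ in components.values())  # Memory = B0 + B1 + B2
drive  = sum(d for _, _, d in components.values())  # Drive = C0 + C1 + C2
print(cpu, memory, drive)  # 28 cores, 256 GB, 2000 GB (plus the 50 GB ETCD drive)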

Hardware resource calculation rules for analyzer converged deployment

When you deploy the analyzer products in a converged manner on Unified Platform, the following resources can be shared:

·     Unified Platform resources: All SA products share the Unified Platform resources.

·     Public analyzer resources: Table 191 shows the shared resource sizes. Analyzers in SA-Campus, SA-DC, and SA-WAN scenarios share hardware resources.

Table 191 Public analyzer resources

Components

Single-node CPU

Single-node memory

Single-node deployment

12 cores

60 GB

Cluster deployment

12 cores

32 GB

 

When the analyzers of multiple scenarios are deployed in a converged manner on the same server, you can calculate the required hardware resources by using the calculation rules in this section.

Table 192 Hardware resource calculation rules for analyzer converged deployment

Components

Single-node CPU

Single-node memory

Single-node drive

System drive

Data drive

Unified Platform

A0

B0

C0

--

SA-Campus

A1

B1

500 GB or above after RAID setup

C1

SA-DC

A2

B2

C2

SA-WAN

A3

B3

C3

SA-NPA

A4

B4

C4

SA-TRA

A5

B5

C5

SA-LGA

A6

B6

C6

 

As shown in Table 192, use A(num), B(num), and C(num) to represent the CPU, memory, and drive resources required by each component, and use N to represent the number of analyzers that share public analyzer resources (SA-Campus, SA-DC, and SA-WAN). When multiple components are deployed in a converged manner, the required hardware resources are calculated as follows:

·     In single-node deployment mode:

¡     CPU=A0+A1+A2+A3+A4+A5+A6-(N-1) × 12

¡     Memory=B0+B1+B2+B3+B4+B5+B6-(N-1) × 60

¡     Drive:

-     System drive=C0+500

-     Data drive=C1+C2+C3+C4+C5+C6

·     In cluster deployment mode:

¡     CPU=A0+A1+A2+A3+A4+A5+A6-(N-1) × 12

¡     Memory=B0+B1+B2+B3+B4+B5+B6-(N-1) × 32

¡     Drive:

-     System drive=C0+500

-     Data drive=C1+C2+C3+C4+C5+C6

To obtain the data for the components, see Table 189.
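
The following sketch applies the cluster-mode formulas above. The figures are taken from Table 189 and Table 191 for an assumed combination of SA-Campus (2000-user tier), SA-DC (low-spec tier), and SA-NPA (low-spec tier) on top of Unified Platform; only SA-Campus and SA-DC share the public analyzer resources, so N = 2. The function is a hypothetical helper, not product tooling.

def analyzer_cluster_totals(platform, analyzers, n_sharing, shared_cpu=12, shared_mem=32):
    """Cluster-mode formulas from this section:
    CPU    = A0 + sum(Ai) - (N - 1) * 12
    Memory = B0 + sum(Bi) - (N - 1) * 32
    where N counts the analyzers that share public resources
    (SA-Campus, SA-DC, and SA-WAN)."""
    a0, b0 = platform
    cpu = a0 + sum(a for a, _ in analyzers) - (n_sharing - 1) * shared_cpu
    mem = b0 + sum(b for _, b in analyzers) - (n_sharing - 1) * shared_mem
    return cpu, mem

# Unified Platform (4 cores, 24 GB) + SA-Campus (14, 82) + SA-DC (17, 102) + SA-NPA (24, 96)
print(analyzer_cluster_totals((4, 24), [(14, 82), (17, 102), (24, 96)], n_sharing=2))
# (47, 272): 47 cores and 272 GB per node. The system drive is C0 + 500 GB, and
# the data drive is the sum of the per-component data drives, as stated above.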

Converged deployment of the controller and analyzer

This section provides the hardware resource calculation rules for converged deployment of the controllers and analyzers of multiple components.

To view the hardware requirements for converged deployment of the controller and analyzer of a single component, see the following:

·     Hardware requirements for AD-Campus

·     Hardware requirements for AD-DC

·     Hardware requirements for AD-WAN

·     Hardware requirements for SeerAnalyzer (NPA/TRA/LGA)

Table 193 Hardware requirements for each controller

Components

Single-node CPU

Single-node memory

Single-node drive

Unified Platform

A0

B0

C0

Converged controller deployment

A1

B1

C1

Converged analyzer deployment

A2

B2

C2

 

As shown in Table 193, use A(num), B(num), and C(num) to represent the CPU, memory, and drive resources required by converged deployment of the controller and the analyzer. When the controllers and analyzers of multiple components are deployed in a converged manner, the required hardware resources are calculated as follows:

·     CPU=A1+A2-A0

·     Memory=B1+B2-B0

·     Drive=C1+C2-C0

For hardware requirements of converged controller deployment, see "Hardware resource calculation rules for multi-scenario converged deployment." For hardware requirements of converged analyzer deployment, see "Hardware resource calculation rules for analyzer converged deployment."
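
For illustration, reusing the per-node totals from the two sketches above (28 cores/256 GB for the assumed converged controller deployment and 47 cores/272 GB for the assumed converged analyzer deployment, with Unified Platform at 4 cores/24 GB), the per-node totals for deploying both on the same server work out as follows. The values are placeholders carried over from those sketches, not figures from this guide.

a0, b0 = 4, 24     # Unified Platform (A0, B0), counted in both converged figures
a1, b1 = 28, 256   # converged controller deployment (A1, B1), from the earlier sketch
a2, b2 = 47, 272   # converged analyzer deployment (A2, B2), from the earlier sketch

# Subtract Unified Platform once because it is included in both A1/B1 and A2/B2.
print(a1 + a2 - a0, b1 + b2 - b0)  # 71 cores, 504 GB per node; drives follow Drive = C1 + C2 - C0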

 


Hardware requirements for AD-NET appliance

The AD-NET appliance is integrated with software packages for the AD-Campus, AD-DC, and AD-WAN (branch and carrier) solutions, including the operating system, license server, Unified Platform, Campus/DC/WAN controllers, SeerAnalyzer, basic device management, EIA (including EIP), WSM, vBGP, and vDHCP. Upon startup, you only need to select a scenario, and the appliance automatically completes software installation and deployment. The AD-NET appliance reduces the time required for software download, server coordination, and software installation, greatly simplifying installation and deployment.

Hardware requirements and applicable scenarios

Table 194 Hardware requirements and applicable scenarios

Appliance model

Hardware requirements

Applicable scenarios

AD-NET C2000 G3 Appliance standard edition

·     CPU: 2 × 8-core 2.1 GHz CPUs

·     Memory: 4 × 32GB DDR4

·     System drive: 2 × 4TB HDDs in RAID 1, providing 4 TB capacity after RAID setup.

·     ETCD drive: 2 × 240GB SSDs in RAID 1, providing 240 GB capacity after RAID setup.

·     Network ports: 4 × GE copper ports

Applicable to single domain scenarios:

·     Campus scenario (2000 users in single-node deployment mode, 10000 users in three-node deployment mode)

·     DC scenario (100 devices, 2000 servers)

·     WAN scenario (200 devices)

As a best practice, use the three-node cluster deployment mode.

AD-NET C3000 G3 Appliance high-spec edition

·     CPU: 2 × 12-core 2.2 GHz CPUs

·     Memory: 6 × 32GB DDR4

·     System drive: 2 × 6TB HDDs in RAID 1, providing 6 TB capacity after RAID setup.

·     ETCD drive: 2 × 240GB SSDs in RAID 1, providing 240 GB capacity after RAID setup.

·     Network ports: 4 × GE copper ports, 2 × 2-port 10 Gbps optical ports

Applicable to single domain scenarios and multidomain scenarios.

Single domain scenarios:

·     Campus scenario (100000 users)

·     DC scenario (200 devices, 4000 servers)

·     WAN scenario (2000 devices)

Multidomain scenarios:

·     Small-scale deployment of campus and DC in a converged manner (campus: 5000 users, DC: 100 devices and 2000 servers).

·     Small-scale deployment of campus and WAN in a converged manner (campus: 5000 users, WAN: 200 devices).

Evaluate the management scales of multidomain scenarios based on the hardware configuration guide.

As a best practice, use the three-node cluster deployment mode.

AD-NET A3000 G3 Appliance standard edition

·     CPU: 2 × 12-core 2.2 GHz CPUs

·     Memory: 8 × 32GB DDR4

·     System drive: 2 × 4TB HDDs in RAID 1, providing 4 TB capacity after RAID setup.

·     ETCD drive: 2 × 240GB SSDs in RAID 1, providing 240 GB capacity after RAID setup.

·     Data drive: 6 × 1.2TB HDDs

·     Network ports: 4 × GE copper ports, 2 × 2-port 10 Gbps optical ports

Standalone deployment of analyzer:

·     Analyzer deployment in single-node mode or three-node cluster mode.

·     Deployment of analyzer and controller in 3+1 or 3+3 mode. The analyzer acts as the worker node of the controller cluster.

Evaluate the management scales based on the analyzer information in the scenario sections.

AD-NET X5000 G3 Appliance

·     CPU: 2 × 16-core 2.1 GHz CPUs

·     Memory: 12 × 32GB

·     System drive: 2 × 6TB HDDs in RAID 1, providing 6 TB capacity after RAID setup.

·     ETCD drive: 2 × 240GB SSDs in RAID 1, providing 240 GB capacity after RAID setup.

·     Data drive: 10 × 1.2TB HDDs

·     Network ports: 4 × GE copper ports, 2 × 2-port 10 Gbps optical ports

Unified management, control, and analytics in single domain scenarios:

·     Campus scenario: Converged deployment of controller and analyzer

·     WAN scenario: Converged deployment of controller and analyzer

Multidomain scenario + analyzer:

·     Three-node cluster deployment.

·     Deployment of analyzer and controller in 3+1 or 3+3 mode. The analyzer acts as the worker node of the controller cluster.

Evaluate the management scales of multidomain scenarios based on the hardware configuration guide.

 

 


Appendix

Hardware requirements for SeerAnalyzer history versions

Hardware requirements for SeerAnalyzer E61xx

SeerAnalyzer is deployed based on Unified Platform. As a best practice, deploy SeerAnalyzer on physical servers. Two deployment modes are available. For more information, see Table 195.

Table 195 Deployment description

Deployment mode

Number of required servers

Deployment description

Single-node mode

1

Deploy Unified Platform on a master node, and deploy SeerAnalyzer on Unified Platform.

Use the single-node deployment mode only when the network size is small and HA is not required.

Cluster mode

≥3

Deploy Unified Platform on three master nodes.

·     Three-node cluster deployment mode: Deploy Unified Platform, controller, and SeerAnalyzer in a converged manner on three master nodes.

·     3+N cluster deployment mode: Deploy SeerAnalyzer on worker nodes. The following scenarios are available:

¡     3+1 mode: Deploy Unified Platform and controller in a converged manner on three master nodes, and deploy SeerAnalyzer on one worker node.

¡     3+3 mode: Deploy Unified Platform and controller in a converged manner on three master nodes, and deploy SeerAnalyzer on three worker nodes.

 

The service load supported by SeerAnalyzer varies by scenario and network size. The main traffic source is network service applications. SeerAnalyzer can be deployed alone or together with other components in a converged manner. This section describes only the service load supported by SeerAnalyzer in scenarios where Unified Platform and the scenario components are deployed. Service loads supported by the scenario components must be evaluated separately. Different service loads require different hardware configurations. Table 196 and Table 197 show the hardware requirements for Campus. Table 198 and Table 199 show the hardware requirements for DC. Table 200 and Table 201 show the hardware requirements for WAN. Table 202 and Table 203 show the hardware requirements for NPA. Table 204 and Table 205 show the hardware requirements for LGA. Table 206 shows the hardware requirements for TRA.

 

IMPORTANT

IMPORTANT:

·     To deploy Unified Platform on a server, the server must use x86-64 (Intel64/AMD64) CPUs, SAS/SATA HDDs or SSDs, and a CentOS 7.6 or higher operating system. As a best practice, configure RAID 5. The RAID controller must have 1 GB or more of cache and support a power-fail safeguard module.

·     CPU models supported by SeerAnalyzer vary by SeerAnalyzer version. For more information, see the release documentation.

·     With the same total capacity, the more drives, the better the read and write performance. For example, six 2 TB drives provide better read and write performance than three 4 TB drives.

·     To use the TCP flow analysis and INT flow analysis features of SeerAnalyzer, you must deploy SeerCollector. See Table 207.

·     In NPA scenarios, NPA traffic collectors must be deployed on independent servers. For more information, see H3C NPA Traffic Collector Deployment Guide.

 

Hardware requirements for Campus scenarios

Table 196 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in Campus scenarios (single-node deployment)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: x86-64, 16 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     2000 online users

·     400 switches, ACs, and APs in total

1

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     5000 online users

·     1000 switches, ACs, and APs in total

1

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     10000 online users

·     2000 switches, ACs, and APs in total

1

·     CPU: x86-64, 24 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     20000 online users

·     4000 switches, ACs, and APs in total

1

·     CPU: x86-64, 40 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     40000 online users

·     8000 switches, ACs, and APs in total

1

·     CPU: x86-64, 48 physical cores, 2.0 GHz

·     Memory: 384 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 6 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     60000 online users

·     12000 switches, ACs, and APs in total

1

·     CPU: x86-64, 48 physical cores, 2.0 GHz

·     Memory: 512 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB after RAID setup. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Table 197 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in Campus scenarios (cluster deployment)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: x86-64, 16 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     2000 online users

·     400 switches, ACs, and APs in total

3

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     5000 online users

·     1000 switches, ACs, and APs in total

3

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     10000 online users

·     2000 switches, ACs, and APs in total

3

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     20000 online users

·     4000 switches, ACs, and APs in total

3

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB

·     System drive: 3 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     40000 online users

·     8000 switches, ACs, and APs in total

3

·     CPU: x86-64, 24 physical cores, 2.0 GHz

·     Memory: 384 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 12 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     60000 online users

·     12000 switches, ACs, and APs in total

3

·     CPU: x86-64, 40 physical cores, 2.0 GHz

·     Memory: 384 GB.

·     System drive: 3 TB after RAID setup

·     Data drive: 18 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Hardware requirements for DC deployment

Table 198 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in DC scenarios (single-node deployment)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: x86-64, 20 physical cores, 2.2 GHz

·     Memory: 256 GB.

·     System drive: 2TB after RAID setup. As a best practice, configure RAID 1. Use SSDs or SATA/SAS HDDs.

·     ETCD drive: 50 GB after RAID setup. As a best practice, use SSDs in RAID 1. Installation path: /var/lib/etcd.

·     Data drive: 8 TB after RAID setup. As a best practice, configure RAID 3. 5 or more drives of the same model are required.

·     RAID controller: 1GB cache, powerfail safeguard supported with a supercapacitor installed.

·     Network ports (bonding mode): 4 × 10 Gbps network ports, each two ports forming a Linux bonding interface.

50

Not supported

Applicable service load: 50 switches, flow processing not supported.

 

Table 199 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in DC scenarios (cluster deployment)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: x86-64, 24 physical cores, 2.2 GHz

·     Memory: 256 GB.

·     System drive: 1.92TB after RAID setup. As a best practice, configure RAID 1. Use SSDs or SATA/SAS HDDs.

·     Data drive: 8 TB after RAID setup. As a best practice, configure RAID 3. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB after RAID setup.  As a best practice, configure RAID 1. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: x86-64, 32 physical cores, 2.2 GHz

·     Memory: 256 GB.

·     System drive: 1.92TB after RAID setup. As a best practice, configure RAID 1. Use SSDs or SATA/SAS HDDs.

·     Data drive: 16 TB after RAID setup. As a best practice, configure RAID 5. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB after RAID setup.  As a best practice, configure RAID 1. Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

100

2000 VMs, 4000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: x86-64, 40 physical cores, 2.2 GHz

·     Memory: 384 GB

·     System drive: 1.92TB after RAID setup. As a best practice, configure RAID 1. Use SSDs or SATA/SAS HDDs.

·     ETCD drive: 50 GB after RAID setup.  As a best practice, configure RAID 1. Installation path: /var/lib/etcd.

·     Data drive: 24 TB after RAID setup. As a best practice, configure RAID 5. 7 or more drives of the same model are required.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

200

5000 VMs, 10000 TCP flows/sec.

2 TCP flows for each VM per second.

 

 

NOTE:

You can calculate the overall TCP flow load based on the total number of VMs in the data center, and then calculate the required hardware configuration on the basis of 2 TCP flows per second for each VM (see the worked example after this note).

 
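A quick worked example of this rule (the helper name is illustrative only), matching the cluster tiers in Table 199:

def tcp_flows_per_second(vm_count, flows_per_vm=2):
    """Estimate the analyzer's TCP flow load from the VM count,
    using the rule of 2 TCP flows per VM per second."""
    return vm_count * flows_per_vm

print(tcp_flows_per_second(1000))  # 2000 flows/sec  (50-device tier)
print(tcp_flows_per_second(2000))  # 4000 flows/sec  (100-device tier)
print(tcp_flows_per_second(5000))  # 10000 flows/sec (200-device tier)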

Hardware requirements for WAN deployment

In WAN scenarios, the analyzer cannot be deployed alone. It must be deployed together with the controller in a converged manner. Table 200 shows the hardware requirements for 3+1 cluster deployment mode, and Table 201 shows the hardware requirements for 3+3 cluster deployment mode. In 3+1 or 3+3 cluster deployment mode, the controller is deployed on three master nodes, and the analyzer is deployed on one or three worker nodes.

Table 200 Hardware requirements for a worker node in 3+1 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: x86-64, 24 physical cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

< 1000000 NetStream flows per minute

 

Table 201 Hardware requirements for a worker node in 3+3 cluster deployment mode (Unified Platform + SeerAnalyzer)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 192 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 4 TB after RAID setup. Three or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

< 1000000 NetStream flows per minute

3

·     CPU: x86-64, 20 physical cores, 2.0 GHz

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 8 TB after RAID setup. Five or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

< 3000000 NetStream flows per minute

 

Hardware requirements for NPA deployment

Table 202 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in NPA scenarios (single-node deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 32 cores, 2 × 16-core 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 14 TB after RAID setup. 8 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

15 NPA collectors:

·     Up to 5000000 NetStream flows every 5 minutes.

·     20 services, 60 links

NPA collector node

≤ 15

·     CPU: 20 cores (2 × 10 cores, recommended: Intel(R) Xeon(R) CPU E5-2620 v4 or Intel(R) Xeon(R) CPU E5-2630 v4).

·     Memory: 128 GB.

·     Drive:

¡     System drive: 2 × 300 GB drives in RAID 1

¡     Data drive: 10 × 4 TB drives in RAID 5

¡     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed. Recommended: UN-RAID-2000-M2 or UN-HBA-1000-M2.

·     Network card: 4-port GE copper or 2-port SFP+ 10 Gbps optical network card that supports DPDK. Recommended: Intel I350 and Intel 82599.

20Gbps traffic

 

Table 203 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in NPA scenarios (cluster deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 32 cores, 2 × 16-core 2.0 GHz CPUs

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 14 TB after RAID setup. 8 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

40 NPA collectors:

·     Up to 15000000 NetStream flows every 5 minutes.

·     60 services, 180 links

NPA collector node

≤ 40

·     CPU: 20 cores (2 × 10 cores, recommended: Intel(R) Xeon(R) CPU E5-2620 v4 or Intel(R) Xeon(R) CPU E5-2630 v4).

·     Memory: 128 GB.

·     Drive:

¡     System drive: 2 × 300 GB drives in RAID 1

¡     Data drive: 10 × 4 TB drives in RAID 5

¡     RAID controller: 2GB cache, powerfail safeguard supported with a supercapacitor installed. Recommended: UN-RAID-2000-M2 or UN-HBA-1000-M2.

·     Network card: 4-port GE copper or 2-port SFP+ 10 Gbps optical network card that supports DPDK. Recommended: Intel I350 and Intel 82599.

20 Gbps traffic

 

Hardware requirements for LGA deployment

Table 204 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in LGA scenarios (single-node deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 32 cores, 2.0 GHz.

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 21 TB after RAID setup. 10 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Number of log sets < 10

·     Data access/data processing/ingestion into ES < 10000 eps

·     Responses in seconds for queries on 1 billion data entries, with an average response time within 10 seconds.

·     Responses in seconds for queries on 100 million data entries, with an average response time within 10 seconds for complex analysis scenarios.

 

Table 205 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in LGA scenarios (cluster deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 32 cores, 2.0 GHz.

·     Memory: 256 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 21 TB after RAID setup. 10 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Number of log sets < 30

·     Data access/data processing/ingestion into ES < 18000 eps

·     Responses in seconds for queries on 1 billion data entries, with an average response time within 10 seconds.

·     Responses in seconds for queries on 100 million data entries, with an average response time within 5 seconds for complex analysis scenarios.

 

Hardware requirements for TRA deployment

Table 206 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in TRA scenarios (single-node deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 16 cores, 2.0 GHz.

·     Memory: 128 GB.

·     System drive: 1.92 TB after RAID setup

·     Data drive: 10 TB after RAID setup. 2 or more drives of the same model are required.

·     Network ports:

¡     Non-bonding mode: 1 × 10Gbps Ethernet port.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps network ports, forming a Linux bonding interface.

·     Online endpoints: 50000

·     Low bit-rate image: 10000 endpoints

·     Retention period: 1 year

 

Hardware requirements for collector deployment

Table 207 Hardware requirements for collector deployment

Recommended configuration

Maximum resources that can be managed

·     CPU: Intel(R) Xeon(R) Scalable processor (Platinum or Gold series recommended), 2.2 GHz or above, 20 or more virtual cores

·     Memory: 128 GB.

·     System drive: 2 × 600 GB SSDs or SAS HDDs in RAID 1

·     Network ports: One 10 Gbps collector network port and one 10 Gbps management network port. The collector network port does not support bonding; the management network port supports bonding.

Service volume: 50000 TCP flows/sec

A TCP flow refers to session data that has been transmitted over three hops.

 

Hardware requirements for SeerAnalyzer E23xx

SeerAnalyzer is deployed based on Unified Platform. As a best practice, deploy SeerAnalyzer on physical servers. Two deployment modes are available. For more information, see Table 208.

Table 208 Deployment description

Deployment mode

Number of required servers

Deployment description

Single-node mode

1

Deploy Unified Platform on a master node, and deploy SeerAnalyzer on Unified Platform.

Use the single-node deployment mode only when the network size is small and HA is not required.

Cluster mode

≥3

Deploy Unified Platform on three master nodes.

·     Three-node cluster deployment mode: Deploy Unified Platform, controller, and SeerAnalyzer in a converged manner on three master nodes.

·     3+N cluster deployment mode: Deploy SeerAnalyzer on worker nodes. The following scenarios are available:

¡     3+1 mode: Deploy Unified Platform and controller in a converged manner on three master nodes, and deploy SeerAnalyzer on one worker node.

¡     3+3 mode: Deploy Unified Platform and controller in a converged manner on three master nodes, and deploy SeerAnalyzer on three worker nodes.

 

The service load supported by SeerAnalyzer varies by scenario and network size. The main traffic source is network service applications. SeerAnalyzer can be deployed alone or converged with other components. This section describes only the service load supported by SeerAnalyzer in scenarios where Unified Platform and the scenario components are deployed together; evaluate the service loads supported by the scenario components separately. Different service loads require different hardware configurations. Table 209 and Table 210 show the hardware requirements for Campus, Table 211 shows the hardware requirements for DC, and Table 212 and Table 213 show the hardware requirements for WAN.

 

IMPORTANT:

·     To deploy Unified Platform on a server, the server must use x86-64 (Intel64/AMD64) CPUs, SAS/SATA HDDs or SSDs, and a CentOS 7.6 or higher operating system. As a best practice, configure RAID 5. The RAID controller must have 1 GB or more of cache and support a power fail safeguard module.

·     With the same total capacity, the more drives, the better the read and write performance. For example, six 2 TB drives provide better read and write performance than three 4 TB drives (see the capacity sketch below).

·     To use the TCP flow analysis and INT flow analysis features of SeerAnalyzer, you must deploy SeerCollector. See Table 214.
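The drive-count guidance above can be made concrete with standard RAID 5 arithmetic, in which usable capacity is one drive short of the total. The sketch below compares the six 2 TB and three 4 TB configurations from the note; it is a generic illustration of RAID 5 math, not a tool shipped with the solution.

```python
# Illustrates the RAID 5 sizing point in the note above: with the same
# raw capacity, more (smaller) drives leave more usable space and spread
# I/O across more devices. Standard RAID 5 math: usable = (n - 1) * size.

def raid5_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    """Usable capacity of a RAID 5 set: one drive's worth is consumed by parity."""
    if drive_count < 3:
        raise ValueError("RAID 5 requires at least 3 drives")
    return (drive_count - 1) * drive_size_tb

# Six 2 TB drives vs. three 4 TB drives: the same 12 TB raw capacity,
# but the six-drive set yields more usable space and better performance.
print(raid5_usable_tb(6, 2.0))   # 10.0 TB usable
print(raid5_usable_tb(3, 4.0))   # 8.0 TB usable
```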

 

Hardware requirements for Campus

Table 209 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in Campus scenarios (single-node deployment)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 16 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     2000 online users

·     400 switches, ACs, and APs in total

1

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     5000 online users

·     1000 switches, ACs, and APs in total

1

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     10000 online users

·     2000 switches, ACs, and APs in total

1

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     20000 online users

·     4000 switches, ACs, and APs in total

1

·     CPU: 40 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     40000 online users

·     8000 switches, ACs, and APs in total

1

·     CPU: 48 cores (total physical cores), 2.0 GHz

·     Memory: 384 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 6 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     60000 online users

·     12000 switches, ACs, and APs in total

1

·     CPU: 48 cores (total physical cores), 2.0 GHz

·     Memory: 512 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     100000 online users

·     20000 switches, ACs, and APs in total
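To choose a row in Table 209, find the smallest tier whose online-user and device ceilings both cover the planned load. The sketch below simply encodes the tier boundaries from the table and returns the matching tier; it is only a convenience for reading the table. Any load that exceeds the last tier should be sized against the cluster configurations in Table 210 instead.

```python
# Picks the smallest single-node tier from Table 209 that covers a
# planned Campus load. The tier list is copied from the table; the
# selection logic itself is only a reading aid, not part of the product.

# (max online users, max switches/ACs/APs) per Table 209 row
CAMPUS_SINGLE_NODE_TIERS = [
    (2000, 400), (5000, 1000), (10000, 2000), (20000, 4000),
    (40000, 8000), (60000, 12000), (100000, 20000),
]

def minimum_tier(online_users: int, devices: int):
    """Return the first tier that covers both the user and device counts."""
    for users_cap, devices_cap in CAMPUS_SINGLE_NODE_TIERS:
        if online_users <= users_cap and devices <= devices_cap:
            return users_cap, devices_cap
    return None   # exceeds single-node limits; see the cluster table (Table 210)

print(minimum_tier(8000, 1500))    # (10000, 2000)
print(minimum_tier(120000, 5000))  # None
```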

 

Table 210 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in Campus scenarios (cluster deployment)

Node configuration

Maximum resources that can be managed

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 16 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     2000 online users

·     400 switches, ACs, and APs in total

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 2 TB after RAID setup. 2 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     5000 online users

·     1000 switches, ACs, and APs in total

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 3 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     10000 online users

·     2000 switches, ACs, and APs in total

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 4 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     20000 online users

·     4000 switches, ACs, and APs in total

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     40000 online users

·     8000 switches, ACs, and APs in total

3

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 384 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 6 TB after RAID setup. 6 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     60000 online users

·     12000 switches, ACs, and APs in total

3

·     CPU: 40 cores (total physical cores), 2.0 GHz

·     Memory: 384 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

·     100000 online users

·     20000 switches, ACs, and APs in total

 

Hardware requirements for DC deployment

Table 211 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in DC scenarios (cluster deployment)

Node configuration

Maximum number of devices

Maximum number of TCP connections

Remarks

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 24 cores (total physical cores), 2.2 GHz

·     Memory: 256 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 8 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

20

1000 VMs, 2000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: 32 cores (total physical cores), 2.2 GHz

·     Memory: 256 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 16 TB after RAID setup. 5 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

50

2000 VMs, 4000 TCP flows/sec.

2 TCP flows for each VM per second.

3

·     CPU: 40 cores (total physical cores), 2.2 GHz

·     Memory: 384 GB

·     System drive: 2.4 TB after RAID setup

·     Data drive: 24 TB after RAID setup. 7 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

100

5000 VMs, 10000 TCP flows/sec.

2 TCP flows for each VM per second.

 

 

NOTE:

You can calculate the overall TCP flow size based on the total number of VMs in the data center, and calculate the required hardware configuration on the basis of 2 TCP flows per second for each VM, as shown in the sketch below.
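As a reading aid, the sketch below applies the 2-flows-per-VM-per-second rule from this note and maps the result onto the Table 211 tiers. It simply restates the note and the table, and real traffic patterns may deviate from the per-VM estimate.

```python
# Applies the sizing rule from the note above (2 TCP flows per VM per
# second) and matches the result against the Table 211 cluster tiers.
# The tier list is copied from Table 211; everything else is a reading aid.

FLOWS_PER_VM_PER_SEC = 2   # per the note above

# (max VMs, max TCP flows/sec) per Table 211 row
DC_CLUSTER_TIERS = [(1000, 2000), (2000, 4000), (5000, 10000)]

def dc_tier_for(vm_count: int):
    """Return the smallest Table 211 tier that covers the VM count and flow rate."""
    flow_rate = vm_count * FLOWS_PER_VM_PER_SEC
    for max_vms, max_flows in DC_CLUSTER_TIERS:
        if vm_count <= max_vms and flow_rate <= max_flows:
            return max_vms, max_flows
    return None   # beyond the largest tier listed in Table 211

print(dc_tier_for(1500))   # (2000, 4000): 3000 flows/sec
print(dc_tier_for(8000))   # None
```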

 

Hardware requirements for WAN deployment

Table 212 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in WAN scenarios (single-node deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

1

·     CPU: 24 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 4 TB after RAID setup. 3 or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd.

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

Up to 300000 NetStream flows every 5 minutes.

 

Table 213 Hardware requirements for deployment of Unified Platform and SeerAnalyzer in WAN scenarios (cluster deployment)

Node configuration

Applicable service load

Node name

Node quantity

Minimum single-node requirements

Analyzer node

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 4 TB after RAID setup. Three or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

Up to 300000 NetStream flows every 5 minutes.

3

·     CPU: 20 cores (total physical cores), 2.0 GHz

·     Memory: 256 GB.

·     System drive: 2.4 TB after RAID setup

·     Data drive: 8 TB after RAID setup. Five or more drives of the same model are required.

·     ETCD drive: 50 GB (after RAID setup). Installation path: /var/lib/etcd

·     Network ports:

¡     Non-bonding mode: 2 × 10Gbps Ethernet ports.

¡     Bonding mode (recommended mode: mode 2 or mode 4): 4 × 10 Gbps network ports, each two forming a Linux bonding interface.

Up to 1000000 NetStream flows every 5 minutes.
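Table 212 and Table 213 express load as NetStream flows per 5 minutes. If your exporters are characterized in flows per second instead, the sketch below converts the rate and classifies it against the 300000 and 1000000 flows/5 min ceilings. The conversion is simple arithmetic, and the input rate is an assumption you provide.

```python
# Converts a NetStream export rate into the flows-per-5-minutes unit
# used by Table 212 and Table 213, then checks it against the 300000
# (single node / smaller cluster) and 1000000 (larger cluster) ceilings.

SMALL_LIMIT = 300_000     # Tables 212/213: smaller configurations
LARGE_LIMIT = 1_000_000   # Table 213: larger cluster configuration

def wan_sizing(netstream_flows_per_sec: float) -> str:
    """Classify a NetStream load against the WAN analyzer tables."""
    flows_per_5_min = netstream_flows_per_sec * 300   # 5 minutes = 300 seconds
    if flows_per_5_min <= SMALL_LIMIT:
        return "fits the 300000 flows/5 min configurations"
    if flows_per_5_min <= LARGE_LIMIT:
        return "requires the 1000000 flows/5 min cluster configuration"
    return "exceeds the loads listed in Table 212 and Table 213"

print(wan_sizing(800))    # 240000 flows/5 min -> smaller configurations
print(wan_sizing(2500))   # 750000 flows/5 min -> larger cluster
```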

 

Hardware requirements for collector deployment

Table 214 Hardware requirements for collector deployment

Item

Recommended configuration

CPU

Intel(R) Xeon(R) Scalable processors (as a best practice, use the Platinum or Gold series), 2.2 GHz or above, 20 or more virtual cores

Memory

128 GB or above

Drive

System drive: 2 × 600 GB SSDs or SAS HDDs in RAID 1.

Network ports

One 10 Gbps collector network port and one 10 Gbps management network port. The collector network port does not support bonding, and the management network port supports bonding.

 
