H3C EAD Installation and Deployment Guide-E73XX-5W100


Contents

Overview
    Introduction
    Terms
Installation workflow
Prerequisites
    Software and hardware requirements
        Hardware requirements
        Operating system requirements
        Scanner configuration requirements
        Client requirements
    Pre-installation checklist
    Obtaining the software packages
    Verifying software packages
Prepare for the installation
    Plan disk partitions
    Installing the operating system and software dependencies
Installing Matrix
    Uploading the Matrix installation package
    Editing the configuration file as a non-root user
    Installing Matrix
    (Optional.) Configuring SSH
        Modifying the SSH service port number
        Configuring password-free SSH login
Deploying Unified Platform
    Pre-deployment check
    Creating a Matrix cluster
        Logging in to Matrix
        Configuring cluster parameters
        Creating a cluster
    Deploying Unified Platform applications
        Deploying the Unified Platform Base application package
Deploy EAD
    Log in to Matrix
    Upload the installation packages
    Select applications
    Select installation packages
    Configure resources
    Configure parameters
Deploy scanners
    Windows scanner installation procedure (active scanner)
    Linux scanner installation procedure (active scanner)
    (Optional.) Linux passive scanner installation procedure
Log in to Unified Platform
Obtain licensing
Backup & restoration
Uninstall components
    Uninstall components on the convergence deployment page of Matrix
    Uninstall a Windows scanner
    Uninstall a Linux passive scanner
    Uninstall a Linux scanner
Expand
    Expand single-host mode to cluster mode
FAQ
    What should I do upon a component deployment or upgrade failure?
    Matrix
        How can I configure the aging timer for the master nodes in the Matrix cluster?
        What should be done if the ETCDINSTALL phase takes too long during the scale-out of Matrix?
        What should I do if the page cannot be accessed after Matrix installation?
        What should I do if adding a node to Matrix fails?
        What should I do if Matrix deployment fails?
        How can I switch to the dual stack mode in Matrix?
        How can I enable Unified Platform application services on Matrix?
    Common browser issues
        How can I access the Matrix page through mapped IP address?

 


Overview

Introduction

This document describes how to install, log in to, upgrade, register, and uninstall the Endpoint Admission Defense (EAD) product. The main modules are as follows:

·     EAD endpoint intelligent access: Provides unified network access policies for managing wired, wireless, and VPN networks for enterprises. It offers precise network access control for employees, visitors, and device administrators based on user role, device type, access time and location. This ensures seamless execution of endpoint security policies across the network, meeting the requirements of unified operation and management for multiple network access types, endpoint types, and user roles in an enterprise.

·     EAD endpoint compliance management: This module enables endpoints to securely access the network. By integrating network access control and endpoint security products, this solution enables security clients, security policy servers, and network devices to collaborate with third-party software products. In this way, the solution enforces security policies on endpoints accessing the network and strictly controls the network access behaviors of endpoint users, enhancing the active defense capabilities of endpoints. This solution provides an efficient, easy-to-use management tool and method for enterprise network administrators.

·     EAD endpoint profiling system: A system designed to probe, identify, and monitor all endpoints in the network. It uses scanners to automatically identify endpoint types, operating systems, and other information by scanning endpoints in the network at configured time intervals or cycles. In this way, EAD EPS promptly detects and identifies new or abnormal endpoints, and then marks them for administrator approval. This strengthens endpoint management and monitoring, enhancing network security and reducing potential risks.

·     EAD endpoint behavior audit: A system designed to help organizations efficiently manage endpoint user behaviors. The system comprehensively monitors all endpoint user operations. It effectively tracks network resource usage and sensitive information propagation, so you can accurately assess endpoint security status. The system instantly detects policy violations, triggers real-time alarms, and logs events. It locates and analyzes security incidents and provides reliable forensic evidence for investigations.

·     EAD endpoint data management: This module uses advanced deep content identification technology and comprehensive channel control measures to effectively prevent intentional or accidental leakage of users' critical business data or information assets in ways that violate management policies. This module enables organizations to manage and protect data based on its importance, preventing leaks of critical information assets and avoiding immeasurable risks and losses.

Terms

·     Matrix: Kubernetes-based platform that orchestrates and schedules Docker containers. On this platform, you can build Kubernetes clusters, deploy microservices, and implement O&M monitoring of systems, Docker containers, and microservices.

·     Master node: Master node in a cluster. A cluster must include three master nodes (only one master node is needed in single-host mode). The cluster automatically elects one master node as the primary master node, while the others act as secondary master nodes. When the primary master node in the cluster fails, the system elects a new primary master node from the secondary master nodes. The new node takes over the services of the original primary master node to prevent service interruption. The primary master node and secondary master node operate as follows:

¡     Primary master node: Manages and monitors all nodes in the cluster. The northbound service VIP is assigned to the primary master node, and all master nodes jointly provide service support.

¡     Secondary master node: Only provides support for service running. Secondary nodes do not manage or monitor other nodes.

·     Worker node: Service node in the cluster, which is optional. Worker nodes only process services and do not participate in primary master node election. You can add worker nodes to expand the cluster when the master nodes reach resource or performance bottlenecks in one of the following situations:

¡     The CPU or memory usage is high.

¡     Service response is slow.

¡     The number of pods on a node approaches or reaches the upper limit of 300.

To confirm whether you can add worker nodes, see the release notes for your product version.

·     Scanner: A scanner scans endpoints in a specified gateway or network segment based on configured policies. A scanner discovers and identifies endpoints, and then reports the scan results to the scanner engine. EAD endpoint profiling system compares the reported scan results with the existing baselines to update the endpoint's compliance status and online status.

Multiple scanner modes are supported: active scan, passive scan, and converged scan. These modes meet the requirements of different scenarios:

¡     Active scan: The scanner actively sends probe packets to endpoints and identifies them based on their responses.

¡     Passive scan: The scanner does not actively send probe packets to endpoints. Instead, it captures mirrored packets on the configured NIC and extracts feature values for endpoint identification. The destination endpoint remains unaffected.

¡     Converged scan: The scanner performs both active and passive scans simultaneously. The system reports the optimal identification results from both methods.


Installation workflow

Install EAD on the convergence deployment page of Matrix. The following table describes the detailed installation workflow.

Table 1 Installation workflow

| Step | Task | Remarks |
| --- | --- | --- |
| 1. Prepare servers | In single-host deployment mode, prepare one server. In cluster deployment mode, prepare a minimum of three servers. | Required. See "Software and hardware requirements" for hardware and software requirements. |
| 2. Install the Matrix cluster | Install the operating system and the Matrix cluster on the servers. | Required. For more information, see H3C Unified Platform Deployment Guide. |
| 3. Deploy Unified Platform | Deploy Unified Platform. | Required. For more information, see H3C Unified Platform Deployment Guide. |
| 4. Deploy the EAD component on the convergence deployment page of Matrix | Obtain the software packages. | Required. |
| | Log in to Matrix. | Required. |
| | Upload the installation packages. | Required. On the convergence deployment page, upload and deploy the product installation package. |
| | Select applications. | Required. |
| | Select installation packages. | Required. |
| | Configure resources. | Required. |
| | Configure parameters. | Required. |
| | Deploy. | Required. |

 


Prerequisites

EAD includes multiple functional modules such as endpoint intelligent access (EIA), endpoint compliance management, and endpoint profiling system (EPS). Each module can be deployed separately or in convergence mode with others based on the functional requirements. In convergence deployment mode, you must evaluate the hardware resource usage.

Software and hardware requirements

Hardware requirements

 

NOTE:

As a best practice to ensure stable operation of the whole system, do not deploy the components on VMs.

 

EAD supports single-host and cluster deployment modes. As a best practice, deploy a three-host cluster. Use the resource calculator to calculate the specific hardware configuration requirements.

Operating system requirements

EAD runs on Unified Platform. Before you deploy EAD, you must first install Unified Platform. For the operating system installation procedure, see H3C Unified Platform Operating System Installation Guide. The supported operating system installation packages are listed in the following table.

Table 2 Operating system

| Installation file name | Feature description | Access path | Remarks |
| --- | --- | --- | --- |
| NingOS-version.iso | H3C NingOS operating system | Download the H3C-developed operating system from the following path on the H3C official website: Homepage > Support > Resource Center > Software Download > Network Operations & Management > Intelligent Management Center 7 > iMC PLAT (Intelligent Management Platform). | Required. Select one of the three operating systems. For more information about compatible operating system versions, see the release notes for the product. |
| Kylin-Server-version.iso | Kylin operating system | Obtain the package yourself. | |
| TencentOS-Server-version.iso | TencentOS operating system | Obtain the package yourself. | |

 

Table 3 Operating system check items

| Item | Requirements |
| --- | --- |
| NTP check | Make sure the system time has been configured. As a best practice, configure NTP for time synchronization and make sure the whole network uses the same clock source. |
| Server and operating system compatibility | To view the compatibility matrix between H3C servers and operating systems, access http://www.h3c.com/cn/home/qr/default.htm?id=367. |

 

Scanner configuration requirements

Deploy the scanners on a separate server, which can be a physical server or virtual machine.

Minimum hardware requirements

Table 4 Scanner configuration requirements

| Number of managed nodes | Time for scanning | CPU (2.5 GHz or above) | Memory | Disk size |
| --- | --- | --- | --- | --- |
| <=3000 | <=500 endpoints: 10 min; <=3000 endpoints: 30 min | 4-core CPU | 8 GB | 80 GB |

 

Operating system

Windows

·     Windows Server 2008 with Service Pack 2

·     Windows Server 2008 R2 with Service Pack 1

·     Windows Server 2012 with patch KB2836988 (64-bit)

·     Windows Server 2012 R2 (64-bit)

·     Windows Server 2016

·     Windows Server 2019

·     Windows Server 2022

 


CAUTION:

Install the required patches on Windows. For more information, see the restrictions and cautions.

 

Linux

·     Red Hat Enterprise Linux Server 7.0 (64-bit)

·     Linx Rocky 6.0.80

·     Kylin V10 (SP1, SP2, SP3)

·     NingOS-V3-1.0.2403

·     Kylin V10 (SP1, SP2, SP3) (ARM)

·     NingOS-V3-1.0.2403(ARM)

·     UnionTech operating system (UOS) (ARM)

Plan IP addresses

Make sure the scanners can reach the IP addresses of endpoints and the address of the EPS server (the northbound service VIP of Unified Platform).
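For example, you can run a quick connectivity check from the scanner server before deployment. This is an illustrative sketch; replace the placeholders with the real addresses from your plan:

[root@scanner ~]# ping -c 3 <endpoint-ip>

[root@scanner ~]# ping -c 3 <northbound-service-vip>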

Gateway compatibility matrix

Table 5 Gateway compatibility matrix

| Type | Model |
| --- | --- |
| Switch | H3C S3100-8TP-PWR-EI, H3C S5130-52S-PWR-EI, H3C S7502E-XS, Cisco 3560-24TS, Cisco 3560-48TS, Cisco 3560G-48PS, Huawei Quidway S5700-28C-EI, HP A5500-48G-PoE+ E |

 

Passive scanner configuration requirements

 

NOTE:

·     As a best practice, deploy the passive scanner and active scanner on the same server.

·     When you deploy the passive scanner on a separate server, make sure the configuration requirements in this section are met.

 

Table 6 Passive scanner configuration requirements

| Platform | Operating system and kernel version |
| --- | --- |
| Linux platform (x86_64 architecture) | RHEL 7.0-7.6, 7.8-7.9, 8.0-8.3; NeoKylin (kernel 3.10.0-862.9); Linx Rocky (kernel 4.9.0); NingOS-V3-1.0.2403 (kernel 5.10.0) |
| Linux platform (ARM architecture) | Kylin (kernel versions 4.4.58, 4.4.131, and 4.19.90-11); UnionTech UOS (kernel 4.19.0, ARM 64) |

 

Client requirements

Table 7 Client requirements

| Operating system | Minimum hardware requirements | Browser requirements |
| --- | --- | --- |
| Windows | 2.0 GHz (or above) CPU; 2 GB (or above) memory; 50 GB (or above) disk; 100 Mbps (or above) NIC; sound card | Turn off pop-up blockers in the browser; enable cookies in the browser; add Unified Platform to the trusted site list; set the screen resolution width to a minimum of 1440; as a best practice, use Google Chrome 70 or later |

 

Pre-installation checklist

The following table describes the pre-installation checklist. Make sure all requirements for installing Matrix are met.

Table 8 Pre-installation checklist

| Item | Requirements |
| --- | --- |
| Server hardware | The CPU, memory, disk, and NIC requirements are met. Unified Platform deployment is supported. |
| Server software | The system time settings are configured correctly. As a best practice, configure NTP on each node and specify the same time source for all the nodes. |
| Client | Verify that the browser version meets the requirements. |

 

Obtaining the software packages

To install EAD, you must first install Unified Platform. For more information about Unified Platform application packages, see the application installation packages section in H3C Unified Platform Deployment Guide. This document introduces only the Unified Platform application installation packages required for EAD deployment. The installation package names use the format shown in the following table, where version represents the software version number and platform represents the CPU architecture type.

Table 9 Unified Platform application installation packages required for EAD deployment

| Application installation package name | Description | Remarks |
| --- | --- | --- |
| UDTP_Base_version_platform.zip | Basic service component, which provides basic functions such as convergence deployment, user management, permission management, resource management, tenant management, menu management, log center, backup & restoration, and health check. | Required. |
| BMP_Connect_version_platform.zip | Connection service component, which provides upper- and lower-level site management, WebSocket channel management, and NETCONF channel management. | Required. |
| BMP_Common_version_platform.zip | Common service component, which provides dashboard management, alarm, alarm aggregation, and alarm subscription. | Required. |
| BMP_Extension_version_platform.zip | Extended service component, which provides remote disaster recovery (RDR), snapshot rollback, certificate services, self-monitoring, intelligent algorithm library, single sign-on, and password platform. | Optional. Dependent on BMP_Common. |

 

The following table lists the installation packages required for EAD deployment. Decompress these packages to obtain the application packages.

Table 10 EAD installation packages

| Installation package name | Application package name | Remarks |
| --- | --- | --- |
| H3C_EAD_version.zip | EIA-version.zip | EAD endpoint intelligent access module. |
| | BRANCH-version.zip | Hierarchical management module. |
| | TAM-version.zip | Device management module. |
| | EAD-version.zip | EAD endpoint compliance management module, which depends on the EAD EIA module. |
| H3C_EAD_Extend_version.zip | EPS-version.zip | EAD endpoint profiling system module. |
| | EBM-version.zip | EAD endpoint behavior audit module. |
| | EDM-version.zip | EAD endpoint data management module, which depends on the EAD endpoint behavior audit module. |

 

Verifying software packages

After uploading the installation packages, perform MD5 verification on each software package to ensure its integrity and correctness.

1.     Identify the uploaded installation packages.

[root@node1~]# cd /opt/matrix/app/install/packages/

[root@node1~]# ls

BMP_Common_E7301_x86.zip           BMP_Connect_E7301_x86.zip

2.     Obtain the MD5 value of an installation package, for example, UDTP_Base_E7301_x86.zip.

[root@node1~]# md5sum UDTP_Base_E7301_x86.zip

652845e0b92bbdff675c7598430687e2  UDTP_Base_E7301_x86.zip

3.     Compare the obtained MD5 value with the MD5 value released with the software. If they are the same, the installation package is correct.
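If the release also ships a checksum file next to the package (the UDTP_Base_E7301_x86.zip.md5 file name in this sketch is a hypothetical example), you can let md5sum do the comparison:

[root@node1 packages]# md5sum -c UDTP_Base_E7301_x86.zip.md5

UDTP_Base_E7301_x86.zip: OK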


Prepare for the installation

Plan disk partitions

Configure the EAD disk partitions based on the calculation results of the resource calculator.

 

 



Installing Matrix

Uploading the Matrix installation package


IMPORTANT:

·     To avoid file damage, use binary mode if you use FTP or TFTP to upload the package.

·     If the Docker version is 20.10.24, you can directly install Matrix E7105H04 (or later) or E7302 (or later). If the Docker version is earlier than 20.10.24, you must first install any Matrix version earlier than E7105H04 or E7302, then upgrade the Docker version to 20.10.24, and finally upgrade the Matrix version to E7105H04 (or later) or E7302 (or later).

 

1.     Copy or use a file transfer protocol to upload the installation package to the target directory on the server.

¡     (Recommended.) Use the /root directory or a directory created under /root if you log in as the root user.

¡     (Recommended.) Use the /home/admin directory if you log in as a non-root user (for example, admin).

2.     After you upload the Matrix installation package, perform MD5 verification on the installation package as described in "Verifying software packages".
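To apply the Docker version note above, you can check the Docker version installed on a node before choosing which Matrix package to install. The output line is illustrative:

[root@node1 ~]# docker --version

Docker version 20.10.24, build <build-id>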

Editing the configuration file as a non-root user

If you install the software package as the root user or installed the NingOS operating system as the admin user, you can skip this section.

1.     Execute the su root command to switch to the root user, and view the /etc/passwd file. Identify whether the configured non-root username (user in this example, as shown in the following output) is the same as that in the configuration file. If not, modify the corresponding username in the configuration file. Leave the other parameters unchanged.

[root@node1 ~]# vim /etc/passwd

user:x:1000:1001:user:/home/user:/bin/bash

2.     As a root user, edit the /etc/sudoers file.

[root@node1 ~]# vim /etc/sudoers

## Allow root to run any commands anywhere

root    ALL=(ALL)       ALL

user    ALL=(root)       NOPASSWD:/bin/bash

 

## Allows members of the 'sys' group to run networking, software,

## service management apps and more.

# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

 

## Allows people in group wheel to run all commands

%wheel  ALL=(ALL)       ALL

user    ALL=(root)       NOPASSWD:/bin/bash

user    ALL=(root)       NOPASSWD:/usr/bin/rpm,/bin/sh

3.     As a root user, edit the /etc/pam.d/login file.

[root@node1 ~]# vim /etc/pam.d/login

#%PAM-1.0

auth       substack     system-auth

auth     [user_unknown=ignore success=ok ignore=ignore auth_err=die default=bad] pam_securetty.so

4.     As a root user, edit the /etc/ssh/sshd_config file.

[root@node1 ~]# vim /etc/ssh/sshd_config

#LoginGraceTime 2m

PermitRootLogin no

5.     After editing the configuration file, execute the systemctl restart sshd command to restart the sshd service.
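For example (the status check is an optional verification; output abridged):

[root@node1 ~]# systemctl restart sshd

[root@node1 ~]# systemctl status sshd

   Active: active (running) since ...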

Installing Matrix

 

NOTE:

Make sure the installation user is the same on all nodes. For a non-root installation user, prepend sudo /bin/bash to the script execution command.

 

1.     Access the storage directory of the Matrix installation package.

2.     Execute the unzip UDTP_Matrix_version_platform.zip command, where UDTP_Matrix_version_platform.zip represents the installation package name, the version argument represents the version number, and the platform argument represents the CPU architecture type (x86_64, executed as the root user in this example).

[root@node1 ~]# unzip UDTP_Matrix_E7301_x86_64.zip

[root@node1 ~]# cd UDTP_Matrix_E7301_x86_64

[root@node1 UDTP_Matrix_E7301_x86_64]# ./install.sh

Complete!

3.     Use the systemctl status matrix command to identify whether the Matrix service is installed correctly. The Active field displays active (running) if the platform is installed correctly.
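For example (output abridged and illustrative; only the Active field matters here):

[root@node1 ~]# systemctl status matrix

   Active: active (running) since ...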

4.     Change the language setting (Chinese by default) for the Web interface to English as follows:

a.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file.

b.     Change the value for the defaultLanguage field to en as follows:

If the field is not available in the file, manually add this field and add a comma after the field.

[root@node4 ~]#  vim /opt/matrix/config/navigator_config.json

{

"defaultLanguage":"en",

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"defaultPackages": [],

"allowDeployedPackageIds": ["UNIFIED-PLATFORM-BASE"],

"url": "http:””://${vip}:30000/central/index.html#/ucenter-deploy",

"theme":"darkblue",

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 22,

"sshLoginMode": "secret",

"features":{"stopNtpServerBeyondThreshold":"false"}

}

c.     Execute the systemctl restart matrix command to restart the Matrix service and have your configuration take effect.

d.     Follow the previous steps to configure other nodes.
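To verify the change on a node, an optional check is to grep the field (output shown for a correctly edited file):

[root@node1 ~]# grep defaultLanguage /opt/matrix/config/navigator_config.json

"defaultLanguage":"en",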

(Optional.) Configuring SSH

Modifying the SSH service port number

A Matrix cluster installs, upgrades, and repairs nodes and performs application deployment and monitoring through SSH connections. On each node, the SSH server listens for client connection requests on port 22 by default. After a TCP connection is established between a node and the SSH server, they can exchange data.

You can modify the SSH service port number to improve the SSH connection security.

 


IMPORTANT:

·     Make sure all nodes are configured with the same SSH service port number.

·     The port number range is 1 to 65535. As a best practice, do not use well-known port numbers between 1 and 1024. Do not use port numbers already defined in the port usage guide for any solution.

·     If you change the SSH service port number for a deployed cluster, verify that all service components support the port number. Otherwise, the SSH service might fail to start.

·     To upgrade Matrix through an ISO image, make sure the contents in the navigator_config file on all cluster nodes are the same. To view detailed information in the navigator_config file, use the vim /opt/matrix/config/navigator_config.json command.

·     To ensure cluster stability, make sure all cluster nodes have consistent configurations in the /opt/matrix/config/navigator_config.json file.

·     To change the SSH service port number, see the port usage section in the usage guidelines of the associated product.

 

Modifying the SSH service port number for the server of each node

1.     If the cluster has not been deployed, log in to the CLI of the node and execute the netstat -anp | grep port-number command to identify whether the specified port number is occupied. If the port is not occupied, no information is returned. If it is occupied, the command returns the process that is using the port.

If the cluster has already been deployed, in addition to the preceding check, execute the following commands to identify whether any service containers in the environment are using the specified port (check for other forms of port usage as necessary). For example:

¡     Port number 12345 is not used, and you can modify the port number to 12345.

[root@node1 ~]# kubectl get svc -A -oyaml | grep nodePort | grep -w 12345

[root@node1 ~]# kubectl get pod -A -oyaml | grep hostPort | grep -w 12345

¡     Port number 1234 is occupied by nodePort or hostPort, and you cannot modify the port number to 1234.

[root@node1 ~]# kubectl get svc -A -oyaml | grep nodePort | grep -w 1234

        nodePort: 1234

[root@worker ~]# kubectl get pod -A -oyaml | grep hostPort | grep -w 1234

        hostPort: 1234

2.     Use the vim /etc/ssh/sshd_config command to open the configuration file of the sshd service. Modify the port number in the configuration file to the target port number (for example, 12345), and delete the annotation symbols.

Figure 1 The port number before modification is 22

 

Figure 2 The port number after modification
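In text form, the change to the Port directive in /etc/ssh/sshd_config looks like the following (12345 is the example target port).

Before modification:

#Port 22

After modification:

Port 12345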

 

3.     After modifying the port number, restart the sshd service.

[root@node-worker ~]# systemctl restart sshd

4.     Identify whether the port number is successfully modified. The port number is successfully modified if the following information is returned.

The following uses the configuration on a master node for example.

[root@node-worker ~]# netstat -anp | grep -w 12345

tcp        0      0 0.0.0.0:12345            0.0.0.0:*               LISTEN      26212/sshd

tcp6       0      0 :::12345                 :::*                    LISTEN      26212/sshd

Modifying the SSH service port number for each Matrix node

1.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file. Identify whether the sshPort field exists in the file.

¡     If yes, modify the value for the field to the target value (12345 in this example).

¡     If not, manually add the field and specify a value for it.

{

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 12345

}

2.     After modification, restart the Matrix service.

[root@node-worker ~]# systemctl restart matrix

3.     Identify whether the port number is successfully modified. If yes, the last message in the log is as follows:

The following uses the configuration on a master node for example.

[root@node-worker ~]# cat /var/log/matrix-diag/Matrix/Matrix/matrix.log | grep "ssh port"

2022-03-24T03:46:22,695 | INFO  | FelixStartLevel  | CommonUtil.start:232 | ssh port = 12345.

Configuring password-free SSH login

The primary master node of the cluster manages and monitors all nodes in the cluster over SSH connections. After you change the SSH login password for a node at the command line, you must also change that password on the Matrix Web interface and in any other place that saves the password (such as a jump server or an application deployed on Matrix). This process is time-consuming, labor-intensive, and error-prone.

After password-free SSH login is configured on each node, you no longer need to change a password in multiple places. You can also configure settings on other nodes from one node without entering an SSH login password.

You can configure password-free SSH login for the root user account or a non-root user account.

 


CAUTION:

·     Make sure all nodes in the cluster use the same SSH login method. If you change the SSH login method for a node after the Matrix service is started, you must make that change on all the other nodes and restart the Matrix service for the nodes one by one.

·     You can configure password-free SSH login before cluster deployment, Matrix scale-out, and node rebuild or upgrade. Make sure you complete the password-free SSH login configuration on all nodes before these operations.

·     If you reinstall the operating system after Matrix deployment (in cluster or standalone mode), make sure the password-free SSH login configuration is completed on all nodes. In addition, make sure the SSH login method is password-free login on all nodes.

 

Configuring password-free SSH login for the root user account

Log in to the CLI of each node to configure password-free SSH login. The following procedure uses node1 as an example.

 

NOTE:

If the system prompts that a file or directory does not exist when you execute the ssh-keygen -R command, ignore the message, because this is normal.

 

1.     Use the root user account to log in to the CLI of node1. Execute the following command to generate the public and private key files required for SSH key-based authentication by using the ED25519 algorithm. The default file is /root/.ssh/id_ed25519.

[root@node1 ~]# ssh-keygen -t ed25519

Generating public/private ed25519 key pair.

Enter file in which to save the key (/root/.ssh/id_ed25519):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_ed25519

Your public key has been saved in /root/.ssh/id_ed25519.pub

The key fingerprint is:

SHA256:GLeq7ZQlnKHRTWvefTwIAlAHyeB3ZfZt0Ovnfbkcbak root@node1

The key's randomart image is:

2.     Clear old public key information on each node, and then copy the generated public key to each node (including the current node). In this example, the cluster has three master nodes and the default SSH port number 22 is used. The IP addresses of node 1, node 2, and node 3 are 192.168.227.171, 192.168.227.172, and 192.168.227.173, respectively.

[root@node1 ~]# ssh-keygen -R 192.168.227.171

[root@node1 ~]# ssh-keygen -R 192.168.227.172

[root@node1 ~]# ssh-keygen -R 192.168.227.173

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub root@192.168.227.171

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub root@192.168.227.172

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub root@192.168.227.173

3.     Perform the same procedure on all the other nodes.

4.     Use the root user account to log in to the CLI of node1, and then SSH to the current node and the other nodes to verify that password-free SSH login takes effect.

In this example, the root user logs in to node2 over SSH, and the SSH port number is 22.

[root@node1 ~]# ssh -p 22 root@192.168.227.172

Configuring password-free SSH login for a non-root user account

Log in to the CLI of each node to configure password-free SSH login.

Because some commands must be executed with root permissions, you must configure both admin-to-admin and root-to-admin password-free SSH login for an admin user account.

 

NOTE:

If the system prompts that a file or directory does not exist when you execute the ssh-keygen -R command, ignore the message, because this is normal.

 

1.     Configuring admin-to-admin password-free SSH login

In this example, admin accounts are used for accessing the three master nodes of the cluster.

a.     Use the admin user account to log in to the CLI of node1. Execute the ssh-keygen -t ed25519 command to generate the public and private key files required for SSH key-based authentication. The default file is /home/admin/.ssh/id_ed25519.

b.     Clear old public key information on each node, and then copy the generated public key to each node (including the current node). In this example, the cluster has three master nodes and the default SSH port number 22 is used. The IP addresses of node 1, node 2, and node 3 are 192.168.227.171, 192.168.227.172, and 192.168.227.173, respectively.

[admin@node1 ~]$ ssh-keygen -R 192.168.227.171

[admin@node1 ~]$ ssh-keygen -R 192.168.227.172

[admin@node1 ~]$ ssh-keygen -R 192.168.227.173

[admin@node1 ~]$ ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.171

[admin@node1 ~]$ ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.172

[admin@node1 ~]$ ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.173

c.     Perform the same procedure on all the other nodes.

d.     Log in to the back end as the admin user. SSH to the current node and the other nodes to identify whether the password-free SSH login configuration takes effect.

[admin@node1 ~]$ ssh -p 22 admin@192.168.227.172

2.     Configuring root-to-admin password-free SSH login

a.     Use the admin user account to log in to the CLI of node1 and switch to the root user account.

b.     Generate new public key and private key files, clear old public key information, and then copy the new public key to each node (including the current node), as shown in the sketch after this procedure.

c.     Perform the same procedure on all the other nodes.

d.     Log in to the back end of a node as the admin user and switch to the root user. Then SSH to the current node and the other nodes as the admin user to identify whether the password-free SSH login configuration takes effect.

[root@node1 ~]# ssh -p 22 admin@192.168.227.172
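For reference, step b mirrors the root procedure shown earlier, except that the public key is copied to the admin account on each node (same example IP addresses; this is a sketch, not output captured from a live system):

[root@node1 ~]# ssh-keygen -t ed25519

[root@node1 ~]# ssh-keygen -R 192.168.227.171

[root@node1 ~]# ssh-keygen -R 192.168.227.172

[root@node1 ~]# ssh-keygen -R 192.168.227.173

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.171

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.172

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.173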

Configuring password-free SSH login for Matrix

1.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file and check whether the sshLoginMode field exists in the file. If the field exists, set the value to secret. If the field does not exist, manually add the field and assign a value to it. The following configuration takes the x86 version as an example.

{

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 22,

"sshLoginMode":"secret"

}

2.     Restart the Matrix service.

[root@node1 ~]# systemctl restart matrix

3.     Verify that the configuration takes effect.

[root@node1 ~]# cat /var/log/matrix-diag/Matrix/Matrix/matrix.log | grep "sshLoginMode"

2022-03-31T20:11:08,119 | INFO  | features-3-thread-1 | CommonUtil.start:245 | ssh port = 22, sshLoginMode = secret.

 


Deploying Unified Platform


IMPORTANT:

·     In scenarios where an internal NTP server is used, make sure the system time of all nodes is consistent with the current time before deploying the cluster. In scenarios where an external NTP server is used as the clock source, make sure the time of the external NTP server is consistent with the current time. Unreachability, failure, or time inaccuracy of the NTP server might cause Matrix cluster deployment failure.

·     To view the system time, execute the date command. To modify the system time, use the date -s yyyy-mm-dd or date -s hh:mm:ss command.

·     During application deployment or upgrade, do not restart the matrix service or a node and do not disconnect the server power supply. If you do so, application deployment data might be corrupted (etcd data error or disk file corruption for example), which might cause operation failure.
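For example, to check and adjust the system time as described above (values are illustrative):

[root@node1 ~]# date

Mon Jan  1 12:00:00 CST 2024

[root@node1 ~]# date -s 2024-01-01

[root@node1 ~]# date -s 12:00:00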

 

Pre-deployment check

1.     Log in to the back end of each node in turn, execute the sudo bash /opt/matrix/tools/env_check.sh command to perform environment check, and take appropriate actions according to the check results.

 

 

NOTE:

·     You can execute the env_check.sh script in all operating systems supported by Unified Platform.

·     When the CPU frequency is lower than 2000 MHz, the Matrix self-check script (env_check.sh) and the health check module print a CPU frequency alarm. Make sure the server hardware meets the requirements and the CPU power supply mode is set to performance (for example, on NingOS you can execute the cpupower frequency-set -g performance command).

·     To view the help and obtain more script usage methods, execute the sudo bash /opt/matrix/tools/env_check.sh -h command in the back end of the node. For example, the command used to obtain the IOPS performance of the etcd disk is sudo bash /opt/matrix/tools/env_check.sh -p -d /var/lib/etcd.

 

Manually confirm the items listed in the following table that are not checked in the env_check.sh script. Make sure the conditions for installing Matrix are met.

Table 11 Verifying the installation environment

Item

Requirements

Network port

Make sure each Matrix node has a unique network port. Do not configure subinterfaces or secondary IP addresses on the network port.

IP address

The IP addresses of the network ports not used by Matrix on a node cannot be on the same subnet as the IP address of the network port used by Matrix.

The source IP address for the current Matrix node to communicate with other nodes in the Matrix cluster must be the IP address of the Matrix cluster. You can execute the ip route get targetIP command to obtain the source IP address.

[root@node1 ~]# ip route get 100.100.5.10

100.100.5.10 via 192.168.10.10 dev eth0 src 192.168.5.10

Time zone

·     To avoid node adding failures on the GUI, make sure the system time zone of all Matrix nodes is the same. You can execute the timedatectl command to view the system time zone of each Matrix node.

·     When selecting a time zone, do not select Beijing.

Host name

To avoid cluster creation failure, make sure the host name meets the following rules:

·     The host name of each node must be unique.

·     Do not use the default host names, including localhost, localhost.localdomain, localhost4, localhost4.localdomain4, localhost6, and localhost6.localdomain6.

·     The host name can contain a maximum of 63 characters and supports only lowercase letters, digits, hyphens, and decimal points. It cannot start with 0, 0x, a hyphen, or a decimal point, cannot end with a hyphen or decimal point, and cannot be all digits.

 

2.     Before you deploy the UDTP_Base_version_platform.zip component of Unified Platform, execute the cat /proc/sys/vm/nr_hugepages command on each node to identify whether HugePages is enabled. If the returned value is not 0, record that value and execute the echo 0 > /proc/sys/vm/nr_hugepages command to temporarily disable HugePages. After you deploy the UDTP_Base_version_platform.zip component, replace value 0 in the command with the recorded value, and then execute the command on each node to restore the HugePages configuration.
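For example, assuming the command returns 2048 (an illustrative value) on a node:

[root@node1 ~]# cat /proc/sys/vm/nr_hugepages

2048

[root@node1 ~]# echo 0 > /proc/sys/vm/nr_hugepages

After the Base component is deployed, restore the recorded value on that node:

[root@node1 ~]# echo 2048 > /proc/sys/vm/nr_hugepages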

Creating a Matrix cluster

Logging in to Matrix

Restrictions and guidelines

On Matrix, you can perform the following operations:

·     Upload or delete the Unified Platform installation package.

·     Deploy, upgrade, expand, or uninstall Unified Platform.

·     Upgrade or rebuild cluster nodes.

·     Add or delete worker nodes.

Procedure

1.     Enter the Matrix login address in your browser and then press Enter.

¡     If the node that hosts Matrix uses an IPv4 address, the login address is in the https://ip_address:8443/matrix/ui format, for example, https://172.16.101.200:8443/matrix/ui.

¡     If the node that hosts Matrix uses an IPv6 address, the login address is in the https://[ip_address]:8443/matrix/ui format, for example, https://[2000::100:611]:8443/matrix/ui.

ip_address represents the IP address of the node that hosts Matrix. This configuration uses an IPv4 address. 8443 is the default port number.

 

 

NOTE:

·     In cluster deployment mode, ip_address can be the IP address of any Master node in the cluster before the cluster is deployed.

·     When deploying cluster nodes, make sure no duplicate host names exist. After successfully deploying the cluster, you cannot edit the host names of the cluster nodes.

·     During cluster deployment, you cannot log in to the cluster nodes to perform any operations, or add the nodes deployed in the cluster to another cluster.

 

Figure 3 Matrix login page

 

2.     Enter the username and password, and then click Login. The cluster deployment page is displayed.

The default username is admin and the default password is Pwd@12345. If you have set the password when installing the operating system, enter the set password.

To deploy a dual-stack cluster, enable the dual-stack feature.

Figure 4 Single-stack cluster deployment page

 

Figure 5 Dual-stack cluster deployment page

 

Configuring cluster parameters

Before deploying cluster nodes, first configure cluster parameters. On the Configure cluster parameters page, configure cluster parameters as described in the following two tables and then click Apply.

Table 12 Configuring single-stack cluster parameters

Parameter

Description

Northbound service Virtual IP

IP address for northbound interface services. This address must be on the same subnet as the master nodes.

Service IP pool

Address pool for IP assignment to services in the cluster. It cannot overlap with other subnets in the deployment environment. The default value is 10.96.0.0/16. Typically, the default value is used.

Container IP pool

Address pool for IP assignment to containers. It cannot overlap with other subnets in the deployment environment. The default value is 177.177.0.0/16. Typically, the default value is used.

VIP Mode

Options are Internal and External. In Internal mode, the VIP is assigned by Matrix to the cluster, and Matrix manages drift of the VIP among cluster nodes. In External mode, the VIP is assigned outside the cluster by a third-party platform or software and is not managed by Matrix. The default is Internal.

This parameter was added in E0713.

Cluster network mode

Network mode of the cluster:

Single Subnet: In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for mutual communication.

Single Subnet-VXLAN: In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for mutual communication. Only an IPv4 network is supported in this mode.

NTP server

Used for time synchronization between the nodes in the cluster. Options include Internal server and External server. If you select External server, you must specify the IP address of the server, and make sure the IP address does not conflict with the IP address of any node in the cluster.

An internal NTP server is used in this configuration. After cluster deployment is started, the system synchronizes the time first. After the cluster is deployed, the three master nodes will synchronize the time regularly to ensure that the system time of all nodes in the cluster is consistent.

To deploy an environment with upper- and lower-level nodes, configure the same NTP server for both the upper- and lower-level nodes, and make sure they have consistent system time.

External DNS server

Used for resolving domain names outside the K8s cluster. Specify it in the IP:port format. In this configuration, this parameter is left unconfigured.

The DNS server in the cluster cannot resolve domain names outside the cluster. The platform randomly forwards external domain name queries to an external DNS server for resolution.

A maximum of 10 external DNS servers can be configured. All the external DNS servers must have the same DNS resolution capability, and each must be able to perform external domain name resolution independently. These DNS servers are used randomly, without precedence or sequence.

Make sure all DNS servers can access the root domain. To verify the accessibility, use the nslookup -port={port} -q=ns . {ip} command.

Self-Defined VIPs

This setting is typically used to isolate the cluster network from the management network. Make sure the self-defined VIPs do not belong to other subnets in the deployment environment.

 

Table 13 Configuring dual-stack cluster parameters

Parameter

Description

Northbound service VIP1 and VIP2

IP address for northbound interface services. This address must be on the same subnet as the master nodes. VIP1 is an IPv4 address, and VIP2 is an IPv6 address. For the northbound service VIPs, you must specify at least one IPv4 address or IPv6 address. Also, you can configure both an IPv4 address and IPv6 address. You cannot configure two IP addresses of the same version.

When configuring IPv6 addresses, make sure that they do not end with a colon.

Service IP pool

This parameter takes effect only in a dual-stack environment.

Address pool for assigning IPv4 addresses and IPv6 addresses to services in the cluster. The default IPv4 address is 10.96.0.0/16, and the default IPv6 address is fd00:10:96::/112. Typically, the default values are used. You cannot change the value after deployment.

To avoid cluster errors, make sure the subnet does not overlap with other subnets in the deployment.

Container IP pool

This parameter takes effect only in a dual-stack environment.

Address pool for assigning IPv4 addresses and IPv6 addresses to containers in the cluster. The default IPv4 address is 177.177.0.0/16, and the default IPv6 address is fd00:177:177::/112. Typically, the default values are used. You cannot change the value after deployment.

To avoid cluster errors, make sure the subnet does not overlap with other subnets in the deployment.

VIP Mode

Options are Internal and External. In Internal mode, the VIP is assigned by Matrix to the cluster, and Matrix manages drift of the VIP among cluster nodes. In External mode, the VIP is assigned outside the cluster by a third-party platform or software and is not managed by Matrix. The default is Internal.

This parameter was added in E0713.

Cluster network mode

Network mode of the cluster. Only Single Subnet mode is supported. In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for mutual communication.

NTP server

Used for time synchronization between the nodes in the cluster. Options include Internal server and External server. If you select External server, you must specify the IP address of the server, and make sure the IP address does not conflict with the IP address of any node in the cluster.

An internal NTP server is used in this configuration. After cluster deployment is started, the system synchronizes the time first. After the cluster is deployed, the three master nodes will synchronize the time regularly to ensure that the system time of all nodes in the cluster is consistent.

To deploy an environment with upper- and lower-level nodes, configure the same NTP server for both the upper- and lower-level nodes, and make sure they have consistent system time.

External DNS server

Used for resolving domain names outside the K8s cluster. Specify it in the IP:port format. In this configuration, this parameter is left unconfigured.

The DNS server in the cluster cannot resolve domain names outside the cluster. The platform randomly forwards external domain name queries to an external DNS server for resolution.

A maximum of 10 external DNS servers can be configured. All the external DNS servers must have the same DNS resolution capability, and each must be able to perform external domain name resolution independently. These DNS servers are used randomly, without precedence or sequence.

Make sure all DNS servers can access the root domain. To verify the accessibility, use the nslookup -port={port} -q=ns . {ip} command.

Self-Defined VIPs

This setting is typically used to isolate the cluster network from the management network. Make sure the self-defined VIPs do not belong to other subnets in the deployment environment.

 


IMPORTANT:

If the existing NTP server cannot reach the northbound addresses, you can modify the cluster parameters to add NTP servers on the NIC network configuration page after cluster deployment.

 

Creating a cluster

For standalone deployment, add one master node on Matrix. For cluster deployment, add three master nodes on Matrix.

To create a cluster:

1.     After configuring the cluster parameters, click Next.

2.     In the Master Node area, click the plus icon.

Figure 6 Adding a single-stack node

Figure 7 Adding a dual-stack node

 

3.     Configure node parameters as described in the following table, and then click OK.

Table 14 Node parameter description

| Item | Description |
| --- | --- |
| Type | Displays the node type. Options include Master and Worker. This field cannot be modified. |
| IP address | Enter the planned IP address for the master node. You can add master nodes in bulk. In bulk adding mode, make sure the username and password of the master nodes are the same. |
| Username | Specify the user account used to access the operating system. Use the account configured during system installation. All nodes in a cluster must use the same user account. |
| Password | Specify the password used to access the operating system. |

 

4.     Click Start deployment.

When the deployment progress of each node reaches 100%, the deployment finishes. After the cluster is deployed, a star icon is displayed at the left corner of the primary master node, as shown in the following figure.

Figure 8 Cluster deployment completed

 

After the cluster is deployed, you can skip over the procedures for configuring the network and deploying applications and configure them later as needed.

Deploying Unified Platform applications

 


IMPORTANT:

·     When you bulk upload application packages, make sure the deployment page stays open, the PC does not enter sleep mode, and the network between the PC and the cluster is not disconnected. If any of these situations occurs while the system is deploying components, some components might fail to deploy correctly. (During deployment, you can switch between browser tabs, minimize the browser window, and lock the PC screen.)

·     If a cluster resource, for example, CPU or memory, reaches the usage threshold during the deployment, some components might fail to be deployed correctly. You can attempt to redeploy these components that failed to be deployed later.

·     When you bulk deploy a large number of applications, resource contention might occur, causing some applications to fail. For applications that fail to be deployed, you can click Retry on the page to attempt redeployment.

·     By default, the websocket, region, netconf, and Common application services of the Connect component, as well as the incident application service of the Common component are disabled. They are automatically enabled only when you deploy other components that depend on these application services. To manually enable them on Matrix as required by the scenario, see "How can I enable Unified Platform application services on Matrix?."

 

Deploying the Unified Platform Base application package


IMPORTANT:

When you upload installation packages, make sure the network between the browser and the cluster is stable and the bandwidth is not less than 10 Mbps. If the network does not meet the requirements, the upload might fail or take a long time.

 

You can deploy the application packages only on the Matrix page, and you can bulk upload application packages. However, you must deploy the Base component first before deploying other applications.

1.     In the address bar of the browser, enter https://ip_address:8443/matrix/ui to log in to Matrix, where the ip_address parameter specifies the northbound service VIP.

2.     Access the Deploy > Applications page.

3.     For a single-node cluster, you can select the standard or proxy deployment mode. You cannot change the deployment mode after you install a component. This chapter uses the standard deployment mode as an example.

¡     The standard deployment mode is applicable to the systems of standard architecture and the server side of the server-proxy architecture. You can deploy all components of Unified Platform in standard mode.

¡     The proxy deployment mode applies only to the proxy side of the server-proxy architecture and is applicable to U-Center products. You can deploy only the Base, Connect, UCP_BasePlat, and UCP_CollectPlat components of Unified Platform in proxy mode.

 

NOTE:

To change the deployment mode, reinstall Matrix. Changing the deployment mode by only reinstalling the Base component might cause deployment issues for other components.

 

Figure 9 Selecting a deployment mode

 

4.     Click Deploy Applications.

Figure 10 Installing the Base component

 

5.     Click Upload. In the dialog box that opens, upload the Base installation package.

Figure 11 Uploading the Base installation package

 

6.     After the Base installation package is uploaded, select the Base application package on the current page and then click Next.

 

 

NOTE:

Do not select the other application packages. If you do that, you cannot install the Base component.

 

Figure 12 Base installation package uploaded

7.     On the current page, directly click Next without performing any other operations.

Figure 13 Selecting applications

8.     Click Edit to configure the Base configuration item parameters. Then, click OK to save the settings.

Table 15 Base configuration item parameters

| Configuration item | Description |
| --- | --- |
| Resource Level | In standalone mode, options include single_large, single_medium, and single_small. In cluster mode, options include cluster_large, cluster_medium, and cluster_small. |
| Deployment Protocol | Options include HTTP and HTTPS. |
| HTTP Protocol Port Number | The default value is 30000. |
| HTTPS Protocol Port Number | The default value is 30443. |
| CPU Manufacturer | Select the CPU manufacturer. |
| Use Third-Party Database | Select whether to use a third-party database. |
| Theme | Options include white and star. |
| Language | Options include zh_CN and en_US. |

 

Figure 14 Configuring parameters

9.     After configuring the parameters, click Deploy to start deploying the Base component.

10.     After the Base component is deployed, the original Deploy > Applications page automatically changes to the Deploy > Convergence Deployment page, where you can deploy other optional packages.


Deploy EAD

 

NOTE:

Before you deploy the EAD component, install Unified Platform. This chapter provides an example of installing EAD based on the assumption that Unified Platform has been installed.

 

Log in to Matrix

1.     In the address bar of the browser, enter https://ip_address:8443/matrix/ui to log in to Matrix. The ip_address parameter specifies the northbound service VIP.

2.     Access the Deploy > Convergence Deployment page.

Figure 15 Convergence deployment

 

Upload the installation packages

Click Packages Management to access the installation package management page, and then click Upload to upload installation packages. On this page, you can upload and delete installation packages. The installation package list displays the name, version, size, creation time, and other information of each uploaded installation package. Application installation packages support bulk upload; you can select and upload multiple installation packages as needed. Upload the EAD, EAD-Extend, BMP_Connect, and BMP_Common installation packages to the system. The EAD and EAD-Extend installation packages are automatically decompressed into separate application packages for the modules, as shown in the following figure.

Figure 16 Package management

 

Select applications

After the installation packages are uploaded, click Back to return to the convergence deployment page. Click Install to access the application selection page. Select a scenario as needed, or separately select the modules to be installed.

Figure 17 Select applications

 

Select installation packages

After you select applications, click Next to access the page for selecting installation packages, and select the installation packages of the corresponding versions.

Figure 18 Select installation packages

Configure resources

Click Next to access the resource configuration page. As shown in the following figure, use the resource calculator to select the appropriate resource level based on specifications.

Figure 19 Configure resources

 

Configure parameters

1.     Click Next to access the parameter configuration page. Click the EIA tab to configure parameters for the EAD EIA module, as shown in the following figure.

Figure 20 EIA

 

¡     (Optional.) Bind Nodes: When a cluster has more than three nodes, enable the Deploy Configuration to Bound Nodes feature to bind nodes.

¡     (Optional.) External Database Configuration: Enable this feature to use an external database of the specified type. Select a database type, enter the corresponding details such as the IP address, port, username, and password, and then click Apply to complete the configuration.

¡     (Optional.) Portal Authentication Configuration Parameters: Set the system encoding, add domain alias mappings, and click Apply to complete the configuration. When the DNS server fails to resolve a domain name to an IP address, you can configure this parameter to resolve the IP address through domain alias mappings.

 

 

NOTE:

·     If the external database configuration is enabled, you must first create the corresponding database for the component. Otherwise, the component will fail to deploy. A sketch of the database preparation commands follows this note.

·     The EAD EIA module of version E7301 only supports the TDSQL type for external databases.

·     You need to configure the external database only once; the EIA, BRANCH, and TAM applications share the same configuration.

 
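For example, you can create the database in advance from any host that can reach the database instance. The following is a minimal sketch that assumes a MySQL-compatible TDSQL instance; the database name (eia_db), account (eia_user), and password are illustrative placeholders, and the actual database name required depends on your component version:

mysql -h tdsql_ip -P tdsql_port -u admin -p

CREATE DATABASE eia_db DEFAULT CHARACTER SET utf8mb4;    -- placeholder database name

CREATE USER 'eia_user'@'%' IDENTIFIED BY 'Your_Pwd@123';    -- placeholder account

GRANT ALL PRIVILEGES ON eia_db.* TO 'eia_user'@'%';

FLUSH PRIVILEGES;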

2.     Click the EAD tab as shown in the following figure. You can configure parameters for the EAD endpoint compliance management module. If the cluster has more than three nodes, enable the Deploy Configuration to Bound Nodes feature and bind the nodes.

Figure 21 EAD

 

3.     Click the EPS tab as shown in the following figure. Configure parameters for the EAD endpoint profiling system (EPS) module. If the cluster has more than three nodes, enable the Deploy Configuration to Bound Nodes feature and bind the nodes.

Figure 22 EPS

 

4.     Click the EBM tab as shown in the following figure. Configure parameters for the EAD endpoint behavior audit module. If the cluster has more than three nodes, enable the Deploy Configuration to Bound Nodes feature and bind the nodes.

Figure 23 EBM

 

5.     Click the EDM tab as shown in the following figure. You can configure parameters for the EAD endpoint data management module. If the cluster has more than three nodes, enable the Deploy Configuration to Bound Nodes feature and bind the nodes.

Figure 24 EDM

 

Deploy

1.     Click Deploy to access the deployment page. For example, EAD deployment with the small resource level in single-host mode takes about 12.5 minutes.

2.     After deployment, view the deployed components on the deployment management page.


Deploy scanners

 

NOTE:

This section applies to users who have installed the EAD EPS module. Skip this section if you have not installed the EPS module.

 

Windows scanner installation procedure (active scanner)

CAUTION

CAUTION:

To ensure a smooth EPS scanner installation on Windows, temporarily disable or uninstall antivirus software like 360 or Huorong during installation. This prevents these programs from mistakenly deleting or blocking critical installation files, which might cause malfunctions.

 

Before installing a scanner, uninstall any existing WinPcap software on your Windows system and restart the computer to ensure a successful installation. When you install a scanner on a PC for the first time, restart the PC after installation to use the scanner correctly. You can skip restarting for subsequent installations.

1.     Obtain the software package H3C_EPS_version_X86.zip. Decompress the package. The decompressed folder EPS\tools\Windows contains the Windows scanner installer EPSScanner version.exe, where the version parameter represents the version number.

2.     Double-click the scanner installer EPSScanner version.exe.

3.     Click OK to access the destination location selection page.

Figure 25 Select the scanner installation directory

 

4.     Select the installation directory, click Next, and optionally select the start menu folder.

Figure 26 Select the start menu folder

 

5.     After you select the start menu folder, click Next to select additional tasks, such as creating a desktop shortcut.

Figure 27 Select additional tasks

 

6.     After you select additional tasks, click Next to acknowledge the installation information.

Figure 28 Acknowledge the installation information

 

7.     Click Install to start installing the scanner.

¡     If the VC2010 redistributable is not installed on your device, its installation window will open. If you have already installed the redistributable, the installation window will not open.

¡     If the VC2013 redistributable is not installed on your device, its installation window will open. Follow the two windows below to complete the installation. If you have already installed the redistributable, the installation window will not open.

Figure 29 Install the VC2013 redistributable

 

Figure 30 VC2013 redistributable installed successfully

 

¡     If WinPcap is not installed on your device, its installation window will open. Follow the four windows below to complete the installation. If you have already installed WinPcap, the installation window will not open.

Figure 31 Install WinPcap 4.1.3

 

Figure 32 WinPcap installation agreement

 

Figure 33 WinPcap startup settings

 

Figure 34 WinPcap installation completed

 

8.     The scanner is installed successfully. Click Finish to exit the scanner installer.

9.     After you install the scanner, double-click its icon on the desktop to open the scanner GUI.

Figure 35 Scanner GUI

 

10.     After you edit the server IP, server port, log level, or log retention days, click Save Config to save the changes.

¡     Server IP: Northbound service VIP of Unified Platform.

¡     Server port: The default value is 30000. Keep it consistent with the access port of Unified Platform.

¡     Log level: The default is debug.

¡     Auto delete logs of last () days: Set the log retention period (days). The default is 10 days. Edit it as needed.

11.     Open the scanner configuration GUI, click Advanced Config, and then enter the Unified Platform username and password. Select HTTPS if you access Unified Platform through HTTPS. Keep the server key as the default; do not change it.

 

NOTE:

Configure a username and password to obtain the token for HTTP interface authentication. If the user password on Unified Platform changes, update the password in the scanner advanced configuration accordingly.

 

Linux scanner installation procedure (active scanner)

When you install a Linux scanner, you must log in to the Linux system as a root user. A Linux scanner cannot be directly upgraded. To upgrade a Linux scanner, first uninstall it completely and install it again.

1.     Copy the Linux scanner package EPSScanner version.tar.gz to the /usr/local directory:

cp EPSScanner version.tar.gz /usr/local/

2.     Access the /usr/local directory, and use the following command to decompress the Linux scanner package:

tar -xvzf EPSScanner version.tar.gz

Figure 36 Copy the scanner package

 

3.     The decompressed files are saved in the EScan folder (automatically created during the decompression process) in the current directory, as shown in the following figure.

Figure 37 Decompressed files

 

4.     To execute the installation file correctly, access the /usr/local/EScan/conf directory, and edit the WorkerConf.xml file.

a.     Execute the following command to open the WorkerConf.xml file:

vi WorkerConf.xml

Figure 38 Edit the WorkerConf.xml file

 

b.     As shown in the following figure, perform the following tasks:

-     ServerIP: Change it to the northbound service VIP of Unified Platform.

-     ServerPort: The default value is 30000. Keep it consistent with the access port of Unified Platform.

-     PortalUserName: Unified Platform username.

-     PortalPswd: Unified Platform password.

-     isSSL: Whether to use HTTPS. Set it to 0 for no or 1 for yes.

-     LogLevel: The default value is 4, which sets the log level to debug.

c.     Save the file and exit.
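For reference, the relevant portion of the WorkerConf.xml file is similar to the following fragment. The element names follow the parameters above; the surrounding structure and the sample values are illustrative only:

<!-- Illustrative fragment; keep the rest of the file unchanged -->
<ServerIP>192.168.10.100</ServerIP>        <!-- Northbound service VIP of Unified Platform -->
<ServerPort>30000</ServerPort>             <!-- Keep consistent with the Unified Platform access port -->
<PortalUserName>admin</PortalUserName>     <!-- Unified Platform username -->
<PortalPswd>Pwd@12345</PortalPswd>         <!-- Unified Platform password -->
<isSSL>0</isSSL>                           <!-- 0: HTTP, 1: HTTPS -->
<LogLevel>4</LogLevel>                     <!-- 4: debug -->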

5.     Return to the /usr/local/EScan directory, and execute the following command:

./install.sh

After you execute the install script, the scanner will be installed, as shown in the following figure.

Figure 39 Install a scanner

 

6.     During the installation process, the system will prompt whether to install a passive scanner.

¡     Select n to skip passive scanner installation. The scanner installation is now complete.

¡     Select y to continue installing a passive scanner. For more information about the installation procedure, see "(Optional.) Linux passive scanner installation procedure."

7.     After you install the scanner, check the scanner status to identify whether the scanner is successfully installed. To view the scanner service status, execute the following command:

service EScanService status

As shown in the following figure, the running value indicates that the scanner is running normally.

Figure 40 View the scanner service status

 

8.     After the installation is complete, enter the following command in the installation directory to open the operation interface:

./EScanUI

To edit the configuration, modify the server IP and other parameters in the operation interface, as shown in the following figure.

Figure 41 View scanner configuration information

 

(Optional.) Linux passive scanner installation procedure

CAUTION

CAUTION:

To use a Linux passive scanner, first install an active scanner.

 

To install a passive scanner on a Linux system, you must log in to the Linux system as the root user. You can install a passive scanner during the active scanner installation process or install it separately after installing an active scanner.

1.     Copy the Linux scanner package EPSScanner version.tar.gz to the /usr/local directory and decompress it. Access the /usr/local/EScan/pscan directory and execute the installation command:

sh install_psvscan.sh

2.     Select pscancatcher and pscanparser for installation, as shown in the following figure.

Figure 42 Install a passive scanner

 

3.     Select the installation path. The default is /usr/local, as shown in the following figure.

Figure 43 Select the installation path

 

4.     Configure the listening port. The default value is 7890, as shown in the following figure.

Figure 44 Configure the listening port

 

5.     Set the NIC interfaces for the passive scanner to capture packets. You can add multiple interfaces. To add more interfaces, select y as shown in the following figure.

Figure 45 Configure and add NIC interfaces

 

6.     Enter the IP address of the active scanner as shown in the following figure.

Figure 46 Enter the active scanner address

 

7.     After installation, restart the operating system as shown in the following figure.

Figure 47 Restart the operating system

 


Log in to Unified Platform

1.     Enter http://ip_address:30000 in the address bar of a browser, and then press Enter. The ip_address parameter represents the northbound service VIP of the cluster where Unified Platform resides. The login page shown in the following figure opens.

Figure 48 Log in to the deployed components page

 

2.     Enter the operator name and password (admin and Pwd@12345 by default), and click Login to access the homepage of the deployed components.

3.     If multiple scenarios are installed on Unified Platform, you can click the  icon in the top right corner of the homepage and select Change View to switch the view. After the view switch is complete, the system displays only the menu items associated with the new view. The default view is universe, which displays menu items for all of the installed scenarios.

 


Obtain licensing

Support for preinstalled licenses varies by component. For more information, see H3C AD-NET&U-Center 2.0 License Matrixes.

If you have purchased a product license, use the license code in the software authorization letter for license registration. For a project trial, contact the relevant H3C marketing personnel to apply for a trial license.

For more information about requesting and installing the license, see H3C Software Product Remote Licensing Guide.

After you install the license for the product on the license server, connect to the license server from the license management page to obtain the license. To do that, perform the following tasks:

1.     Log in to the deployed components page.

2.     Click the System tab. From the navigation pane, select License Management > License Information to access the license information page, as shown in the following figure.

Figure 49 License information page

 

3.     Configure the following information:

¡     IP Address: Specify the IP address of the server hosting the license server.

¡     Port: The port number is 5555 by default, which is the same as the port number of the license server service.

¡     Username: Client name configured on the license server.

¡     Password: Password for the client name configured on the license server.

4.     After the configuration is complete, click Connect to set up a connection to the license server. After the connection is established, the system automatically obtains license information from the license server.

 


Backup & restoration

You can back up and restore components on Unified Platform. For more information, see H3C Unified Platform Deployment Guide.

 


Uninstall components

CAUTION

CAUTION:

·     Uninstall a module or application with caution, because this operation will delete the related data. Before the uninstall operation, verify that the module or application is not in use.

·     If you select to uninstall an application on which other applications rely, those dependent applications will be selected automatically.

 

Uninstall components on the convergence deployment page of Matrix

1.     Log in to Matrix, and access the Deploy > Convergence Deployment page.

2.     Select the components you want to uninstall (for example, select components of version E7301), and then click Uninstall. Confirm the applications to uninstall, and then click OK to uninstall the selected components.

Figure 50 Confirm the applications to uninstall

 

Uninstall a Windows scanner

From the Start menu of Windows, locate the Uninstall EPSScanner program in the EPSScanner folder, and run it to uninstall the Windows scanner.

Figure 51 Uninstall EPSScanner

 

Uninstall a Linux passive scanner

You must uninstall a passive scanner separately. Uninstalling an active scanner does not automatically uninstall the passive scanner.

1.     Access the passive scanner installation path, which defaults to the /usr/local/psvscan directory, as shown in the following figure.

cd /usr/local/psvscan/

Figure 52 Passive scanner installation path

 

2.     Execute the uninstall command as shown in the following figure.

sh uninstall_psvscan.sh

Figure 53 Uninstall a passive scanner

 

Uninstall a Linux scanner

Access the Linux scanner installation directory, which defaults to the /usr/local/EScan directory. Execute the sh uninstall.sh script to uninstall the Linux scanner.
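For example, assuming the default installation directory:

cd /usr/local/EScan

sh uninstall.sh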

 

 


Expand

CAUTION

CAUTION:

Before you expand a single-host deployment to a cluster, make sure the system still has the installation packages for the installed modules. If the installation packages were deleted, the expansion will fail. In that case, upload the installation packages again and retry the expansion.

 

Expand single-host mode to cluster mode

 

NOTE:

All EAD modules support expansion from single-host mode to cluster mode. For supported versions, see the release notes. The following uses the EAD EIA module as an example.

 

To expand, add two master nodes on Matrix to form a three-host cluster with the original master node. Then, expand Unified Platform and the EAD EIA module in sequence.

Expand Matrix

For more information, see H3C Unified Platform Deployment Guide.

Expand Unified Platform

For more information, see H3C Unified Platform Deployment Guide.

Expand the EAD EIA module

1.     Log in to Matrix, and access the Deploy > Convergence Deployment page.

2.     Select the EAD EIA module you want to expand, and click the  icon in the Actions column to access the expansion parameter configuration page.

3.     On the parameter configuration page, select the nodes you want to add, and then click Expand to perform expansion.

 


FAQ

What should I do upon a component deployment or upgrade failure?

The deployment or upgrade of a component might fail due to timeout errors. In this situation, retry the deployment or upgrade. If the deployment or upgrade still fails, contact Technical Support for troubleshooting.

Matrix

How can I configure the aging timer for the master nodes in the Matrix cluster?

1.     Log in to the backend of all master nodes in the cluster.

2.     Open the navigator_config.json configuration file, and edit the values of the matrixLeaderLeaseDuration and matrixLeaderRetryPeriod parameters. Make sure the parameter settings are the same on all master nodes in the cluster. If the two parameters are not in the configuration file, manually add them.

For example, to edit the values for matrixLeaderRetryPeriod to 2 and matrixLeaderLeaseDuration to 30:

[root@matrix01 ~]# vim /opt/matrix/config/navigator_config.json

{

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2

}

3.     After modification, restart the cluster service.

[root@matrix01 ~]# systemctl restart matrix

 

 

NOTE:

·     matrixLeaderLeaseDuration: Sets the aging timer of the primary node in the cluster. The value is a positive integer and must be greater than or equal to matrixLeaderRetryPeriod × 10. For example, with matrixLeaderRetryPeriod set to 2, matrixLeaderLeaseDuration must be at least 20, so the value 30 in the example above is valid.

·     matrixLeaderRetryPeriod: Sets the interval at which the cluster refreshes the primary node lock. The value is a positive integer.

·     To ensure cluster stability, make sure all cluster nodes have consistent configurations in the /opt/matrix/config/navigator_config.json file.

 

What should be done if the ETCDINSTALL phase takes too long during the scale-out of Matrix?

If the scale-out of Matrix does not complete for a long time, check the scale-out node's logs on the cluster deployment page to determine whether the system stays in the ETCDINSTALL phase (in the ETCDINSTALL-PENDING state for over 15 minutes). If executing the etcdctl member list command at the backend of the original standalone environment returns a failure, restore the environment to its pre-scale-out state as follows before performing the scale-out again:

1.     Log in to the backend of the original standalone environment.

2.     Execute the cp -f /opt/matrix/k8s/deployenv.sh.bk /opt/matrix/k8s/deployenv.sh command to restore the deployenv.sh script.

3.     Stop the Matrix service on the node by executing the systemctl stop matrix command as the root user. Use the systemctl status matrix command to verify that the Matrix service is stopped. If the Matrix service is stopped, the inactive (dead) value is displayed for the Active field.

[root@master1 ~]# systemctl stop matrix

Non-root users can stop the Matrix service by using the sudo /bin/bash -c "systemctl stop matrix" command.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl stop matrix"

4.     Use the mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/matrix command to stop kube-apiserver. Verify that the kube-apiserver service is stopped by using the docker ps | grep kube-apiserver command. If no information is output, the service has stopped.

[root@master1 ~]# mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/matrix

[root@master1 ~]# docker ps | grep kube-apiserver  // Verify that kube-apiserver is stopped.

[root@master1 ~]#  // If no information is output, the service has stopped.

5.     Completely stop the etcd service by using the systemctl stop etcd command as the root user, and then verify that the etcd service is stopped by using the systemctl status etcd command. If the etcd service is stopped, the inactive (dead) value is displayed for the Active field. Execute the rm -rf /var/lib/etcd/default.etcd/ command to delete the etcd data directory, and make sure no data directory exists in /var/lib/etcd.

[root@master1 ~]# systemctl stop etcd

[root@master1 ~]# rm -rf /var/lib/etcd/default.etcd/

[root@master1 ~]# ll /var/lib/etcd/

Non-root users can use the sudo /bin/bash -c "systemctl stop etcd" command to completely stop the etcd service, and use the sudo /bin/bash -c "rm -rf /var/lib/etcd/default.etcd/" command to delete the etcd data directory, ensuring no data directory exists in /var/lib/etcd.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl stop etcd"

[admin@node4 ~]$ sudo /bin/bash -c "rm -rf /var/lib/etcd/default.etcd/"

[admin@node4 ~]$ ll /var/lib/etcd/

6.     Enter the ETCD recovery script directory.

[root@master1 ~]# cd /opt/matrix/k8s/disaster-recovery/

7.     Before executing the etcd recovery script, locate the latest backup data file Etcd_Snapshot_Before_Scale.db in the etcd backup directory /opt/matrix/backup/etcd_backup_snapshot/.

¡     The recovery operation command for the root user is as follows:

[root@master1 ~]# bash etcd_restore.sh Etcd_Snapshot_Before_Scale.db

¡     The recovery operation command for non-root users is as follows:

[admin@node4 ~]$ sudo bash etcd_restore.sh Etcd_Snapshot_Before_Scale.db

8.     Execute the systemctl restart etcd command to restart the etcd service as a root user.

[root@master1 ~]# systemctl restart etcd

Non-root users can use the sudo /bin/bash -c "systemctl restart etcd" command to restart the etcd service.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl restart etcd"

9.     Execute the systemctl restart matrix command to restart the Matrix service as a root user.

[root@master1 ~]# systemctl restart matrix

Non-root users can use the sudo /bin/bash -c "systemctl restart matrix" command to restart the Matrix service.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl restart matrix"

10.     Restore kube-apiserver.

[root@master1 ~]# mv /opt/matrix/kube-apiserver.yaml /etc/kubernetes/manifests/

11.     After failure recovery, access the Matrix cluster deployment page, and then click Start Deployment to perform scale-out again.

What should I do if the page cannot be accessed after Matrix installation?

1.     Try to recover by executing the rm -rf /opt/matrix/data/ && systemctl restart matrix.service command.

2.     If the issue persists, manually upload and decompress the Matrix installation package, then execute the uninstall.sh and install.sh scripts in sequence to uninstall and reinstall the Matrix service.

3.     If the issue persists, contact Technical Support.

What should I do if adding a node to Matrix fails?

If you fail to add a node to Matrix and the java.lang.NoClassDefFoundError error is logged in the /var/log/matrix-diag/Matrix/Matrix/matrix.log file, perform the following operations:

1.     Try to recover by executing the rm -rf /opt/matrix/data/ && systemctl restart matrix.service command.

2.     If the issue persists, manually upload and decompress the Matrix installation package, then execute the uninstall.sh and install.sh scripts in sequence to uninstall and reinstall the Matrix service.

3.     If the issue persists, contact Technical Support.

What should I do if Matrix deployment fails?

When Matrix deployment fails, if the phase IMAGE_INSTALL end. cname=ImageInstallPhase, phaseResult=false log is printed, the deployment fails at the K8S phase. To resolve this issue:

1.     Try to recover by executing the rm -rf /opt/matrix/data/ && systemctl restart matrix.service command.

2.     If the issue persists, manually upload and decompress the Matrix installation package, then execute the uninstall.sh and install.sh scripts in sequence to uninstall and reinstall the Matrix service.

3.     If the issue persists, contact Technical Support.

How can I switch to the dual stack mode in Matrix?

1.     Log in to Matrix and navigate to the Deploy > Clusters > Cluster Parameters page.

2.     Click Edit, enable the dual stack mode, and then click Apply.

3.     Switch to the dual stack mode:

¡     To switch from IPv4 to the dual stack mode, enter the IPv6 address of the node and the IPv6 address of the northbound service VIP separately. You must configure the node IPv6 address first.

For more information, see H3C Unified Platform Operating System Installation Guide.

¡     To switch from IPv6 to the dual stack mode, enter the IPv4 address of the node and the IPv4 address of the northbound service VIP separately. You must configure the node IPv4 address first.

For more information, see H3C Unified Platform Operating System Installation Guide.

How can I enable Unified Platform application services on Matrix?

1.     Log in to Matrix and navigate to the OBSERVE > Monitor > Application Monitoring page.

2.     Expand a component to view status of the applications of that component.

3.     Click the  or  icon in the Actions column for an application to enable or disable the application.

Figure 54 View application services

 

Common browser issues

How can I access the Matrix page through mapped IP address?

Matrix supports external browser access to the Web page through mapped node IP address and virtual IP address. It supports NAT mapping and domain name mapping, and does not support port mapping. Port 8443 must be used.

To access the Matrix page by using a mapped IP address, perform the following operations on each cluster node:

1.     Add the mapped IP address (or domain name) to the "httpHeaderHost" attribute value in /opt/matrix/config/navigator_config.json (if the attribute does not exist, add it manually, and separate multiple IP addresses or domain names with commas). For example, "httpHeaderHost": "10.10.10.2,10.10.10.3".

2.     After configuration, you can check if the configuration format is correct by running cat /opt/matrix/config/navigator_config.json | jq.

3.     Restart the service by executing service matrix restart for the modification to take effect. Make sure the settings on all cluster nodes are consistent. A consolidated command sequence is shown below.
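For example, the full sequence on one node is as follows (the IP addresses are the sample values from step 1):

vim /opt/matrix/config/navigator_config.json    # set "httpHeaderHost": "10.10.10.2,10.10.10.3"

cat /opt/matrix/config/navigator_config.json | jq    # verify the JSON format

service matrix restart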

 

 

NOTE:

To ensure cluster stability, make sure all cluster nodes have consistent configurations in the /opt/matrix/config/navigator_config.json file.

 
