H3C SeerEngine-SDWAN Deployment Guide-R73xx-5W100

H3C SeerEngine-SDWAN

Deployment Guide

Document version: 5W100-20251031

 

Copyright © 2025 New H3C Technologies Co., Ltd. All rights reserved.

No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd.

Except for the trademarks of New H3C Technologies Co., Ltd., any trademarks that may be mentioned in this document are the property of their respective owners.

The information in this document is subject to change without notice.


Contents

Introduction
Installation procedure
Preparing for installation
Hardware and software requirements
Hardware requirements
Operating system requirements
Client requirements
Obtaining software packages
Verifying the software packages
Pre-installation checklist
Installation planning
Planning disk partitions
Installing the operating system and software dependencies
Installing Matrix
Uploading the Matrix installation package
Editing the configuration file as a non-root user
Installing Matrix
(Optional.) Configuring SSH
Modifying the SSH service port number
Configuring password-free SSH login
Deploying Unified Platform
Pre-deployment check
Creating a Matrix cluster
Logging in to Matrix
Configuring cluster parameters
Creating a cluster
Deploying Unified Platform applications
Deploying the Unified Platform Base application package
Deploying SeerEngine-SDWAN
Logging in to Matrix
Managing the installation packages
Selecting applications
Selecting installation packages
Configuring resources
Configuring parameters
Deploying SeerEngine-SDWAN components
Viewing component details
Accessing the SeerEngine-SDWAN page
Registering the software
Installing the license on the license server
Obtaining the license authorization
Backing up and restoring SeerEngine-SDWAN configuration
Upgrading SeerEngine-SDWAN
Uninstalling SeerEngine-SDWAN
FAQ
Matrix
How can I configure the aging timer for the master nodes in the Matrix cluster?
What should be done if the ETCDINSTALL phase takes too long during the scale-out of Matrix?
What should I do if the page cannot be accessed after Matrix installation?
What should I do if adding a node to Matrix fails?
What should I do if Matrix deployment fails?
How can I switch to the dual stack mode in Matrix?
How can I enable Unified Platform application services on Matrix?
Common browser issues
How can I access the Matrix page through mapped IP address?
Component deployment or upgrade failure


Introduction

This document describes the procedure for installing SeerEngine-SDWAN, which acts as the WAN controller for service automation and intelligent traffic engineering in a branch network scenario.

SeerEngine-SDWAN is deployed on Matrix, which is a Kubernetes-based platform that orchestrates and schedules Docker containers. On Matrix, you can build Kubernetes clusters, deploy microservices, and implement O&M monitoring of systems, Docker containers, and microservices.

 


Installation procedure

Deploy SeerEngine-SDWAN on the convergence deployment page of Matrix as shown below.

Table 1 SeerEngine-SDWAN installation procedure

Task

Step

Description

Prepare servers

Prepare one or three servers as needed.

Required.

For information about hardware and software requirements, see "Hardware and software requirements."

Obtain the installation packages

Select and install the corresponding components and dependent Unified Platform applications based on the function requirements, hardware configuration, and resource level relationship.

Required.

For the installation package descriptions, see "Obtaining software packages."

Plan disk partitions

Planning disk partitions

Required.

Install the operating system and dependencies

Install the operating system on the server

Required.

See "Installing the operating system and software dependencies."

Deploy Matrix

Deploy Matrix.

Required.

For more information, see "Installing Matrix."

Deploy Unified Platform

Deploy Unified Platform

Required.

For more information, see "Deploying Unified Platform."

Deploy the controller on the convergence deployment page of Matrix

Logging in to Matrix

Required.

Managing the installation packages

Required.

Upload and deploy the installation packages of optional components and the controller on the convergence deployment page.

Selecting applications

Required.

Selecting installation packages

Required.

Configuring resources

Required.

Configuring parameters

Required.

 

 


Preparing for installation

Hardware and software requirements

Hardware requirements

You must deploy, upgrade, and uninstall the SeerEngine-SDWAN component from Matrix. Before deploying the SeerEngine-SDWAN component, make sure the Unified Platform UDTP_Base component has been successfully deployed. The component supports both standalone deployment and cluster deployment on physical servers and virtual machines (VMs). In standalone deployment mode, prepare one server. In cluster deployment mode, prepare a minimum of three servers. You can obtain hardware configuration requirements through either of the following methods:

·     Access http://iservice.h3c.com, open the hardware resource calculation page, and then enter the required information for calculation. The calculation result is for reference only.

·     Contact Technical Support.

 

CAUTION

CAUTION:

·     Deployment on physical hosts is recommended for a middle-/large-scale scenario (with more than 200 devices).

·     The CPU, memory, and disk resources allocated to a VM must meet the recommended capacity requirements, and physical resources of the corresponding capacities must exist. Do not enable the overcommitment mode, which allocates more resources than are physically available. Additionally, reserve resources for the VMs. Otherwise, the system environment will be unstable.

·     The etcd partition can share a physical drive with other partitions. As a best practice, use a separate drive for the etcd partition.

·     To deploy the SeerEngine-SDWAN controller on a VM managed by VMware, you must enable the promiscuous mode and forged transmits on the host where the VM resides.

 

Operating system requirements

As a best practice, install the NingOS operating system. For more information, see H3C Unified Platform Deployment Guide.

Client requirements

You can access SeerEngine-SDWAN directly through a browser and do not need to install a client. As a best practice, use Google Chrome 96, Firefox 97, or a higher version.

Obtaining software packages

To deploy SeerEngine-SDWAN, you must first deploy Matrix, because SeerEngine-SDWAN must be deployed from Matrix. When installing SeerEngine-SDWAN, you must select the application packages to install as shown in Table 2. First, bulk upload the installation packages on the convergence deployment page of Matrix.

 

 

NOTE:

·     A Unified Platform application package name might vary by software version. For more information, see the product release notes. This section uses SeerEngine-SDWAN R73xx together with Unified Platform E73xx as examples.

·     Table 2 shows the application package names, where version represents the software version number and platform represents the CPU architecture.

·     For the system to run correctly, you must install the required application packages. You can install optional application packages as needed to use related features.

 

Table 2 Application installation packages

Installation package name

Description

Remarks

UDTP_Base_version_platform.zip

Provides basic functions such as convergence deployment, user management, permission management, resource management, tenant management, menu management, log center, backup & restoration, and health check.

Required.

BMP_Common_version_platform.zip

Provides dashboard management, alarms, alarm aggregation, and alarm subscription.

Required.

BMP_Connect_version_platform.zip

Provides hierarchical site management, WebSocket channel management, and NETCONF channel management.

Required.

H3C_WVAS-version_platform.zip

Provides value-added WAN services for the system.

Required.

BMP_ExtensionRDR_version_platform.zip

Provides remote disaster recovery (RDR), snapshot rollback, certificate services, self-monitoring, intelligent algorithm library, single sign-on, and password platform.

Optional.

H3C_SEERENGINE_SDWAN-version_platform.zip

Provides branch WAN network management services for the system.

Required.

 

Verifying the software packages

After uploading installation packages, first perform MD5 verification on each software package to ensure its integrity and correctness.

1.     Identify the uploaded installation packages.

[root@node1~]# cd /opt/matrix/app/install/packages/

[root@node1 packages]# ls

BMP_Common_E7301_x86.zip           BMP_Connect_E7301_x86.zip

2.     Obtain the MD5 value of an installation package, for example, UDTP_Base_E7301_x86.zip.

[root@node1 packages]# md5sum UDTP_Base_E7301_x86.zip

652845e0b92bbdff675c7598430687e2  UDTP_Base_E7301_x86.zip

3.     Compare the obtained MD5 value with the MD5 value released with the software. If they are the same, the installation package is correct.

Pre-installation checklist

Table 3 Pre-installation checklist

Item

Requirements

Server

Hardware

The CPU, memory, disk, and NIC requirements for installing SeerEngine-SDWAN are met.

Software

The system time settings are configured correctly. As a best practice, configure NTP on each node and specify the same time source for all the nodes.

Client

The browser version meets the requirements.

 

To ensure correct operation of the services, set the server's CPU power mode to high performance and disable the Patrol Read (PR) and Consistency Check (CC) features of the RAID controller. If PR and CC are not supported, you do not need to disable them. For the specific procedures, see the product manuals for the servers and RAID controllers, or contact the technical support of the server or RAID controller vendor.
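For example, on a Linux server where the cpupower tool is available (the same tool referenced in the pre-deployment check later in this guide), you can check and set the CPU frequency governor as follows. This is only an illustrative sketch; the exact procedure depends on the server model and BIOS settings:

[root@node1 ~]# cpupower frequency-info | grep "The governor"

  The governor "performance" may decide which speed to use

[root@node1 ~]# cpupower frequency-set -g performance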


Installation planning

Planning disk partitions

Plan the RAID arrays and partitions for disks based on different service scales and server configuration requirements. As a best practice, configure disk settings and perform disk partitioning as instructed at https://iservice.h3c.com/, and do not use automatic partitioning.



Installing Matrix

Uploading the Matrix installation package

IMPORTANT

IMPORTANT:

·     To avoid file damage, use binary mode if you use FTP or TFTP to upload the package.

·     If the Docker version is 20.10.24, you can directly install Matrix E7105H04 (or later) or E7302 (or later). If the Docker version is earlier than 20.10.24, you must first install any Matrix version earlier than E7105H04 or E7302, then upgrade the Docker version to 20.10.24, and finally upgrade the Matrix version to E7105H04 (or later) or E7302 (or later).

 

1.     Copy or use a file transfer protocol to upload the installation package to the target directory on the server.

¡     (Recommended.) Enter the /root directory or a directory created in the /root directory if you log in as the root user.

¡     (Recommended.) Enter the /home/admin directory if you log in as a non-root user (for example, admin).

2.     After you upload the Matrix installation package, perform MD5 verification on the installation package as described in "Verifying the software packages".
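For example, assuming the Matrix installation package is named UDTP_Matrix_E7301_x86_64.zip (the name varies by version), obtain its MD5 value and compare it with the MD5 value released with the software:

[root@node1 ~]# md5sum UDTP_Matrix_E7301_x86_64.zip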

Editing the configuration file as a non-root user

If you install the software package as the root user or install the NingOS operating system as the admin user, you can skip this section directly.

1.     Execute the su root command to switch to the root user, and then view the /etc/passwd file as the root user. Identify whether the configured non-root username (user in this example, as shown in the following output) is the same as that in the configuration file. If not, modify the corresponding username in the configuration file. Leave the other parameters unchanged.

[root@node1 ~]# vim /etc/passwd

user:x:1000:1001:user:/home/user:/bin/bash

2.     As a root user, edit the /etc/sudoers file.

[root@node1 ~]# vim /etc/sudoers

## Allow root to run any commands anywhere

root    ALL=(ALL)       ALL

user    ALL=(root)       NOPASSWD:/bin/bash

 

## Allows members of the 'sys' group to run networking, software,

## service management apps and more.

# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

 

## Allows people in group wheel to run all commands

%wheel  ALL=(ALL)       ALL

user    ALL=(root)       NOPASSWD:/bin/bash

user    ALL=(root)       NOPASSWD:/usr/bin/rpm,/bin/sh

3.     As a root user, edit the /etc/pam.d/login file.

[root@node1 ~]# vim /etc/pam.d/login

#%PAM-1.0

auth       substack     system-auth

auth     [user_unknown=ignore success=ok ignore=ignore auth_err=die default=bad] pam_securetty.so

4.     As a root user, edit the /etc/ssh/sshd_config file.

[root@node1 ~]# vim /etc/ssh/sshd_config

#LoginGraceTime 2m

PermitRootLogin no

5.     After editing the configuration file, execute the systemctl restart sshd command to restart the sshd service.
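For example:

[root@node1 ~]# systemctl restart sshd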

Installing Matrix

 

NOTE:

Make sure the installation users are the same for all nodes. For a non-root installation user, add the sudo /bin/bash instruction before the script execution command.

 

1.     Access the storage directory of the Matrix installation package.

2.     Execute the unzip UDTP_Matrix_version_platform.zip command, where UDTP_Matrix_version_platform.zip represents the installation package name, the version argument represents the version number, and the platform argument represents the CPU architecture type (x86_64 in this example). The following example uses the root user.

[root@node1 ~]# unzip UDTP_Matrix_E7301_x86_64.zip

[root@node1 ~]# cd UDTP_Matrix_E7301_x86_64

[root@node1 UDTP_Matrix_E7301_x86_64]# ./install.sh

Complete!

3.     Use the systemctl status matrix command to identify whether the Matrix service is installed correctly. The Active field displays active (running) if the platform is installed correctly.
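For example (the output below is abbreviated and illustrative):

[root@node1 ~]# systemctl status matrix | grep Active

   Active: active (running) since ...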

4.     Change the language setting (Chinese by default) for the Web interface to English as follows:

a.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file.

b.     Change the value for the defaultLanguage field to en as follows:

If the field is not available in the file, manually add this field and add a comma after the field.

[root@node4 ~]#  vim /opt/matrix/config/navigator_config.json

{

"defaultLanguage":"en",

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"defaultPackages": [],

"allowDeployedPackageIds": ["UNIFIED-PLATFORM-BASE"],

"url": "http:””://${vip}:30000/central/index.html#/ucenter-deploy",

"theme":"darkblue",

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 22,

"sshLoginMode": "secret",

"features":{"stopNtpServerBeyondThreshold":"false"}

}

c.     Execute the systemctl restart matrix command to restart the Matrix service and have your configuration take effect.

d.     Follow the previous steps to configure other nodes.

(Optional.) Configuring SSH

Modifying the SSH service port number

A Matrix cluster installs, upgrades, and repairs nodes and performs application deployment and monitoring through SSH connections. On each node, the SSH server listens for client connection requests on port 22 by default. After a TCP connection is established between a node and the SSH server, data can be exchanged between them.

You can modify the SSH service port number to improve the SSH connection security.

 

IMPORTANT

IMPORTANT:

·     Make sure all nodes are configured with the same SSH service port number.

·     The port number range is 1 to 65535. As a best practice, do not use well-known port numbers between 1 and 1024. Do not use port numbers already defined in the port usage guide for any solution.

·     If you change the SSH service port number for a deployed cluster, verify that all service components support the new port number. Otherwise, the SSH service might fail to start.

·     To upgrade Matrix through an ISO image, make sure the contents in the navigator_config file on all cluster nodes are the same. To view detailed information in the navigator_config file, use the vim /opt/matrix/config/navigator_config.json command.

·     To ensure cluster stability, make sure all cluster nodes have consistent configurations in the /opt/matrix/config/navigator_config.json file.

·     To change the SSH service port number, see the port usage section in the usage guidelines of the associated product.

 

Modifying the SSH service port number for the server of each node

1.     If the cluster has not been deployed, log in to the CLI of the node and execute the netstat -anp | grep port-number command (replace port-number with the target port number) to identify whether the specified port number is occupied. If it is not occupied, no information is returned. If it is occupied, information similar to the example below is returned.
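For example, to check port 12345 (the occupied-port output below is illustrative only; the actual protocol, process name, and PID will differ in your environment):

[root@node1 ~]# netstat -anp | grep -w 12345

tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      12345/process-name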

If the cluster has already been deployed, in addition to the preceding check, execute the following commands to identify whether any service containers in the environment are using the specified port (check for other forms of port usage as necessary). More specifically:

¡     Port number 12345 is not used, and you can modify the port number to 12345.

[root@node1 ~]# kubectl get svc -A -oyaml | grep nodePort | grep -w 12345

[root@node1 ~]# kubectl get pod -A -oyaml | grep hostPort | grep -w 12345

¡     Port number 1234 is occupied by nodePort or hostPort, and you cannot modify the port number to 1234.

[root@node1 ~]# kubectl get svc -A -oyaml | grep nodePort | grep -w 1234

        nodePort: 1234

[root@worker ~]# kubectl get pod -A -oyaml | grep hostPort | grep -w 1234

        hostPort: 1234

2.     Use the vim /etc/ssh/sshd_config command to open the configuration file of the sshd service. Modify the port number in the configuration file to the target port number (for example, 12345), and remove the comment symbol (#) before the Port line.

Figure 1 The port number before modification is 22

 

Figure 2 The port number after modification
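Because the figures are not reproduced here, the following text sketch shows the intended change to the Port directive in /etc/ssh/sshd_config, assuming the new port number is 12345:

Before modification:
#Port 22

After modification:
Port 12345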

 

3.     After modifying the port number, restart the sshd service.

[root@node-worker ~]# systemctl restart sshd

4.     Identify whether the port number is successfully modified. The port number is successfully modified if the following information is returned.

The following uses the configuration on a master node for example.

[root@node-worker ~]# netstat -anp | grep -w 12345

tcp        0      0 0.0.0.0:12345            0.0.0.0:*               LISTEN      26212/sshd

tcp6       0      0 :::12345                 :::*                    LISTEN      26212/sshd

Modifying the SSH service port number for each Matrix node

1.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file. Identify whether the sshPort field exists in the file.

¡     If yes, modify the value for the field to the target value (12345 in this example).

¡     If not, manually add the field and specify a value for it.

{

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 12345

}

2.     After modification, restart the Matrix service.

[root@node-worker ~]# systemctl restart matrix

3.     Identify whether the port number is successfully modified. If yes, the last message in the log is as follows:

The following uses the configuration on a master node for example.

[root@node-worker ~]# cat /var/log/matrix-diag/Matrix/Matrix/matrix.log | grep "ssh port"

2022-03-24T03:46:22,695 | INFO  | FelixStartLevel  | CommonUtil.start:232 | ssh port = 12345.

Configuring password-free SSH login

The primary master node of the cluster manages and monitors all nodes in the cluster over SSH connections. After you change the SSH login password for a node through the command line, you must also change that password on the Matrix Web interface and in any other place (such as a jump server or an application deployed on Matrix) that saves the password. This process is time-consuming, labor-intensive, and error-prone.

After password-free SSH login is configured on each node, you are not required to change a password for a node in multiple places. You can also configure settings for other nodes from a node without using an SSH login password.

You can configure password-free SSH login for the root user account or a non-root user account.

 

CAUTION

CAUTION:

·     Make sure all nodes in the cluster use the same SSH login method. If you change the SSH login method for a node after the Matrix service is started, you must make that change on all the other nodes and restart the Matrix service for the nodes one by one.

·     You can configure password-free SSH login before cluster deployment, Matrix scale-out, and node rebuild or upgrade. Make sure you complete the password-free SSH login configuration on all nodes before cluster deployment, Matrix scale-out, and node rebuild or upgrade.

·     If you reinstall the operating system after Matrix deployment (in cluster or standalone mode), make sure the password-free SSH login configuration is completed again on all nodes. In addition, make sure the SSH login method is password-free login on all nodes.

 

Configuring password-free SSH login for the root user account

Log in to the CLI of each node to configure password-free SSH login. The following procedure uses node1 as an example.

 

NOTE:

If the system prompts that a file or directory does not exist when you execute the ssh-keygen -R command, ignore the message, because this is normal.

 

1.     Use the root user account to log in to the CLI of node1. Execute the following command to generate the public and private key files required for SSH key-based authentication by using the ED25519 algorithm. The default key file is /root/.ssh/id_ed25519.

[root@node1 ~]# ssh-keygen -t ed25519

Generating public/private ed25519 key pair.

Enter file in which to save the key (/root/.ssh/id_ed25519):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_ed25519

Your public key has been saved in /root/.ssh/id_ed25519.pub

The key fingerprint is:

SHA256:GLeq7ZQlnKHRTWvefTwIAlAHyeB3ZfZt0Ovnfbkcbak root@node1

The key's randomart image is:

2.     Clear old public key information on each node, and then copy the generated public key to each node (including the current node). In this example, the cluster has three master nodes and the default SSH port number 22 is used. The IP addresses of node 1, node 2, and node 3 are 192.168.227.171, 192.168.227.172, and 192.168.227.173, respectively.

[root@node1 ~]# ssh-keygen -R 192.168.227.171

[root@node1 ~]# ssh-keygen -R 192.168.227.172

[root@node1 ~]# ssh-keygen -R 192.168.227.173

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub root@192.168.227.171

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub root@192.168.227.172

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub root@192.168.227.173

3.     Perform the same procedure on all the other nodes.

4.     Use the root user account to log in to the CLI of node1 and then SSH to the current node and the other nodes to verify that password-free SSH login takes effect.

In this example, the root user logs in to node2 over SSH and the SSH port number is 22.

[root@node1 ~]# ssh -p 22 root@192.168.227.172

Configuring password-free SSH login for a non-root user account

Log in to the CLI of each node to configure password-free SSH login.

Because some commands must be executed with root permissions, you must configure both admin-to-admin password-free SSH login and root-to-admin password-free SSH login for an admin user account.

 

NOTE:

If the system prompts that a file or directory does not exist when you execute the ssh-keygen -R command, ignore the message, because this is normal.

 

1.     Configuring admin-to-admin password-free SSH login

In this example, admin accounts are used for accessing the three master nodes of the cluster.

a.     Use the admin user account to log in to the CLI of node1. Execute the ssh-keygen -t ed25519 command to generate the public and private key files required for SSH key-based authentication. The default key file is /home/admin/.ssh/id_ed25519.

b.     Clear old public key information on each node, and then copy the generated public key to each node (including the current node). In this example, the cluster has three master nodes and the default SSH port number 22 is used. The IP addresses of node 1, node 2, and node 3 are 192.168.227.171, 192.168.227.172, and 192.168.227.173, respectively.

[admin@node1 ~]$ ssh-keygen -R 192.168.227.171

[admin@node1 ~]$ ssh-keygen -R 192.168.227.172

[admin@node1 ~]$ ssh-keygen -R 192.168.227.173

[admin@node1 ~]$ ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.171

[admin@node1 ~]$ ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.172

[admin@node1 ~]$ ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.173

c.     Perform the same procedure on all the other nodes.

d.     Log in to the back end as the admin user. Log in to the current node and other nodes through SSH to identify whether the password-free SSH login configuration takes effect.

[admin@node1 ~]$ ssh -p 22 admin@192.168.227.172

2.     Configuring root-to-admin password-free SSH login

a.     Use the admin user account to log in to the CLI of node1 and switch to the root user account.

b.     Generate new public key and private key files, clear old public key information, and then copy the new public key to each node (including the current node).
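A sketch of this step, reusing SSH port 22 and the node IP addresses from the earlier example (adjust the addresses and port to your environment):

[root@node1 ~]# ssh-keygen -t ed25519

[root@node1 ~]# ssh-keygen -R 192.168.227.171

[root@node1 ~]# ssh-keygen -R 192.168.227.172

[root@node1 ~]# ssh-keygen -R 192.168.227.173

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.171

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.172

[root@node1 ~]# ssh-copy-id -p 22 -i ~/.ssh/id_ed25519.pub admin@192.168.227.173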

c.     Perform the same procedure on all the other nodes.

d.     Log in to the back end of a node as the admin user, and then switch to the root user. SSH to the admin account on the current node and other nodes to identify whether the password-free SSH login configuration takes effect.

[root@node1 ~]# ssh -p 22 admin@192.168.227.172

Configuring password-free SSH login for Matrix

1.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file, and check whether the sshLoginMode field exists in the file. If the field exists, set its value to secret. If the field does not exist, manually add the field and assign a value to it. The following configuration takes the x86 version as an example.

{

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 22,

"sshLoginMode":"secret"

}

2.     Restart the Matrix service.

[root@node1 ~]# systemctl restart matrix

3.     Verify that the configuration takes effect.

[root@node1 ~]# cat /var/log/matrix-diag/Matrix/Matrix/matrix.log | grep "sshLoginMode"

2022-03-31T20:11:08,119 | INFO  | features-3-thread-1 | CommonUtil.start:245 | ssh port = 22, sshLoginMode = secret.

 


Deploying Unified Platform

IMPORTANT

IMPORTANT:

·     In scenarios where an internal NTP server is used, make sure the system time of all nodes is consistent with the current time before deploying the cluster. In scenarios where an external NTP server is used as the clock source, make sure the time of the external NTP server is consistent with the current time. Network disconnection, failure, or time inaccuracy of the NTP server might cause deployment failure of the Matrix cluster.

·     To view the system time, execute the date command. To modify the system time, use the date -s yyyy-mm-dd or date -s hh:mm:ss command.

·     During application deployment or upgrade, do not restart the matrix service or a node and do not disconnect the server power supply. If you do so, application deployment data might be corrupted (etcd data error or disk file corruption for example), which might cause operation failure.

 

Pre-deployment check

1.     Log in to the back end of each node in turn, execute the sudo bash /opt/matrix/tools/env_check.sh command to perform environment check, and take appropriate actions according to the check results.
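For example (the check items and output vary by environment and script version):

[admin@node1 ~]$ sudo bash /opt/matrix/tools/env_check.sh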

 

 

NOTE:

·     You can execute the env_check.sh script in all operating systems supported by Unified Platform.

·     When the CPU frequency is lower than 2000 MHz, the Matrix self-check script (env_check.sh) and the health check module will print a CPU frequency alarm. Make sure the server hardware meets the requirements and the CPU power mode is set to performance (for example, on NingOS you can execute the cpupower frequency-set -g performance command).

·     To view the help and obtain more script usage methods, execute the sudo bash /opt/matrix/tools/env_check.sh -h command in the back end of the node. For example, the command used to obtain the IOPS performance of the etcd disk is sudo bash /opt/matrix/tools/env_check.sh -p -d /var/lib/etcd.

 

Manually confirm the items listed in the following table that are not checked in the env_check.sh script. Make sure the conditions for installing Matrix are met.

Table 4 Verifying the installation environment

Item

Requirements

Network port

Make sure each Matrix node has a unique network port. Do not configure subinterfaces or secondary IP addresses on the network port.

IP address

The IP addresses of network ports used by other Matrix nodes and the IP address of the network port used by the current Matrix node cannot be on the same subnet.

The source IP address that the current Matrix node uses to communicate with other nodes in the Matrix cluster must be the node IP address used in the Matrix cluster. You can execute the ip route get targetIP command to obtain the source IP address.

[root@node1 ~]# ip route get 100.100.5.10

100.100.5.10 via 192.168.10.10 dev eth0 src 192.168.5.10

Time zone

·     To avoid node adding failure on the GUI, make sure the system time zone of all Matrix nodes is the same. You can execute the timedatectl command to view the system time zone of each Matrix node.

·     When selecting a time zone, do not select Beijing.

Host name

To avoid cluster creation failure, make sure the host name meets the following rules:

·     The host name of each node must be unique.

·     Do not use the default host names, including localhost, localhost.localdomain, localhost4, localhost4.localdomain4, localhost6, and localhost6.localdomain6.

·     The host name contains a maximum of 63 characters and supports only lowercase letters, digits, hyphens, and decimal points. It cannot start with 0, 0x, hyphen, or decimal point, and cannot end with hyphen or decimal point. It cannot be all digits.

 

2.     Before you deploy the UDTP_Base_version_platform.zip component of Unified Platform, execute the cat /proc/sys/vm/nr_hugepages command on each node to identify whether HugePages is enabled. If the return result is not 0, record that value and execute the echo 0 > /proc/sys/vm/nr_hugepages command to temporarily disable hugepages. After you deploy the UDTP_Base_version_platform.zip component, replace value 0 in the echo 0 > /proc/sys/vm/nr_hugepages command with the recorded value, and then execute the command on each node to restore the HugePages configuration.
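For example, assuming the recorded value is 1024 (a hypothetical value for illustration only):

[root@node1 ~]# cat /proc/sys/vm/nr_hugepages

1024

[root@node1 ~]# echo 0 > /proc/sys/vm/nr_hugepages

After the UDTP_Base_version_platform.zip component is deployed, restore the recorded value on each node:

[root@node1 ~]# echo 1024 > /proc/sys/vm/nr_hugepages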

Creating a Matrix cluster

Logging in to Matrix

Restrictions and guidelines

On Matrix, you can perform the following operations:

·     Upload or delete the Unified Platform installation package.

·     Deploy, upgrade, expand, or uninstall Unified Platform.

·     Upgrade or rebuild cluster nodes.

·     Add or delete worker nodes.

Procedure

1.     Enter the Matrix login address in your browser and then press Enter.

¡     If the node that hosts Matrix uses an IPv4 address, the login address is in the https://ip_address:8443/matrix/ui format.

¡     If the node that hosts Matrix uses an IPv6 address, the login address is in the https://[ip_address]:8443/matrix/ui format.

ip_address represents the IP address of the node that hosts Matrix. This configuration uses an IPv4 address. 8443 is the default port number.

 

 

NOTE:

·     In cluster deployment mode, ip_address can be the IP address of any Master node in the cluster before the cluster is deployed.

·     When deploying cluster nodes, make sure no duplicate host names exist. After successfully deploying the cluster, you cannot edit the host names of the cluster nodes.

·     During cluster deployment, you cannot log in to the cluster nodes to perform any operations, or add the nodes deployed in the cluster to another cluster.

 

Figure 3 Matrix login page

 

2.     Enter the username and password, and then click Login. The cluster deployment page is displayed.

The default username is admin and the default password is Pwd@12345. If you set a different password when installing the operating system, enter that password.

To deploy a dual-stack cluster, enable the dual-stack feature.

Figure 4 Single-stack cluster deployment page

 

Figure 5 Dual-stack cluster deployment page

 

Configuring cluster parameters

Before deploying cluster nodes, first configure cluster parameters. On the Configure cluster parameters page, configure cluster parameters as described in the following two tables and then click Apply.

Table 5 Configuring single-stack cluster parameters

Parameter

Description

Northbound service Virtual IP

IP address for northbound interface services. This address must be on the same subnet as the master nodes.

Service IP pool

Address pool for IP assignment to services in the cluster. It cannot overlap with other subnets in the deployment environment. The default value is 10.96.0.0/16. Typically, the default value is used.

Container IP pool

Address pool for IP assignment to containers. It cannot overlap with other subnets in the deployment environment. The default value is 177.177.0.0/16. Typically, the default value is used.

VIP Mode

Options are Internal and External. In Internal mode, the VIP is assigned by Matrix to the cluster, and Matrix manages drift of the VIP among cluster nodes. In External mode, the VIP is assigned outside the cluster by a third-party platform or software and is not managed by Matrix. The default is Internal.

This parameter was added in E0713.

Cluster network mode

Network mode of the cluster:

Single Subnet: In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for mutual communication.

Single Subnet-VXLAN: In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for mutual communication. Only an IPv4 network is supported in this mode.

NTP server

Used for time synchronization between the nodes in the cluster. Options include Internal server and External server. If you select External server, you must specify the IP address of the server, and make sure the IP address does not conflict with the IP address of any node in the cluster.

An internal NTP server is used in this configuration. After cluster deployment is started, the system synchronizes the time first. After the cluster is deployed, the three master nodes will synchronize the time regularly to ensure that the system time of all nodes in the cluster is consistent.

To deploy an environment with upper- and lower-level nodes, configure the same NTP server for both the upper- and lower-level nodes, and make sure they have consistent system time.

External DNS server

Used for resolving domain names outside the K8s cluster. Specify it in the IP:Port format. In this configuration, this parameter is not configured.

The DNS server in the cluster cannot resolve domain names outside the cluster. The platform forwards an external domain name to a randomly selected external DNS server for resolution.

A maximum of 10 external DNS servers can be configured. All the external DNS servers must have the same DNS resolution capability, and each must be able to perform external domain name resolution independently. These DNS servers are used randomly, without precedence or sequence.

Make sure all DNS servers can access the root domain. To verify the accessibility, use the nslookup -port={port} -q=ns . {ip} command.

Self-Defined VIPs

This setting is typically used to isolate the cluster network from the management network. Make sure the self-defined VIPs do not belong to other subnets in the deployment environment.

 

Table 6 Configuring dual-stack cluster parameters

Parameter

Description

Northbound service VIP1 and VIP2

IP address for northbound interface services. This address must be on the same subnet as the master nodes. VIP1 is an IPv4 address, and VIP2 is an IPv6 address. For the northbound service VIPs, you must specify at least one IPv4 address or IPv6 address. Also, you can configure both an IPv4 address and IPv6 address. You cannot configure two IP addresses of the same version.

When configuring IPv6 addresses, make sure that they do not end with a colon.

Service IP pool

This parameter takes effect only in a dual-stack environment.

Address pool for assigning IPv4 addresses and IPv6 addresses to services in the cluster. The default IPv4 address is 10.96.0.0/16, and the default IPv6 address is fd00:10:96::/112. Typically, the default values are used. You cannot change the value after deployment.

To avoid cluster errors, make sure the subnet does not overlap with other subnets in the deployment.

Container IP pool

This parameter takes effect only in a dual-stack environment.

Address pool for assigning IPv4 addresses and IPv6 addresses to containers in the cluster. The default IPv4 address is 177.177.0.0/16, and the default IPv6 address is fd00:177:177::/112. Typically, the default values are used. You cannot change the value after deployment.

To avoid cluster errors, make sure the subnet does not overlap with other subnets in the deployment.

VIP Mode

Options are Internal and External. In Internal mode, the VIP is assigned by Matrix to the cluster, and Matrix manages drift of the VIP among cluster nodes. In External mode, the VIP is assigned outside the cluster by a third-party platform or software and is not managed by Matrix. The default is Internal.

This parameter was added in E0713.

Cluster network mode

Network mode of the cluster. Only Single Subnet mode is supported. In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for mutual communication.

NTP server

Used for time synchronization between the nodes in the cluster. Options include Internal server and External server. If you select External server, you must specify the IP address of the server, and make sure the IP address does not conflict with the IP address of any node in the cluster.

An internal NTP server is used in this configuration. After cluster deployment is started, the system synchronizes the time first. After the cluster is deployed, the three master nodes will synchronize the time regularly to ensure that the system time of all nodes in the cluster is consistent.

To deploy an environment with upper- and lower-level nodes, configure the same NTP server for both the upper- and lower-level nodes, and make sure they have consistent system time.

External DNS server

Used for resolving domain names outside the K8s cluster. Specify it in the IP:Port format. In this configuration, this parameter is not configured.

The DNS server in the cluster cannot resolve domain names outside the cluster. The platform forwards an external domain name to a randomly selected external DNS server for resolution.

A maximum of 10 external DNS servers can be configured. All the external DNS servers must have the same DNS resolution capability, and each must be able to perform external domain name resolution independently. These DNS servers are used randomly, without precedence or sequence.

Make sure all DNS servers can access the root domain. To verify the accessibility, use the nslookup -port={port} -q=ns . {ip} command.

Self-Defined VIPs

This setting is typically used to isolate the cluster network from the management network. Make sure the self-defined VIPs do not belong to other subnets in the deployment environment.

 

IMPORTANT

IMPORTANT:

If the existing NTP server cannot reach the northbound addresses, you can modify the cluster parameters to add NTP servers in the NIC network configuration after cluster deployment.

 

Creating a cluster

For standalone deployment, add one master node on Matrix. For cluster deployment, add three master nodes on Matrix.

To create a cluster:

1.     After configuring the cluster parameters, click Next.

2.     In the Master Node area, click the plus icon .

Figure 6 Adding a single-stack node

Figure 7 Adding a dual-stack node

 

3.     Configure node parameters as described in the following table, and then click OK.

Table 7 Node parameter description

Item

Description

Type

Displays the node type. Options include Master and Worker. This field cannot be modified.

IP address

Enter the planned IP address for the master node. You can add master nodes in bulk. In bulk adding mode, make sure the username and password of the master nodes are the same.

Username

Specify the user account to access the operating system. Use an account based on your configuration during system installation. All nodes in a cluster must use the same user account.

Password

Specify the password to access the operating system.

 

4.     Click Start deployment.

When the deployment progress of each node reaches 100%, the deployment finishes. After the cluster is deployed, a star icon  is displayed at the left corner of the primary master node, as shown in the following figure.

Figure 8 Cluster deployment completed

 

After the cluster is deployed, you can skip over the procedures for configuring the network and deploying applications and configure them later as needed.

Deploying Unified Platform applications

 

IMPORTANT

IMPORTANT:

·     When you bulk upload application packages simultaneously, make sure the deployment page is not closed, the PC does not enter sleep mode, and the network between the PC and cluster is not disconnected. If any of these situations occur while the system is deploying components, some components might fail to be deployed correctly. (During the deployment process, you can switch between the browser tabs, minimize the browser window, and lock the PC screen.)

·     If a cluster resource, for example, CPU or memory, reaches the usage threshold during the deployment, some components might fail to be deployed correctly. You can attempt to redeploy these components that failed to be deployed later.

·     When you bulk deploy a large number of applications, resource contention might occur, causing some applications to fail. For applications that fail to be deployed, you can click Retry on the page to attempt redeployment.

·     By default, the websocket, region, netconf, and Common application services of the Connect component, as well as the incident application service of the Common component are disabled. They are automatically enabled only when you deploy other components that depend on these application services. To manually enable them on Matrix as required by the scenario, see "How can I enable Unified Platform application services on Matrix?."

 

Deploying the Unified Platform Base application package

IMPORTANT

IMPORTANT:

When you upload installation packages, make sure the network between the browser and the cluster is operating stably and the bandwidth is not less than 10 Mbps. If the network does not meet the requirements, the installation package upload might fail or take a long time.

 

You can deploy the application packages only on the Matrix page, and you can bulk upload application packages. However, you must deploy the Base component first before deploying other applications.

1.     In the address bar of the browser, enter https://ip_address:8443/matrix/ui to log in to Matrix, where the ip_address parameter specifies the northbound service VIP.

2.     Access the Deploy > Applications page.

3.     For a single-node cluster, you can select the standard or proxy deployment mode. You cannot change the deployment mode after you install a component. This chapter uses the standard deployment mode as an example.

¡     The standard deployment mode is applicable to the systems of standard architecture and the server side of the server-proxy architecture. You can deploy all components of Unified Platform in standard mode.

¡     The proxy deployment mode applies to only the proxy side of the server-proxy architecture, applicable to U-Center products. You can deploy only the Base, Connect, UCP_BasePlat, UCP_CollectPlat components of Unified Platform in proxy mode.

 

NOTE:

To change the deployment mode, reinstall Matrix. Changing the deployment mode by only reinstalling the Base component might cause deployment issues for other components.

 

Figure 9 Selecting a deployment mode

 

4.     Click Deploy Applications.

Figure 10 Installing the Base component

 

5.     Click Upload. In the dialog box that opens, upload the Base installation package.

Figure 11 Uploading the Base installation package

 

6.     After the Base installation package is uploaded, select the Base application package on the current page and then click Next.

 

 

NOTE:

Do not select the other application packages. If you do that, you cannot install the Base component.

 

Figure 12 Base installation package uploaded

7.     On the current page, directly click Next without performing any other operations.

Figure 13 Selecting applications

8.     Click Edit to configure the Base configuration item parameters. Then, click OK to save the settings.

Table 8 Base configuration item parameters

Configuration item

Description

Resource Level

In standalone mode, options include single_large, single_medium, and single_small.

In cluster mode, options include cluster_large, cluster_medium, and cluster_small.

Deployment Protocol

Options include HTTP and HTTPS.

HTTP Protocol Port Number

The default value is 30000.

HTTPS Protocol Port Number

The default value is 30443.

CPU manufacturer

Select CPU manufacturer.

Use Third-Party Database

Select whether to use a third-party database.

Theme

Options include white and star.

Language

Options include zh_CN and en_US.

 

Figure 14 Configuring parameters

9.     After configuring the parameters, click Deploy to start deploying the Base component.

10.     After the Base component is deployed, the original Deploy > Applications page is automatically updated to the Deploy > Convergence Deployment page, where you can deploy other optional packages.

 

 


Deploying SeerEngine-SDWAN

As a best practice, deploy SeerEngine-SDWAN as a component on the convergence deployment page of Matrix. After deployment, SeerEngine-SDWAN is deployed on the host as a container.

 

 

NOTE:

After the controller is deployed, if you need to deploy optional Unified Platform application packages, make sure the version of the optional packages matches the version of the required packages.

 

Logging in to Matrix

1.     In the address bar of the browser, enter https://ip_address:8443/matrix/ui to log in to Matrix.

The ip_address parameter specifies the northbound service VIP.

2.     Access the Deploy > Convergence Deployment page.

Figure 15 Applications page

 

Managing the installation packages

Click Packages Management to access the installation package management page.

On this page, you can upload and delete installation packages. The installation package list displays names, versions, sizes, creation time, and other information of the uploaded installation packages. You can bulk upload application installation packages. After the installation packages are uploaded, click  to return to the deployment management page.

Figure 16 Uploading installation packages

 

Selecting applications

Click Install to access the Select Applications page. Select WAN Branch Scenario, and then select Uninstalled for SeerEngine-SDWAN.

 

NOTE:

After you select an application, if its dependencies are not installed, the system will automatically select them.

 

Figure 17 Selecting applications

 

Selecting installation packages

On the Select Packages page, select the application package version numbers, and click Next to access the Configure Resources page.

Figure 18 Selecting installation packages

 

Configuring resources

1.     On the Configure Resources page, select a resource level based on the scale supported by the hardware.

¡     When the device scale is less than 200, as a best practice, select the small scale for the Unified Platform + basic network management service set, and select the small scale for the WAN branch service set.

¡     When the device scale exceeds 200, as a best practice, select the default scale for the Unified Platform + basic network management service set, and select the default scale for the WAN branch service set.

2.     After selecting the resource levels, click Next to access the Configure Parameters page.

Figure 19 Configuring resources

 

Configuring parameters

TIP

TIP:

You can select the nodes to be bound only when worker nodes exist. When binding a node, make sure the selected node is in normal state. You can select only one or three nodes. You cannot select both master nodes and worker nodes.

 

1.     On the Configure Parameters page, configure relevant parameters for each node as needed.

2.     To deploy the SDWAN MSP scenario, turn on the MSP Scenario switch.

Figure 20 Configuring parameters

 

Deploying SeerEngine-SDWAN components

 

NOTE:

Both standalone deployment and cluster deployment of the SeerEngine-SDWAN component take about five to ten minutes.

 

1.     Click Deploy. On the Confirm Parameters page that opens, you can confirm deployed and undeployed nodes of dependencies.

2.     Click OK to start the deployment. Then, wait for the deployment to finish.

Figure 21 Confirming parameters

 

Viewing component details

1.     After deployment is completed, click  to the left of a component on the component management page to expand the component information, as shown in Figure 22.

2.     Click the Details icon in the Actions column to view node binding information and MSP scenario deployment information.

Figure 22 Managing components

 

Figure 23 Component details

 

Accessing the SeerEngine-SDWAN page

After deployment is completed, access the deployment management page of Unified Platform, and click the Home tab to access the SeerEngine-SDWAN page as shown in Figure 24.

Figure 24 Accessing the SeerEngine-SDWAN page

 


Registering the software

The SeerEngine-SDWAN product requires a license to operate normally after it is installed and deployed.

Installing the license on the license server

For more information about requesting and installing the license, see H3C Software Product Remote Licensing Guide.

Obtaining the license authorization

After installing the license for the product on the license server, you only need to connect to the license server from the license management page to obtain the license authorization. To do that, perform the following tasks:

1.     Log in to the system.

2.     On the top navigation bar, click System.

3.     From the navigation pane, select License > License Information.

Figure 25 License information page

 

4.     Configure the license server parameters on the page.

The following table describes each parameter.


Parameter

Description

IP Address

Specify the IP address configured on the license server used for internal communication in the cluster where Unified Platform and SeerEngine-SDWAN are deployed.

Port

Specify the service port number of the license server. The default value is 5555.

Username

Specify the username configured on the license server.

Password

Specify the user password configured on the license server.

 

5.     SeerEngine-SDWAN automatically obtains the licensing information after it connects to the license server.

 


Backing up and restoring SeerEngine-SDWAN configuration

You can back up and restore the SeerEngine-SDWAN component on Unified Platform. For more information, see H3C Unified Platform Deployment Guide.


Upgrading SeerEngine-SDWAN

On Matrix, you can upgrade the SeerEngine-SDWAN component with its configuration retained. Do not back up or restore data during the upgrade process.

To upgrade SeerEngine-SDWAN:

1.     In the address bar of the browser, enter https://ip_address:8443/matrix/ui to log in to Matrix, where the ip_address parameter specifies the configured northbound service VIP.

2.     Access the Deploy > Convergence Deployment page.

3.     Click  to the left of the SDWAN branch component to expand the component information.

Figure 26 Deployment management page (1)

 

4.     Click the Upgrade icon  in the Actions column for an application to access the upgrade page.

Figure 27 Deployment management page (2)

 

5.     If no installation packages are available, you can click Upload, and then upload the target installation packages.

Figure 28 Upgrading components (1)

 

6.     Select the installation packages to be upgraded, click Upgrade, and then confirm the upgrade before starting it.

Figure 29 Confirming the upgrade

 

7.     Wait for the upgrade to finish. After the upgrade, as a best practice, clear the browser cache before you log in to the system again.

 


Uninstalling SeerEngine-SDWAN

1.     Log in to Matrix. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.

2.     Select the components you want to uninstall, and then click Uninstall to uninstall the specified components.

Figure 30 Uninstalling components

 

3.     In the dialog box that opens, click OK. The uninstallation will finish in a moment.

Figure 31 Confirming the uninstall operation

 


FAQ

Matrix

How can I configure the aging timer for the master nodes in the Matrix cluster?

1.     Log in to the backend of all master nodes in the cluster.

2.     Open the navigator_config.json configuration file to edit values for the matrixLeaderLeaseDuration and matrixLeaderRetryPeriod parameters.  Make sure the parameter settings are the same for all master nodes in the cluster. If the two parameters are not in the configuration file, manually add them.

For example, to edit the values for matrixLeaderRetryPeriod to 2 and matrixLeaderLeaseDuration to 30:

[root@matrix01 ~]# vim /opt/matrix/config/navigator_config.json

{

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

}

3.     After modification, restart the cluster service.

[root@matrix01 ~]# systemctl restart matrix

 

 

NOTE:

·     matrixLeaderLeaseDuration: Used to set the aging timer of the primary node in the cluster. The value is a positive integer, and is greater than or equal to matrixLeaderRetryPeriod × 10.

·     matrixLeaderRetryPeriod: Used to set the interval of the lock when the cluster refreshes the primary node. The value is a positive integer.

·     To ensure cluster stability, make sure all cluster nodes have consistent configurations in the /opt/matrix/config/navigator_config.json file.

 

What should be done if the ETCDINSTALL phase takes too long during the scale-out of Matrix?

If the scale-out of Matrix fails to complete for a long period of time, check the scale-out node's logs on the cluster deployment page to determine whether the system stays in the ETCDINSTALL phase for a long time (in the ETCDINSTALL-PENDING state for over fifteen minutes from the current system time). Then execute the etcdctl member list command at the back end of the original standalone environment. If a failure is returned, restore the environment to the state before the scale-out as follows, and then perform the scale-out again:

1.     Log in to the backend of the original standalone environment.

2.     Execute the cp -f /opt/matrix/k8s/deployenv.sh.bk /opt/matrix/k8s/deployenv.sh command to restore the deployenv.sh script.
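For example:

[root@master1 ~]# cp -f /opt/matrix/k8s/deployenv.sh.bk /opt/matrix/k8s/deployenv.sh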

3.     Stop the Matrix service on the node by executing the systemctl stop matrix command as the root user. Use the systemctl status matrix command to verify that the Matrix service is stopped. If the Matrix service is stopped, the inactive (dead) value is displayed for the Active field.

[root@master1 ~]# systemctl stop matrix

Non-root users can stop the Matrix service by using the sudo /bin/bash -c "systemctl stop matrix" command.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl stop matrix"

4.     Use the mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/matrix command to stop kube-apiserver. Verify that the kube-apiserver service is stopped by using the docker ps | grep kube-apiserver command. If no information is output, the service has stopped.

[root@master1 ~]# mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/matrix

[root@master1 ~]# docker ps | grep kube-apiserver    # Verify that kube-apiserver is stopped.

[root@master1 ~]#    # No output indicates that the service has stopped.

5.     Stop the etcd service by using the systemctl stop etcd command as the root user, and then verify that the etcd service is stopped by using the systemctl status etcd command. If the etcd service is stopped, the inactive (dead) value is displayed for the Active field. Execute the rm -rf /var/lib/etcd/default.etcd/ command to delete the etcd data directory, and make sure no data directory exists in /var/lib/etcd.

[root@master1 ~]# systemctl stop etcd

[root@master1 ~]# rm -rf /var/lib/etcd/default.etcd/

[root@master1 ~]# ll /var/lib/etcd/

Non-root users can use the sudo /bin/bash -c "systemctl stop etcd" command to stop the etcd service, and use the sudo /bin/bash -c "rm -rf /var/lib/etcd/default.etcd/" command to delete the etcd data directory. Make sure no data directory exists in /var/lib/etcd.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl stop etcd"

[admin@node4 ~]$ sudo /bin/bash -c "rm -rf /var/lib/etcd/default.etcd/"

[admin@node4 ~]$ ll /var/lib/etcd/

6.     Enter the ETCD recovery script directory.

[root@master1 ~]# cd /opt/matrix/k8s/disaster-recovery/

7.     Before executing the etcd recovery script, locate the latest backup data file Etcd_Snapshot_Before_Scale.db in the etcd backup directory /opt/matrix/backup/etcd_backup_snapshot/.

○     The recovery command for the root user is as follows:

[root@master1 ~]# bash etcd_restore.sh Etcd_Snapshot_Before_Scale.db

○     The recovery command for non-root users is as follows:

[admin@node4 ~]$ sudo bash etcd_restore.sh Etcd_Snapshot_Before_Scale.db

8.     Execute the systemctl restart etcd command to restart the etcd service as a root user.

[root@master1 ~]# systemctl restart etcd

Non-root users can use the sudo /bin/bash -c "systemctl restart etcd" command to restart the etcd service.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl restart etcd"

9.     Execute the systemctl restart matrix command to restart the Matrix service as a root user.

[root@master1 ~]# systemctl restart matrix

Non-root users can use the sudo /bin/bash -c "systemctl restart matrix" command to restart the Matrix service.

[admin@node4 ~]$ sudo /bin/bash -c "systemctl restart matrix"

10.     Restore kube-apiserver.

[root@master1 ~]# mv /opt/matrix/kube-apiserver.yaml /etc/kubernetes/manifests/

11.     After failure recovery, access the Matrix cluster deployment page, and then click Start Deployment to perform the scale-out again.
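
For reference, the root-user commands in steps 2 through 10 can be chained into a single recovery script before you perform step 11. The following is a minimal sketch that only repeats the commands listed above; the verification commands (systemctl status, docker ps, and ll /var/lib/etcd/) are omitted, so still perform those checks as described in the procedure.

#!/bin/bash
# Minimal sketch: restore the original standalone node to its pre-scale-out state.
set -e
# Step 2: restore the deployenv.sh script.
cp -f /opt/matrix/k8s/deployenv.sh.bk /opt/matrix/k8s/deployenv.sh
# Step 3: stop the Matrix service.
systemctl stop matrix
# Step 4: stop kube-apiserver by moving its manifest out of the manifests directory.
mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/matrix
# Step 5: stop etcd and delete the etcd data directory.
systemctl stop etcd
rm -rf /var/lib/etcd/default.etcd/
# Steps 6 and 7: run the etcd recovery script with the pre-scale-out backup.
cd /opt/matrix/k8s/disaster-recovery/
bash etcd_restore.sh Etcd_Snapshot_Before_Scale.db
# Steps 8 and 9: restart the etcd and Matrix services.
systemctl restart etcd
systemctl restart matrix
# Step 10: restore kube-apiserver.
mv /opt/matrix/kube-apiserver.yaml /etc/kubernetes/manifests/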

What should I do if the page cannot be accessed after Matrix installation?

1.     Try to recover by executing the rm -rf /opt/matrix/data/ && systemctl restart matrix.service command.

2.     If the issue persists, manually upload and decompress the Matrix installation package, and then execute the uninstall.sh and install.sh scripts in sequence to uninstall and reinstall the Matrix service, as shown in the sketch after this list.

3.     If the issue persists, contact technical support.
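
The following is a minimal sketch of step 2, assuming the Matrix installation package has already been uploaded and decompressed on the node. The directory name below is hypothetical and depends on the package you decompress.

[root@master1 ~]# cd /root/matrix-package    # hypothetical directory containing the decompressed Matrix installation package
[root@master1 matrix-package]# /bin/bash uninstall.sh    # uninstall the Matrix service
[root@master1 matrix-package]# /bin/bash install.sh      # reinstall the Matrix service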

What should I do if adding a node to Matrix fails?

If adding a node to Matrix fails and the java.lang.NoClassDefFoundError error is logged in the /var/log/matrix-diag/Matrix/Matrix/matrix.log file, perform the following operations:

1.     Try to recover by executing the rm -rf /opt/matrix/data/ && systemctl restart matrix.service command.

2.     If the issue persists, manually upload and decompress the Matrix installation package, and then execute the uninstall.sh and install.sh scripts in sequence to uninstall and reinstall the Matrix service.

3.     If the issue persists, contact technical support.

What should I do if Matrix deployment fails?

When Matrix deployment fails, if the log message phase IMAGE_INSTALL end. cname=ImageInstallPhase, phaseResult=false is printed, the deployment failed at the K8S phase. To resolve this issue:

1.     Try to recover by executing the rm -rf /opt/matrix/data/ && systemctl restart matrix.service command.

2.     If the issue persists, manually upload and decompress the Matrix installation package, and then execute the uninstall.sh and install.sh scripts in sequence to uninstall and reinstall the Matrix service.

3.     If the issue persists, contact technical support.

How can I switch to the dual stack mode in Matrix?

1.     Log in to Matrix and navigate to the Deploy > Clusters > Cluster Parameters page.

2.     Click Edit, enable the dual stack mode, and then click Apply.

3.     Switch to the dual stack mode:

○     To switch from IPv4 to the dual stack mode, enter the IPv6 address of the node and the IPv6 address of the northbound service VIP. You must configure the node IPv6 address first.

For more information, see H3C Unified Platform Operating System Installation Guide.

○     To switch from IPv6 to the dual stack mode, enter the IPv4 address of the node and the IPv4 address of the northbound service VIP. You must configure the node IPv4 address first.

For more information, see H3C Unified Platform Operating System Installation Guide.

How can I enable Unified Platform application services on Matrix?

1.     Log in to Matrix and navigate to the OBSERVE > Monitor > Application Monitoring page.

2.     Expand a component to view the status of its applications.

3.     Click the enable or disable icon in the Actions column for an application to enable or disable the application.

Figure 32 Viewing application services

 

Common browser issues

How can I access the Matrix page through mapped IP address?

Matrix supports external browser access to the Web page through a mapped node IP address or virtual IP address. NAT mapping and domain name mapping are supported; port mapping is not. Port 8443 must be used.

To access the Matrix page by using a mapped IP address, perform the following operations on each cluster node:

1.     Add the mapped IP address (or domain name) to the "httpHeaderHost" attribute value in /opt/matrix/config/navigator_config.json. If the attribute does not exist, add it manually. Separate multiple IP addresses or domain names with commas, for example, "httpHeaderHost": "10.10.10.2,10.10.10.3". See the sketch after this list.

2.     After configuration, you can check if the configuration format is correct by running cat /opt/matrix/config/navigator_config.json | jq.

3.     Restart the service by executing service matrix restart for the modification to take effect. Make sure the settings on all cluster nodes are consistent.
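
The following is a minimal sketch of the procedure on one node, using the example addresses above; repeat it on every cluster node.

[root@matrix01 ~]# vim /opt/matrix/config/navigator_config.json    # add or edit "httpHeaderHost": "10.10.10.2,10.10.10.3"
[root@matrix01 ~]# cat /opt/matrix/config/navigator_config.json | jq    # verify that the configuration format is correct
[root@matrix01 ~]# service matrix restart    # restart the service for the modification to take effect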

 

 

NOTE:

To ensure cluster stability, make sure all cluster nodes have consistent configurations in the /opt/matrix/config/navigator_config.json file.

 

Component deployment or upgrade failure

Component deployment or upgrade might fail due to a timeout. As a best practice, immediately try the deployment or upgrade again. Alternatively, terminate the deployment or upgrade, and then try it again. If the issue persists, contact technical support.

 

 
