H3C Unified Platform Deployment Guide-E0706-5W100


Contents

About this document
Terms
Unified Platform deployment procedure at a glance
Preparing for deployment
IP addresses
Single stack
Dual stack
Application installation packages
Server requirements
Hardware requirements
Software requirements
Client requirements
Pre-installation checklist
Installing the operating system and software dependencies
Unified Platform deployment restrictions in the virtualization environment
Creating and configuring VMs on H3C CAS
Loading an ISO image file
Loading a file on a physical host
Loading a file on a VM
Installing the H3Linux operating system and Matrix
Installing the H3Linux operating system
Installing the software dependencies
Scenario-based configuration dependency
Installing Matrix (required on non-H3Linux)
Preparing for installation
Uploading the Matrix installation package
Installing Matrix
Installing Matrix as a root user
Installing Matrix as a non-root user
(Optional.) Configuring HugePages
(Optional.) Modifying the SSH service port number
Modifying the SSH service port number for the server of each node
Modifying the SSH service port number for each Matrix node
Installing Unified Platform
Creating a Matrix cluster
Logging in to Matrix
Configuring cluster parameters
Creating a cluster
Deploying the applications
Procedure
Logging in to Unified Platform
Registering the software
Installing licenses on the license server
Obtaining the device information file
Requesting an activation file
Installing the activation file
Adding a client
Obtaining the license authorization
Managing the components on Unified Platform
Preparing for deployment
Enabling NICs
Deploying components
Upgrading a component
Removing a component
Backing up and restoring the configuration
Backing up Unified Platform and its components
Restoring the configuration
Cluster failure recovery
Single node failure recovery
Procedure
Scaling out or in Unified Platform and its components
Scaling out Unified Platform in standalone mode
Scaling out Matrix
Scaling out Unified Platform
Scaling out Unified Platform in cluster mode
Scaling in Unified Platform in cluster mode
Upgrading the software
Prerequisites
Backing up data
Remarks
Upgrading E07 Matrix
Upgrading Matrix in cluster mode
Upgrading Matrix in standalone mode
Upgrading Matrix from E06 to E07
Upgrading Matrix in cluster mode
Upgrading Matrix in standalone mode
Upgrading Unified Platform
Uninstalling Unified Platform
FAQ

 


About this document

This document describes the deployment process for Unified Platform.

Terms

The following terms are used in this document:

·     H3Linux—H3C proprietary Linux operating system.

·     Matrix—Docker containers-orchestration platform based on Kubernetes. On this platform, you can build Kubernetes clusters, deploy microservices, and implement O&M monitoring of systems, Docker containers, and microservices.

·     Kubernetes (K8s)—An open-source container-orchestration platform that automates deployment, scaling, and management of containerized applications.

·     Docker—An open-source application container platform that allows developers to package their applications and dependencies into a portable container. It uses the OS-level virtualization.

·     Redundant Arrays of Independent Disks (RAID)—A data storage virtualization technology that combines many small-capacity disk drives into one large-capacity logical drive unit to store large amounts of data and provide increased reliability and redundancy.

·     Graphical User Interface (GUI)—A type of user interface through which users interact with electronic devices via graphical icons and other visual indicators.


Unified Platform deployment procedure at a glance

Unified Platform is deployed through Matrix. It supports deployment in standalone mode or cluster mode. In standalone mode, Unified Platform is deployed on a single master node and offers all its functions on this master node. In cluster mode, Unified Platform is deployed on a cluster that contains three master nodes and N (≥ 0) worker nodes, delivering high availability and service continuity. You can add worker nodes to the cluster for service expansion. A Unified Platform that has been deployed in standalone mode can be smoothly expanded to cluster mode.

Unified Platform can be deployed on physical servers or VMs.

Use the following procedure to deploy Unified Platform:

1.     Prepare for installation.

To deploy Unified Platform in standalone mode, prepare one physical server. To deploy Unified Platform in cluster mode, prepare a minimum of three physical servers.

2.     Deploy the operating system and software dependencies on the servers.

3.     Configure scenario-specific settings.

4.     Deploy Unified Platform.

In standalone mode, deploy Unified Platform on the master node. In cluster mode, deploy Unified Platform on the three or more master nodes.

 


Preparing for deployment

IP addresses

To deploy Unified Platform, plan single-stack IP addresses as described in Table 1 and plan dual-stack IP addresses as described in Table 2 in advance.

Single stack

The IP addresses can be IPv4 or IPv6 addresses.

Table 1 IP addresses

IP address

Description

Remarks

Master node 1 IP

IP address assigned to master node 1 installed with the H3Linux operating system.

In standalone mode, Unified Platform is deployed on only one master node.

The IP addresses of master nodes added to one cluster must be on the same subnet.

If the node has multiple physical NICs, make sure every physical NIC ordered before the NIC whose IP is used as the node IP in the Matrix cluster has an IP address assigned. Otherwise, the cluster will fail to be deployed, upgraded, or rebuilt. For example, if the node uses the IP of NIC ens191 as the node IP in the Matrix cluster and ens190 precedes ens191 in order, make sure ens190 has an IP assigned. To view the NIC order, execute the ifconfig command.

Master node 2 IP

IP address assigned to master node 2 installed with the H3Linux operating system.

Master node 3 IP

IP address assigned to master node 3 installed with the H3Linux operating system.

Cluster internal virtual IP

IP address for communication inside the cluster.

This address must be on the same subnet as those of the master nodes.

Northbound service VIP

IP address for northbound services.

The northbound service VIP must be on the same subnet as the master nodes.

Worker node IP

IP address assigned to a worker node.

Optional.

This address must be on the same subnet as those of the master nodes.

If the node has multiple physical NICs, make sure every physical NIC ordered before the NIC whose IP is used as the node IP in the Matrix cluster has an IP address assigned. Otherwise, the cluster will fail to be deployed, upgraded, or rebuilt. For example, if the node uses the IP of NIC ens191 as the node IP in the Matrix cluster and ens190 precedes ens191 in order, make sure ens190 has an IP assigned. To view the NIC order, execute the ifconfig command, as illustrated in the example after this table.
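The following commands are a quick way to check the NIC order and confirm that every NIC before the cluster NIC has an IP address assigned. They are standard Linux commands; the node name node1 and the NIC names ens190 and ens191 are only the example names used in the remarks above.

[root@node1 ~]# ifconfig -a                    # view the NIC order

[root@node1 ~]# ip -o addr show ens190         # verify that ens190 has an IP address assigned

[root@node1 ~]# ip -o addr show ens191         # verify that ens191 (the cluster NIC) has an IP address assigned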

 

Dual stack

Table 2 IP addresses

IP address

Description

Remarks

Master node 1 IP

IP address assigned to master node 1 installed with the H3Linux operating system.

In standalone mode, Unified Platform is deployed on only one master node.

The IP addresses of master nodes added to one cluster must be on the same subnet.

If the node has multiple physical NICs, make sure every physical NIC ordered before the NIC whose IP is used as the node IP in the Matrix cluster has an IP address assigned. Otherwise, the cluster will fail to be deployed, upgraded, or rebuilt. For example, if the node uses the IP of NIC ens191 as the node IP in the Matrix cluster and ens190 precedes ens191 in order, make sure ens190 has an IP assigned. To view the NIC order, execute the ifconfig command.

Master node 2 IP

IP address assigned to master node 2 installed with the H3Linux operating system.

Master node 3 IP

IP address assigned to master node 3 installed with the H3Linux operating system.

Cluster internal virtual IP

IP address for communication inside the cluster.

This address must be on the same subnet as those of the master nodes.

Northbound service VIP1 and VIP2

IP addresses for northbound services.

These addresses must be on the same subnet as the master nodes. VIP1 is an IPv4 address, and VIP2 is an IPv6 address. You must specify a minimum of one VIP. You can configure an IPv4 address, an IPv6 address, or both, but you cannot configure two addresses of the same IP version.

Worker node IP

IP address assigned to a worker node.

Optional.

This address must be on the same subnet as those of the master nodes.

If the node has multiple physical NICs, make sure every physical NIC ordered before the NIC whose IP is used as the node IP in the Matrix cluster has an IP address assigned. Otherwise, the cluster will fail to be deployed, upgraded, or rebuilt. For example, if the node uses the IP of NIC ens191 as the node IP in the Matrix cluster and ens190 precedes ens191 in order, make sure ens190 has an IP assigned. To view the NIC order, execute the ifconfig command.

 

Application installation packages

Table 3 describes the application installation packages required if you select the H3Linux operating system for Unified Platform. When you select another operating system, the H3Linux ISO image file is not required.

Table 3 Application installation packages

Application installation package

Description

Remarks

Dependencies

common_H3Linux-<version>.iso

Installation package for the H3Linux operating system

Required

N/A

common_PLAT_GlusterFS_2.0_<version>.zip

Provides local shared storage functionalities.

Required

N/A

general_PLAT_portal_2.0_<version>.zip

Provides portal, unified authentication, user management, service gateway, and help center functionalities.

Required

N/A

general_PLAT_kernel_2.0_<version>.zip

Provides access control, resource identification, license, configuration center, resource group, and log functionalities.

Required

N/A

general_PLAT_kernel-base_2.0_<version>.zip

Provides alarm, access parameter template, monitoring template, report, email, and SMS forwarding functionalities.

Optional

N/A

general_PLAT_network_2.0_<version>.zip

Provides basic management of network resources, network performance, network topology, and iCC.

Optional

kernel-base

general_PLAT_kernel-region_2.0_<version>.zip

Provides hierarchical management.

Optional

kernel-base

general_PLAT_Dashboard_2.0_<version>.zip

Provides the dashboard framework.

Optional

kernel-base

general_PLAT_widget_2.0_<version>.zip

Provides dashboard widget management.

Optional

Dashboard

general_PLAT_websocket_2.0_<version>.zip

Provides the southbound WebSocket function.

Optional

N/A

Syslog-<version>.zip

Provides the syslog function.

Optional

N/A

general_PLAT_cmdb_2.0_<version>.zip

Provides database configuration and management.

Optional

kernel-base

general_PLAT_suspension_2.0_<version>.zip

Allows you to configure maintenance tag tasks for resources of all types and configure the related parameters to control the resources.

Optional

N/A

general_PLAT_aggregation_2.0_<version>.zip

Provides alarm aggregation service.

Optional

kernel-base

Analyzer-AIOPS-<version>.zip

Provides the trend prediction and anomaly detection services for the time series data.

Optional

N/A

 

Analyzer-Collector-<version>.zip

Provides the data collection service for gRPC and NETCONF.

Optional

N/A

 

nsm-webdm_<version>.zip

Provides the network device management function and supports device panels.

Optional

network

 

 

 

NOTE:

·     The dashboard, network, CMDB, aggregation, and kernel-region applications depend on the kernel-base component. To install these applications, first install the kernel-base component.

·     The dashboard and widget applications are required for the dashboard function. The dashboard application must be installed before the widget application.

·     Syslog must be installed before the deployment of SeerAnalyzer.

·     After the application packages are uploaded successfully, they will be automatically synchronized to the /opt/matrix/app/install/packages/ directory on each node. You can verify the synchronization as shown in the example after this note.

·     To use HTTPS, log in to Unified Platform after the applications and components are installed and then select System > System Settings > Security Settings to enable HTTPS.

·     To install Analyzer-Collector when the SeerAnalyzer version is E61XX, you must first upgrade SeerAnalyzer E61XX to SeerAnalyzer E62XX.

·     To avoid data loss when you install CMDB of version E0706, you must disable the function of automatically deleting entries without synchronization sources for network-related resource types (for example, switches and routers) in the resource synchronization settings. If you install CMDB of version E0706P02 or later, you do not need to do that.
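The following command is a simple way to verify the package synchronization described in the note above. Run it on each node; the directory is the one given in the note.

[root@node1 ~]# ls -lh /opt/matrix/app/install/packages/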

 

Server requirements

Hardware requirements

For the hardware requirements for Unified Platform deployment and its deployment in a specific application scenario, see AD-NET Solution Hardware Configuration Guide and the server hardware configuration guide for that scenario.

 

CAUTION

CAUTION:

·     Allocate CPUs, memory, and disks to Unified Platform in the recommended sizes and make sure sufficient physical resources are available for the allocation. To ensure Unified Platform stability, do not overcommit hardware resources such as memory and drives.

·     As a best practice, install the etcd service on a disk mapped to a different physical drive than the disks for installing the system and other components. If you cannot do this, use SSDs or 7200 RPM (or higher) HDDs in conjunction with a 1G RAID controller.

 

Software requirements

The H3Linux image file contains the H3Linux operating system and Matrix software packages. After the H3Linux operating system is installed, the dependencies and Matrix are installed automatically, so no manual installation is required.

Table 4 Operating systems available for Unified Platform

Unified Platform version

Available operating system

Deployment

x86

H3Linux V1.1.2

Installing the H3Linux operating system and Matrix

 

IMPORTANT

IMPORTANT:

All nodes in the cluster must be installed with the same version of operating system.

 

Client requirements

You can access Unified Platform from a Web browser without installing any client. As a best practice, use Google Chrome 70 or later or Firefox 78 or later, with a minimum screen resolution width of 1600 pixels.

Pre-installation checklist

Table 5 Pre-installation checklist

Item

Requirements

Server or VM

Hardware

The CPU, memory, disk (also called drive in this document), and NIC settings are as required.

Software

·     The operating system meets the requirements.

·     The system time settings are configured correctly. As a best practice, configure NTP on each node to ensure time synchronization on the network.

·     The drives have been configured in a RAID setup.

Client

Google Chrome 70 or a higher version is installed on the client.

 

CAUTION

CAUTION:

·     During the Unified Platform deployment process, do not enable or disable firewall services.

·     To avoid exceptions, do not modify the system time after cluster deployment.

 

IMPORTANT

IMPORTANT:

·     As a best practice, set the server's or VM's next startup mode to UEFI. For VMware, make sure the firmware is set to EFI from the Boot Options field and deselect the Secure Boot check box.

·     Do not use KVM to install the same operating system image for multiple servers simultaneously.

·     H5 KVM is unstable in performance and issues such as slow or unsuccessful image loading might occur during installation. As a best practice, use Java KVM for installation.

·     A power failure during the installation process might cause installation failure of some service components. For function integrity, perform a reinstallation when a power failure occurs during installation.

 


Installing the operating system and software dependencies

Unified Platform deployment restrictions in the virtualization environment

·     In the virtualization environment, the CPU requirements specify the number of physical cores, and the CPU must support hyper threading (HT) and have a frequency of 2.2 GHz or higher. If Unified Platform is deployed on a VM, the number of virtual cores must be twice the specified number of physical cores, and the frequency must be 2.4 GHz or higher on the VM. The memory and disk requirements for VMs are the same as those for physical servers.

·     Some resources will be lost at the virtualization layer, typically 10% to 20%. Allocate more resources to VMs than to physical servers.

·     Deploy the environment to make sure nodes in a cluster can communicate with each other.

¡     The IP address of each node in the cluster must be able to communicate with the internal virtual IP and northbound service VIPs of the cluster.

¡     The internal communication addresses of all nodes must be able to communicate with each other, including addresses in the service IP address pool and container IP address pool.

·     VMs in the cluster cannot share one host. If they do, a failure of that host will cause all nodes in the cluster to fail.

·     Do not enable resource overcommitment for VMs. If you do that, the hardware resources allocated by the cluster to a VM might not be exclusively used by the VM, and the cluster node performance will be affected.

·     Do not set the thin provision mode for the VM storage volumes.

·     You must install Unified Platform on VM nodes in a cluster one by one. Do not clone a node with Unified Platform and then modify its IP.

·     As a best practice, install Unified Platform on a VM through mounting an ISO image rather than building a hard disk image.

·     Do not deploy VM nodes in a cluster and other I/O-intensive VMs on the same virtualization server.

·     Make sure the etcd partition and data disks use different storage pools. If an SSD storage pool is available, as a best practice, use the SSD storage pool to allocate the etcd partition. This ensures high read/write performance and high availability for the etcd partition.

Creating and configuring VMs on H3C CAS

This section describes how to create VMs based on the hosts or clusters in an H3C CAS host pool, but does not describe how to create host pools, hosts, and clusters.

 

IMPORTANT

IMPORTANT:

When Unified Platform is deployed on a VM, it cannot be migrated between VMs.

 

When creating a VM, you can specify the name, CPU, memory, disk, NIC, and operating system for the VM. Focus on the following configurations:

·     CPU configuration: Set the number of CPUs and the number of CPU cores for the VM. The number of CPUs cannot exceed the total number of CPUs on the host. By default, the number of CPU cores for a VM is 1. The number of CPU cores of a VM cannot exceed the number of CPU cores on the physical host. You can configure both the number of CPUs and the number of CPU cores.

¡     Reserve CPU: Enter the number of CPUs that the host reserves for the VM.

¡     I/O Priority: Specify the priority for the VM to read and write the disk. Options include Low, Medium (the default), and High. Set the priority to High.

·     Memory configuration: Specify the memory capacity of the VM. This setting is the memory size of the VM OS. The maximum memory size available depends on the physical memory size. You can avoid memory overcommitment through reserving the memory and limiting the VM memory size.

¡     Reserve memory: Enter the memory to be reserved for the VM as a percentage of the total available memory of the host. The host allocates specific memory to a VM based on the actual memory usage of the VM. You can reserve memory for a VM in case the VM needs more memory after the host memory is exhausted. Set the percentage to 100%.

¡     Memory resource priority: Specify the priority for a VM to request memory resources. Options include High, Medium, and Low (the default). Set the priority to High.

·     Network configuration: Select the vSwitch to which the VM NIC will connect, and set the VM NIC type, which is high-speed NIC by default.

When you modify the VM network on CAS, follow these restrictions and guidelines:

¡     To ensure that the cluster VIPs are reachable, do not allocate IP addresses through IP-MAC bindings.

¡     When you manually configure the IP addresses, the contents in the /etc/hosts file might be lost. After modifying a VM, identify whether the following contents are lost in the /etc/hosts file on the node. If yes, manually add the lost contents.

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

·     Disk configuration: Specify the storage volume and its storage pool for the VM disk. Specify the disk type for the VM. Options include Block Device, New File (the default), and Existing File. By default, a new empty storage file is created as the disk of the VM. As a best practice, configure two disks, one as the system disk and the other disk for separately mounting the etcd partition. If you cannot do this, use SSDs or use 7200 RPM (or higher) HDDs in conjunction with a 1G RAID controller.

¡     Provision: Select a storage volume provision mode. Options include Thin (the default), Lazy Zeroed, and Eager Zeroed. This field is required when the disk type is New File. Set this field to Eager Zeroed to ensure that the VM exclusively uses the allocated resources.

When you configure the disk parameters, follow these restrictions and guidelines:

-     Select a logical disk as the system disk, and make sure the logical disk size meets the system disk requirements.

-     If you configure two disks, you must manually configure the disk where the mount point resides subsequently. Mount the /var/lib/etcd partition to one disk (50 GB or higher), and mount the other partitions to the other disk.

-     When you use CAS for deployment, if you mount the system disk to an IDE disk, do not mount the etcd partition to a high-speed disk.

-     To deploy SeerAnalyzer, prepare a separate data disk, and plan the disk partitions as described in H3C SeerAnalyzer Installation and Deployment Guide.

·     CD drive configuration: Set the CD drive or image used by the VM and the CD drive connection mode. By default, an image file is used.

·     Add hardware resources: Add hardware resources to the VM, including NIC, disk, CD drive, floppy disk, GPU, USB device, and network USB device.

In H3C CAS E0709 and later, you can configure overcommitment for VMs. This section uses H3C CAS E0709 as an example.

1.     Navigate to the System > Parameters page.

2.     Click the System Parameters tab to enter the system parameter configuration page.

3.     Set the following basic system parameters:

¡     CPU overcommitment: Select whether to enable CPU overcommitment. Options include Enabled and Disabled. The default is Enabled.

-     If you select Enabled, the number of vCPUs bound to the physical CPUs of a NUMA node can exceed the number of the physical CPUs.

-     If you select Disabled, the number of vCPUs bound to the physical CPUs of a NUMA node cannot exceed the number of the physical CPUs.

¡     Shared storage overcommitment limit: Specify whether the host limits shared storage overcommitment. Options include Enabled and Disabled. The default is Enabled.

-     If you select Enabled, you can set the shared storage overcommitment ratio. To forbid overcommitment for critical services, select Enabled and set the shared storage overcommitment ratio to 0.

-     If you select Disabled, shared storage overcommitment is not limited. In this case, monitor the shared storage pool usage. When the usage is too high, promptly expand the storage pool or delete unnecessary files to avoid read/write interruption caused by insufficient space.

4.     After completing the settings, click Save.

Loading an ISO image file

You can load an ISO image file on a physical host or VM.

Loading a file on a physical host

You can use the remote console of the server to load the ISO image file through the virtual optical drive.

Configure the server to boot from the optical drive and then restart the server.

Loading a file on a VM

Use an H3C CAS VM as an example. Upload the ISO image file to the storage pool of the host in the virtualization management platform. As a best practice, upload it to the storage pool named isopool of the local file directory type.

When creating and configuring a VM on the virtualization management platform, you can mount an ISO image file through the optical drive.

When the VM is started, it automatically loads the ISO image file.

Installing the H3Linux operating system and Matrix

CAUTION

CAUTION:

You must reserve an empty disk, free disk space, or a partition of a minimum of 200 GB on each server node for the GlusterFS application. For how to prepare a disk partition for GlusterFS, see "How can I prepare a disk partition for GlusterFS on a node?" To avoid installation failure, do not format the disk. If the disk has been formatted, use the wipefs -a /dev/disk_name command to wipe the disk (see the example after this caution).
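The following is a minimal check, assuming the disk reserved for GlusterFS is /dev/sdb (an example device name only):

[root@node1 ~]# lsblk -f /dev/sdb        # the FSTYPE column must be empty for an unformatted disk

[root@node1 ~]# wipefs -a /dev/sdb       # wipe the disk only if it has been formatted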

 

IMPORTANT

IMPORTANT:

Installing the operating system on a server that already has an operating system installed replaces the existing operating system. To avoid data loss, back up data before you install the operating system.

 

This section uses a server without an operating system as an example to describe H3Linux operating system installation. Matrix will be installed automatically during installation of the H3Linux operating system.

Installing the H3Linux operating system

1.     Use the virtual optical drive to load the installation package (ISO file) from the server remote console.

2.     Configure the server to boot from the optical drive and then restart the server.

3.     After the ISO file is loaded, select a language (English(United States) in this example), and then click Continue, as shown in Figure 1.

Figure 1 Selecting a language

 

4.     On the INSTALLATION SUMMARY page, click DATE & TIME in the LOCALIZATION area.

Figure 2 Installation summary page

5.     Set the date and time, and then click Done.

Figure 3 Setting the date and time

 

6.     Click KEYBOARD in the LOCALIZATION area and select the English (US) keyboard layout.

Figure 4 Selecting the keyboard layout

 

7.     Click SOFTWARE SELECTION in the SOFTWARE area to enter the page for selecting software, as shown in Figure 2. Select the Virtualization Host base environment.

8.     Click LICENSE SERVER in the SOFTWARE area to enter the license server page, as shown in Figure 5. Select whether to install the license server as needed.

Figure 5 Adding a license server

 

9.     Select INSTALLATION DESTINATION in the SYSTEM area.

Figure 6 INSTALLATION SUMMARY page

 

10.     Select two disks from the Local Standard Disks area and then select I will configure partitioning in the Other Storage Options area. Then click Done.

Figure 7 Installation destination page

 

IMPORTANT

IMPORTANT:

As from release PLAT 2.0 (E0609), the system automatically carries out the Unified Platform disk partitioning scheme if the disk space meets the minimum requirements of Unified Platform. You can skip step 11 and continue the configuration from step 12. For disk partitioning in a specific scenario, see the deployment guide for that scenario and edit the partitioning scheme as required at step 12.

 

11.     (Optional.) Select the Standard Partition scheme from the drop-down menu for the new mount points.

12.     The system creates disk partitions automatically, as shown in Figure 8. Table 6 describes the detailed information about the partitions. You can edit the partition settings as required.

a.     To create a mount point, click the add button. In the dialog box that opens, select a partition from the Mount Point list and set a capacity for it. Then click Add mount point.

b.     To change the destination disk for a mount point, select the mount point and then click Modify….

 

IMPORTANT

IMPORTANT:

·     The H3Linux automatic disk partitioning scheme uses the first logical drive as the system disk. Make sure the size of that logical drive meets the system disk requirements.

·     As from PLAT 2.0 (E0706), you can install the etcd service together with other services on a disk. As a best practice, install the etcd service on a disk mapped to a different physical drive than the disks for installing the system and other components. If you cannot do this, use SSDs or 7200 RPM (or higher) HDDs in conjunction with a 1G RAID controller.

·     The H3Linux operating system can be deployed on VMs of VMware ESXi 6.7.0, H3C CAS E0706, or higher versions. To deploy the H3Linux operating system on a CAS VM, mount the system disk on an IDE disk, and do not mount the etcd partition on a high-speed disk.

·     To deploy SeerAnalyzer, prepare a separate data disk and partition the disk according to H3C SeerAnalyzer Installation and Deployment Guide.

 

Figure 8 Disk partition information

 

Table 6 Automatically created disk partitions

Mount point

Capacity

Applicable mode

Remarks

/var/lib/docker

400 GiB

BIOS mode/UEFI mode

Capacity expandable.

/boot

1024 MiB

BIOS mode/UEFI mode

N/A

swap

1024 MiB

BIOS mode/UEFI mode

N/A

/var/lib/ssdata

450 GiB

BIOS mode/UEFI mode

Capacity expandable.

/

400 GiB

BIOS mode/UEFI mode

Capacity expandable.

As a best practice, do not save service data in the / directory.

/boot/efi

200 MiB

UEFI mode

Required in UEFI mode.

/var/lib/etcd

50 GiB

BIOS mode/UEFI mode

As a best practice, mount it on a separate disk.

Reserved disk space

N/A

N/A

Used for GlusterFS. 200 GB of the reserved disk space is used for Unified Platform. If other components use this partition, increase the partition capacity as required.

The total system disk capacity is 1.7 TB + 50 GB. The mount points above use 1.23 TB + 50 GB, and the remaining space is automatically reserved for GlusterFS.

 

To partition a disk, for example, a 2.4 TB system disk in the DC scenario, you can use the partitioning solution as described in Table 7.

Table 7 Partitioning solution for a 2.4 TB system disk in the DC scenario

Mount point

Minimum capacity

Applicable mode

Remarks

/var/lib/docker

500 GiB

BIOS mode/UEFI mode

Capacity expandable.

/boot

1024 MiB

BIOS mode/UEFI mode

N/A

swap

1024 MiB

BIOS mode/UEFI mode

N/A

/var/lib/ssdata

450 GiB

BIOS mode/UEFI mode

Capacity expandable.

/

1000 GiB

BIOS mode/UEFI mode

Capacity expandable.

As a best practice, do not save service data in the / directory.

/boot/efi

200 MiB

UEFI mode

N/A

/var/lib/etcd

48 GiB

BIOS mode/UEFI mode

As a best practice, mount it on a separate disk.

Reserved disk space

400 GiB

N/A

Used for GlusterFS. For how to prepare a disk partition for GlusterFS, see "How can I prepare a disk partition for GlusterFS on a node?."

The total system disk capacity is 2.3 TB + 50 GB. The mount points above use 1.91 TB + 50 GB, and the remaining 400 GB is reserved for GlusterFS.

 

 

NOTE:

For disk partitioning in a specific scenario, see the deployment guide or installation guide for that scenario.

 

 

NOTE:

Follow these guidelines to set the capacity for the partitions:

·     /var/lib/docker/—The capacity depends on the Docker operation conditions and the specific application scenario.

·     /var/lib/ssdata/—Used by PXC, Kafka, and ZooKeeper. In theory, only Unified Platform uses this partition. If other components use this partition, increase the partition capacity as required.

·     /—Used by Matrix, including the images of components such as K8s and Harbor. The capacity of the partition depends on the size of uploaded component images. You can increase the partition capacity as required.

·     GlusterFS—200 GB of this partition is used for Unified Platform. If other components use this partition, increase the partition capacity as required.

 

13.     Click Done.

¡     If a message as shown in Figure 9 is displayed, create a BIOS Boot partition of 1 MiB.

¡     If no such message is displayed, go to the next step.

Figure 9 Message prompting to create a BIOS Boot partition

 

14.     Click Accept Changes.

Figure 10 Summary of changes page

 

15.     On the INSTALLATION SUMMARY page, click LOGIN ACCOUNT. Select the login account for installing Matrix and creating the cluster (select the Choose ROOT as administrator option in this example) and then click Done, as shown in Figure 11.

To deploy a Matrix cluster, you must select the same user account for all nodes in the cluster. If you select the admin account, the system creates the root account simultaneously by default, but disables the SSH permission of the root account. If you select the root account, you have all permissions and the admin account will not be created.

 

IMPORTANT

IMPORTANT:

Before selecting the admin login account, make sure all applications in the deployment scenario support installation by using the admin account. When you use the admin account, add sudo before every command. If a command executes installation or uninstallation scripts, add sudo /bin/bash before the command.
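The following lines briefly illustrate this rule for the admin account. The commands themselves are only examples taken from elsewhere in this guide.

[admin@node1 ~]$ sudo systemctl status matrix      # ordinary command: prefix it with sudo

[admin@node1 ~]$ sudo /bin/bash install.sh         # installation or uninstallation script: prefix it with sudo /bin/bash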

 

Figure 11 Selecting the login account

 

16.     In the SYSTEM area, click NETWORK & HOST NAME. On the NETWORK & HOST NAME page, perform the following tasks:

a.     Enter a new host name in the Host name field and then click Apply.

 

IMPORTANT

IMPORTANT:

·     To avoid cluster creation failure, configure different host names for the nodes in a cluster. A host name can contain only lower-case letters, digits, hyphens (-), and dots (.) but cannot start or end with a hyphen (-) or dot (.).

·     To modify the host name of a node before cluster deployment, execute the hostnamectl set-hostname hostname command in the CLI of the node's operating system. hostname represents the new host name. The new host name takes effect after the node is restarted. A node's host name cannot be modified after cluster deployment.
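For example, to rename a node to node1 (an example host name only) before cluster deployment and verify the change:

[root@localhost ~]# hostnamectl set-hostname node1    # set the new host name

[root@localhost ~]# reboot                            # the new host name takes effect after the restart

[root@node1 ~]# hostname                              # verify the host name after the node restarts

node1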

 

Figure 12 NETWORK & HOST NAME page

 

b.     (Optional.) Configure NIC bonding. NIC bonding allows you to bind multiple NICs to form a logical NIC for NIC redundancy, bandwidth expansion, and load balancing.

To configure NIC bonding, configure the bond at this step, or add configuration files on the servers after the operating system is installed. For the configuration procedure, see "What is and how can I configure NIC bonding?"

 

IMPORTANT

IMPORTANT:

If you are to configure NIC bonding, finish the NIC bonding configuration before creating a cluster.

 

c.     Select a NIC and then click Configure to enter the network configuration page.

d.     Configure the network settings as follows:

# Click the General tab and then select Automatically connect to this network when it is available (A) and leave the default selection of All users may connect to this network.

Figure 13 General tab

 

e.     Configure IPv4 or IPv6 settings. Matrix supports IPv4 and IPv6 dual-stack.

-     To configure an IPv4 address, click the IPv4 Settings tab. Select the Manual method from the Method drop-down list, click Add and configure an IPv4 address (master node IP) in the Addresses area, and then click Save. Only an IPv4 address is configured in this deployment.

-     To configure an IPv6 address, perform the following steps:

# Click the IPv4 Settings tab and select Disable from the Method drop-down list.

# Click the IPv6 Settings tab.

# Select the Manual method from the Method drop-down list, click Add and configure an IPv6 address (master node IP) in the Addresses area, and then click Save.

-     In a dual-stack environment, configure both IPv4 and IPv6 addresses.

 

CAUTION

CAUTION:

·     You must specify a gateway when configuring an IPv4 or IPv6 address.

·     Before configuring an IPv6 address in a single-stack environment, you must disable the IPv4 address that has been configured.

·     To deploy a dual-stack cluster, you must specify both IPv4 and IPv6 addresses.

·     To avoid environment exceptions, do not use the ifconfig command to enable or disable a NIC after the operating system is installed. As a best practice, use the ifup and ifdown commands (see the example after this caution).

·     Matrix must have an exclusive use of a NIC. You are not allowed to configure a subinterface or sub-address on the NIC.

·     The IP address used for cluster creation must not be on the same network segment as the IP addresses of other NICs on the Matrix node.
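For example, to shut down and bring up a NIC without using ifconfig (ens191 is only an example NIC name):

[root@node1 ~]# ifdown ens191      # shut down the NIC

[root@node1 ~]# ifup ens191        # bring the NIC back up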

 

Figure 14 Configuring an IPv4 address for the server

 

17.     On the NETWORK & HOST NAME page, verify that the IP address configuration and the NIC enabling status are correct. Then, click Done to return to the INSTALLATION SUMMARY page.

18.     On the command prompt window of your PC, execute the ping ip_address command (where the ip_address argument is the IPv4 address configured on the IPv4 Settings tab), and identify whether the specified IP address is reachable.

¡     If the IP address can be successfully pinged, proceed to the next step.

¡     If the IP address cannot be pinged, return to the previous tab and verify that the mask and gateway are configured correctly.

19.     Click Begin Installation to start the installation. During the installation process, you will be prompted to configure the password for the login account.

¡     If the admin account has been selected, set the password for both the admin and root accounts.

¡     If the root account has been selected, set the password for the root account.

Figure 15 User settings area

 

20.     You can choose to modify the default password for Matrix and Unified Platform. The modification takes effect for both at the same time.

Figure 16 UNIFIED PLATFORM PASSWORD

 

After the installation is complete, the system reboots to finish the installation of the operating system. If you set the passwords after the installation, click Finish configuration for the system to restart.

Figure 17 Installation completed

 

21.     Log in to the operating system and then execute the systemctl status matrix command to verify whether Matrix is installed successfully. If active (running) is displayed in the Active field, the installation succeeds.

Figure 18 Verifying the Matrix installation
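The following is a sketch of a successful check, with the output abridged to the field that matters. The exact output varies by Matrix version.

[root@node1 ~]# systemctl status matrix

...

   Active: active (running) since ...

...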

 

Installing the software dependencies

The H3Linux image file contains the H3Linux operating system and Matrix software packages. After the H3Linux operating system is installed, the dependencies and Matrix will be installed automatically. You are not required to install the software dependencies and Matrix manually.

 


Scenario-based configuration dependency

After installing the operating system, perform scenario-specific configuration as required by the solution to be deployed. For more information, see the solution deployment guide.

 


Installing Matrix (required on non-H3Linux)

Preparing for installation

Before installing Matrix, make sure the installation environment meets the requirements listed in Table 8.

Table 8 Verifying the installation environment

Item

Requirements

Network port

Make sure each Matrix node has a unique network port. Do not configure subinterfaces or secondary IP addresses on the network port.

IP address

The IP addresses of network ports used by other Matrix nodes and the IP address of the network port used by the current Matrix node cannot be on the same subnet.

The source IP address for the current Matrix node to communicate with other nodes in the Matrix cluster must be the IP address of the Matrix cluster. You can execute the ip route get targetIP command to obtain the source IP address.

[root@uc1 ~]# ip route get 10.99.223.190

10.99.223.190 dev ens3 src 10.99.223.154 uid 0

Time zone

To avoid node adding failure on the GUI, make sure the system time zone of all Matrix nodes is the same. You can execute the timedatectl command to view the system time zone of each Matrix node.

Host name

To avoid cluster creation failure, make sure the host name meets the following rules:

·     The host name of each node must be unique.

·     The host name contains a maximum of 63 characters and supports only lowercase letters, digits, hyphens, and decimal points. It cannot start with 0, 0x, hyphen, or decimal point, and cannot end with hyphen or decimal point. It cannot be all digits.

 

Uploading the Matrix installation package

IMPORTANT

IMPORTANT:

To avoid file damage, use binary mode if you use FTP or TFTP for package upload.

 

Copy or use a file transfer protocol to upload the installation package to the target directory on the server.

Installing Matrix

You can use a root user account (recommended) or a non-root user account to install Matrix.

Installing Matrix as a root user

1.     Access the storage directory of the Matrix installation package.

2.     Execute the unzip Matrix-version-platform.zip command. Matrix-version-platform.zip represents the installation package name, the version argument represents the version number, and the platform argument represents the CPU architecture type, x86_64 in this example.

 [root@matrix01 ~]# unzip Matrix-V900R001B07D006-x86_64.zip

Archive:  Matrix-V900R001B07D006-x86_64.zip

   creating: Matrix-V900R001B07D006-x86_64/

   extracting: Matrix-V900R001B07D006-x86_64/matrix.tar.xz

   inflating: Matrix-V900R001B07D006-x86_64/install.sh

 

   inflating: Matrix-V900R001B07D006-x86_64/uninstall.sh

[root@matrix01 ~]# cd Matrix-V900R001B07D006-x86_64

[root@matrix01 Matrix-V900R001B07D006-x86_64]# ./install.sh

Installing…

[install] -----------------------------------

[install] Matrix-V900R001B07D006-x86_64

[install] Red Hat Enterprise Linux release 8.4 (Ootpa)

[install] Linux 4.18.0-305.el8.x86_64

[install] -----------------------------------

[install] WARNING: To avoid unknow error, do not interrupt this installation procedure.

[install] Checking environment...

[install] Done.

[install] Checking current user permissions...

[install] Done.

[install] Decompressing matrix package...

[install] Done.

[install] Installing dependent software...

[install]  Installed: jq-1.6

[install] Done.

[install] Starting matrix service...

[install] Done.

Complete!

 

 

NOTE:

The installation procedure is the same for worker nodes and master nodes. You can specify the node role when you set up a cluster from the Web interface.

 

3.     Use the systemctl status matrix command to identify whether the Matrix service is installed correctly. The Active field displays active (running) if the platform is installed correctly.

4.     Repeat the steps above on the other nodes.

Installing Matrix as a non-root user

To install Matrix as a non-root user, first modify related configurations.

Editing configuration files

1.     As a root user, view the /etc/passwd file. Identify whether the configured non-root user name (admin in this example, as shown in Figure 19) is the same as that in the configuration file. If not, modify the corresponding username in the configuration file.

[root@matrix01 ~]# vim /etc/passwd

Figure 19 Confirming parameters in the /etc/passwd file

 

2.     As a root user, edit the /etc/sudoers file.

[root@matrix01 ~]# vim /etc/sudoers

Figure 20 Editing the /etc/sudoers file

 

3.     As a root user, edit the /etc/pam.d/login file.

[root@matrix01 ~]# vim /etc/pam.d/login

Figure 21 Editing the /etc/pam.d/login file

 

4.     As a root user, edit the /etc/ssh/sshd_config file as shown in the left part of Figure 22.

[root@matrix01 ~]# vim /etc/ssh/sshd_config

Figure 22 Editing the /etc/ssh/sshd_config file

 

Configuring system settings

1.     View the firewall status.

systemctl status firewalld

2.     Disable the firewall if the firewall is enabled.

systemctl stop firewalld && systemctl disable firewalld

Installing Matrix

1.     Access the storage directory of the Matrix installation package.

2.     Execute the unzip Matrix-version-platform.zip command as a non-root user. Matrix-version-platform.zip represents the installation package name, the version argument represents the version number, and the platform argument represents the CPU architecture type, x86_64 in this example.

# unzip Matrix-V900R001B07D006-x86_64.zip

3.     Install Matrix as a non-root user.

# cd Matrix-V900R001B07D006-x86_64

# sudo bash install.sh

 

IMPORTANT

IMPORTANT:

The installation procedure is the same for worker nodes and master nodes.

 

4.     Use the systemctl status matrix command to identify whether the Matrix service is installed correctly. The Active field displays active (running) if the platform is installed correctly.

5.     In a non-root environment, you must manually create the log directory as a root user and change its owner before deploying Unified Platform.

[root@master01 ~]# mkdir -p /var/log/ucenter && chown admin:wheel /var/log/ucenter

[root@master01 ~]# ll -d /var/log/ucenter

6.     Repeat the steps above on the other nodes.

 


(Optional.) Configuring HugePages

To deploy the SeerAnalyzer component in the cloud DC scenario, you must not enable HugePages. To deploy the vBGP component in the cloud DC scenario, you must enable HugePages on each server.

To enable or disable HugePages, you must restart the server for the configuration to take effect. Determine the installation sequence of the SeerAnalyzer and vBGP components as required. HugePages is disabled on a server by default. For more information, see H3C SeerEngine-DC Installation Guide (Unified Platform).
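To check the current HugePages state on a server, you can inspect /proc/meminfo. A HugePages_Total value of 0 indicates that HugePages is not in use, which is the default state.

[root@node1 ~]# grep -i hugepages /proc/meminfo

HugePages_Total:       0

HugePages_Free:        0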

 


(Optional.) Modifying the SSH service port number

A Matrix cluster installs, upgrades, and repairs nodes and performs application deployment and monitoring through SSH connections. On each node, the SSH server uses port 22 by default to listen for client connection requests. After a TCP connection is established between a node and the SSH server, data can be exchanged between them.

You can modify the SSH service port number to improve the SSH connection security.

 

IMPORTANT

IMPORTANT:

·     Make sure the SSH service port number is the same on all Matrix nodes in the same cluster.

·     To ensure that the SSH service can start successfully, do not configure a well-known port number (in the range of 1 to 1024) as the SSH service port number.

 

Modifying the SSH service port number for the server of each node

1.     Log in to the back end of the server. Execute the netstat -anp | grep after_port-number command to identify whether a port number is in use. The after_port-number argument is the target SSH service port number to be set.

If the port number is not in use, no information is returned. If the port number is in use, information about the process using it is returned. For example:

¡     Port number 12345 is not in use, so you can change the SSH service port number to it.

[root@node-worker ~]# netstat -anp | grep 12345

¡     Port number 1234 is in use, so you cannot change the SSH service port number to it.

[root@node-worker ~]# netstat -anp | grep 1234

tcp        0      0 0.0.0.0:1234            0.0.0.0:*               LISTEN      26211/sshd

tcp6       0      0 :::1234                 :::*                    LISTEN      26211/sshd

2.     Use the vim /etc/ssh/sshd_config command to open the configuration file of the sshd service. Change the port number in the configuration file to the target port number (for example, 12345) and remove the comment sign before the Port setting, as shown in Figure 23 and Figure 24 and in the sketch that follows them.

Figure 23 The port number before modification is 22

 

Figure 24 The port number after modification
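In essence, the change shown in Figure 23 and Figure 24 is the following edit to /etc/ssh/sshd_config, where 12345 is the example port number used in this section:

# Before the modification:

#Port 22

# After the modification (comment sign removed, target port set):

Port 12345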

 

3.     After modifying the port number, restart the sshd service.

[root@node-worker ~]# systemctl restart sshd

4.     Identify whether the port number is successfully modified. The port number is successfully modified if the following information is returned.

[root@node-worker ~]# netstat -anp | grep -w 12345

tcp        0      0 0.0.0.0:12345            0.0.0.0:*               LISTEN      26212/sshd

tcp6       0      0 :::12345                 :::*                    LISTEN      26212/sshd

Modifying the SSH service port number for each Matrix node

1.     Use the vim /opt/matrix/config/navigator_config.json command to open the navigator_config file. Identify whether the sshPort field exists in the file.

¡     If yes, modify the value for the field to the target value (12345 in this example).

¡     If not, manually add the field and specify a value for it.

{

"productName": "uc",

"pageList": ["SYS_CONFIG", "DEPLOY", "APP_DEPLOY"],

"defaultPackages": ["common_PLAT_GlusterFS_2.0_E0707_x86.zip", "general_PLAT_portal_2.0_E0707_x86.zip", "general_PLAT_kernel_2.0_E0707_x86.zip"],

"url": "http://${vip}:30000/central/index.html#/ucenter-deploy",

"theme":"darkblue",

"matrixLeaderLeaseDuration": 30,

"matrixLeaderRetryPeriod": 2,

"sshPort": 12345

}

2.     After modification, restart the Matrix service.

[root@node-worker ~]# systemctl restart matrix

3.     Identify whether the port number is successfully modified. If yes, the last message in the log is as follows:

[root@node-worker ~]# cat /var/log/matrix-diag/Matrix/Matrix/matrix.log | grep "ssh port"

2022-03-24T03:46:22,695 | INFO  | FelixStartLevel  | CommonUtil.start:232 | ssh port = 12345.

 

 


Installing Unified Platform

IMPORTANT

IMPORTANT:

In scenarios where internal NTP servers are used, make sure the system time of all nodes is consistent with the current time before deploying the cluster. In scenarios where external NTP servers are used, you do not need to verify the system time of the nodes. If the internal or external NTP server fails, you cannot deploy the cluster. To view the system time, execute the date command. To modify the system time, use the date -s yyyy-mm-dd or date -s hh:mm:ss command.
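For example, to verify the system time on a node and correct it if necessary (the date and time values are examples only):

[root@node1 ~]# date                      # view the current system time

[root@node1 ~]# date -s "2024-01-01"      # set the date if it is incorrect

[root@node1 ~]# date -s "10:00:00"        # set the time if it is incorrect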

 

Creating a Matrix cluster

Logging in to Matrix

Restrictions and guidelines

On Matrix, you can perform the following operations:

·     Upload or delete the Unified Platform installation package.

·     Deploy, upgrade, expand, or uninstall Unified Platform.

·     Upgrade or rebuild cluster nodes.

·     Add or delete worker nodes.

Do not perform the following operations simultaneously on Unified Platform when you perform operations on Matrix:

·     Upload or delete the component installation packages.

·     Deploy, upgrade, or expand the components.

·     Add, edit, or delete the network.

Procedure

1.     Enter the Matrix login address in your browser and then press Enter.

¡     If the node that hosts Matrix uses an IPv4 address, the login address is in the https://ip_address:8443/matrix/ui format, for example, https://172.16.101.200:8443/matrix/ui.

¡     If the node that hosts Matrix uses an IPv6 address, the login address is in the https://[ip_address]:8443/matrix/ui format, for example, https://[2000::100:611]:8443/matrix/ui.

ip_address represents the IP address of the node that hosts Matrix. This configuration uses an IPv4 address. 8443 is the default port number.

 

 

NOTE:

In cluster deployment mode, ip_address can be the IP address of any node in the cluster before the cluster is deployed.

 

Figure 25 Matrix login page

 

2.     Enter the username and password, and then click Login. The cluster deployment page is displayed.

The default username is admin and the default password is Pwd@12345. If you have set the password when installing the operating system, enter the set password.

To deploy a dual-stack cluster, enable the dual-stack feature.

Figure 26 Single-stack cluster deployment page

 

Figure 27 Dual-stack cluster deployment page

 

Configuring cluster parameters

CAUTION

CAUTION:

If multiple NICs configured with IP addresses are in up state on a node, use the ifdown command to shut down all the NICs except for the NIC to be used by the cluster. After cluster deployment, configure security policies from Matrix to ensure correct cluster operation, and then you can bring up NICs as needed. For information about adding security policies, see "How can I configure a security policy when a node has multiple NICs in up state?."

 

Before deploying cluster nodes, first configure cluster parameters. On the Configure cluster parameters page, configure cluster parameters as described in Table 9 or Table 10 and then click Apply.

Table 9 Configuring single-stack cluster parameters

Parameter

Description

Cluster internal virtual IP

IP address for communication between the nodes in the cluster. This address must be on the same subnet as the master nodes. It cannot be modified after cluster deployment. Please be cautious when you configure this parameter.

Northbound service VIP

IP address for northbound interface services. This address must be on the same subnet as the master nodes.

Service IP pool

Address pool for IP assignment to services in the cluster. It cannot overlap with other subnets in the deployment environment. The default value is 10.96.0.0/16. Typically, the default value is used.

Container IP pool

Address pool for IP assignment to containers. It cannot overlap with other subnets in the deployment environment. The default value is 177.177.0.0/16. Typically, the default value is used.

Cluster network mode

Network mode of the cluster. Only Single Subnet mode is supported. In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for communications.

NTP server

Used for time synchronization between the nodes in the cluster. Options include Internal server and External server. If you select External server, you must specify the IP address of the server, and make sure the IP address does not conflict with the IP address of any node in the cluster.

An internal NTP server is used in this configuration. After cluster deployment is started, the system synchronizes the time first. After the cluster is deployed, the three master nodes will synchronize the time regularly to ensure that the system time of all nodes in the cluster is consistent.

External DNS server

Used for resolving domain names outside the K8s cluster. Specify it in the IP:port format. In this configuration, this parameter is not configured.

The DNS server in the cluster cannot resolve domain names outside the cluster. The platform randomly forwards external domain names to one of the configured external DNS servers for resolution.

A maximum of 10 external DNS servers can be configured. All the external DNS servers must have the same DNS resolution capability, and each can perform external domain name resolution independently. These DNS servers will be used randomly without precedence and sequence.

Make sure all DNS servers can access the root domain. To verify the accessibility, use the nslookup -port={port} -q=ns . {ip} command.

 

Table 10 Configuring dual-stack cluster parameters

Parameter

Description

Cluster internal virtual IP

IP address for communication between the nodes in the cluster. This address must be on the same subnet as the master nodes. It cannot be modified after cluster deployment. Please be cautious when you configure this parameter.

Northbound service VIP1 and VIP2

IP addresses for northbound interface services. These addresses must be on the same subnet as the master nodes. VIP1 is an IPv4 address, and VIP2 is an IPv6 address. You must specify a minimum of one VIP. You can configure an IPv4 address, an IPv6 address, or both, but you cannot configure two addresses of the same IP version.

Service IP pool

This parameter takes effect only in a dual-stack environment.

Address pool for assigning IPv4 addresses and IPv6 addresses to services in the cluster. The default IPv4 address is 10.96.0.0/16, and the default IPv6 address is fd00:10:96::/112. Typically, the default values are used. You cannot change the value after deployment.

To avoid cluster errors, make sure the subnet does not overlap with other subnets in the deployment.

Container IP pool

This parameter takes effect only in a dual-stack environment.

Address pool for assigning IPv4 addresses and IPv6 addresses to containers in the cluster. The default IPv4 address is 177.177.0.0/16, and the default IPv6 address is fd00:177:177::/112. Typically, the default values are used. You cannot change the value after deployment.

To avoid cluster errors, make sure the subnet does not overlap with other subnets in the deployment.

Cluster network mode

Network mode of the cluster. Only Single Subnet mode is supported. In this mode, all nodes and virtual IPs in the cluster must be on the same subnet for communications.

NTP server

Used for time synchronization between the nodes in the cluster. Options include Internal server and External server. If you select External server, you must specify the IP address of the server, and make sure the IP address does not conflict with the IP address of any node in the cluster.

An internal NTP server is used in this configuration. After cluster deployment is started, the system synchronizes the time first. After the cluster is deployed, the three master nodes will synchronize the time regularly to ensure that the system time of all nodes in the cluster is consistent.

External DNS server

Used for resolving domain names outside the K8s cluster. Specify it in the IP:Port format. In this configuration, leave this parameter unconfigured.

The DNS server in the cluster cannot resolve domain names outside the cluster. The platform forwards requests for external domain names to a randomly selected external DNS server for resolution.

A maximum of 10 external DNS servers can be configured. All the external DNS servers must have the same DNS resolution capability, and each must be able to perform external domain name resolution independently. The DNS servers are used in random order, without precedence or sequence.

Make sure all DNS servers can access the root domain. To verify the accessibility, use the nslookup -port={port} -q=ns . {ip} command.
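
For example, to verify that a DNS server can return the NS records of the root domain (the server address 192.168.10.53 and port 53 are illustrative):

[root@node1 ~]# nslookup -port=53 -q=ns . 192.168.10.53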

 

IMPORTANT:

If the existing NTP server cannot reach the northbound addresses, you can change the cluster parameters to add NTP servers in the NIC network configuration after cluster deployment.

 

Creating a cluster

For standalone deployment, add one master node on Matrix. For cluster deployment, add three master nodes on Matrix.

To create a cluster:

1.     After configuring the cluster parameters, click Next.

Figure 28 Cluster deployment page

 

2.     In the Master Node area, click the plus icon.

3.     Configure node parameters as shown in Figure 29 and then click Apply.

Figure 29 Configuring single-stack node parameters

 

Figure 30 Configuring dual-stack node parameters

 

Table 11 Node parameter description

Item

Description

Type

Displays the node type. Options include Master and Worker. This field cannot be modified.

IP address

Specify the IP address of the node.

Username

Specify the user account to access the operating system. Use a root user account or admin user account based on your configuration during system installation. All nodes in a cluster must use the same user account.

Password

Specify the password to access the operating system.

 

4.     Add the other two master nodes in the same way as the first master node.

For standalone deployment, skip this step.

5.     Click Start deployment.

When the deployment progress of each node reaches 100%, the deployment finishes. After the cluster is deployed, a star icon  is displayed at the left corner of the primary master node, as shown in Figure 31.

Figure 31 Cluster deployment completed

 

After the cluster is deployed, you can skip the network configuration and application deployment procedures and perform them later as needed.

Deploying the applications

Procedure

1.     Enter https://ip_address:8443/matrix/ui in your browser to log in to Matrix. ip_address represents the northbound service virtual IP address.

2.     On the top navigation bar, click GUIDE, and then select Clusters.

3.     Select installation packages and click Next.

First upload and deploy the following required packages. By default, these packages are selected; do not unselect them.

¡     common_PLAT_GlusterFS_2.0_<version>.zip (required)

¡     general_PLAT_portal_2.0_<version>.zip (required)

¡     general_PLAT_kernel_2.0_<version>.zip (required)

Then, deploy other installation packages, which can be bulk uploaded.

4.     On the Configure Shared Storage page, click Next.

GlusterFS does not support shared storage configuration.

 

CAUTION:

To avoid installation failure, do not format the disk that is used for the GlusterFS application. If the disk has been formatted, use the wipefs -a /dev/<disk name> command to wipe the disk. If an error message is displayed when you execute this command, wait for a while and then execute it again.

 

5.     On the Configure Database page, click Next.

GlusterFS does not support database configuration.

6.     On the Configure Parameters page, configure the parameters.

¡     GlusterFS

-     nodename—Specifies the host name of the node server.

-     device—Specifies the name of the disk or partition on which GlusterFS is to be installed.

To install GlusterFS on an empty disk, enter the name of the disk.

To install GlusterFS on an empty partition, enter the name of the partition.

 

IMPORTANT:

Use the lsblk command to view disk partition information and make sure the selected disk or partition is not mounted or used and has a minimum capacity of 200 GB. If no disk or partition meets the conditions, create a new one. For more information, see "How can I prepare a disk partition for GlusterFS on a node?"

 

¡     Portal

-     Unified Platform supports HTTPS. The ServiceProtocol is HTTP by default, and can be modified to HTTPS. Change the port number as needed.

-     To deploy an English environment, set Language to en_US.

¡     Set the Kernel application parameters. You can set the memory and resources used by ES according to service requirements, as shown in Figure 32. You can modify the ES values in an environment where the Kernel application has been installed or when upgrading the Kernel application. For more information, see "How can I manually modify the ES values in the back end?"

Figure 32 Memory and resource used by ElasticSearch

 

7.     Click Deploy.

8.     To deploy other applications, click Deploy on the top navigation bar and then select Applications.

9.     Click the upload icon  to upload the application installation packages. For the installation packages, see Table 3. Select the installation packages as required.

 

 


Logging in to Unified Platform

1.     Enter http://ip_address:30000 in your browser and then press Enter. ip_address represents the northbound service virtual IP address.

Figure 33 Unified Platform login page

 

2.     Enter the username and password, and then click Login.

The default username is admin and the default password is Pwd@12345. If you set the password when installing the operating system, enter that password.

 


Registering the software

After Unified Platform is deployed, you must obtain the license authorization to use it normally. If you have purchased the product, use the license key in the license letter for the later registration process. If you use the product for trial, contact the H3C marketing staff to request the trial license and obtain the license authorization.

For more information about requesting and installing the license, see H3C Software Product Remote Licensing Guide.

Installing licenses on the license server

 

NOTE:

The license server and Unified Platform are one-to-one related. A license server can be used by only one Unified Platform.

 

If you selected Install License Server at H3Linux OS installation, the system automatically deploys a license server on the node and no manual intervention is required. If you did not select the option, deploy license servers manually as needed.

·     Deploy one license server, and log in to the license server through the real IP address of the node.

·     Deploy two license servers. The following methods are available:

¡     Deploy the two license servers in primary/secondary mode to improve reliability as follows:

-     Log in to one of the license servers and configure HA. The IP addresses of the primary and secondary servers are the real IP addresses of the two nodes.

-     Manually set the virtual IP and HA ID. Then, you can access the license servers through the virtual IP.

¡     Do not configure HA. Configure two standalone license servers. Log in to a license server through the real IP address of the corresponding node.

 

 

NOTE:

·     Determine whether to configure HA according to your service requirements.

·     Configure HA (if needed) before installing licenses. Otherwise, the license information on the primary license server will overwrite that on the secondary license server.

 

Obtaining the device information file

1.     Log in to the license server at https://ip_address:port (the default port number is 28443). By default, the username and password are admin and admin@h3c, respectively.

2.     Click Export DID to obtain the device information file of the license server.

Requesting an activation file

Access the H3C license management platform at http://www.h3c.com/en/License and enter the License Activation Requests page. Follow the guide on the page to use the license key and the device information file of the license server to request an activation file.

Installing the activation file

1.     Log in to the license server. From the navigation pane, select License > Installation.

2.     Click Install License File. In the dialog box that opens, upload the locally saved activation file and install the activation file.

3.     After the activation file is successfully installed, the license installation page will display the installed authorization information.

Adding a client

1.     Log in to the license server. From the navigation pane, select Configuration > Clients to enter the client configuration page.

2.     Click Add. On the page that opens, configure the username and password for a client.

Obtaining the license authorization

After installing the license for the product on the license server, you only need to connect to the license server on the license management page to obtain the license. To do that, perform the following tasks:

1.     Log in to Unified Platform.

2.     Click the System tab. From the navigation pane, select License Management > License Information to enter the license information page, as shown in Figure 34.

Figure 34 License information page

 

3.     Configure the following parameters:

¡     IP Address: Specify the IP address of the server hosting the license server.

¡     Port Number: The port number is 5555 by default, which is the same as the port number of the license server authorization service.

¡     Username: Client name configured on the license server.

¡     Password: Password for the client name configured on the license server.

4.     Click Connect to set up a connection to the license server. After the connection is successfully set up, Unified Platform can automatically obtain authorization information from the license server.

 


Managing the components on Unified Platform

IMPORTANT:

·     The components run on Unified Platform. You can deploy, upgrade, and uninstall them only on Unified Platform.

·     You can add, edit, or delete networks only on Unified Platform.

 

Preparing for deployment

Enabling NICs

If the server uses multiple NICs for connecting to the network, enable the NICs before deployment.

The procedure is the same for all NICs. The following procedure enables NIC ens34.

To enable a NIC:

1.     Access the server that hosts Unified Platform.

2.     Access the NIC configuration file.

[root@node1 /]# vi /etc/sysconfig/network-scripts/ifcfg-ens34

3.     Set the BOOTPROTO field to none so that no boot-up protocol is specified, and set the ONBOOT field to yes to activate the NIC at system startup. (A sample configuration is shown after this procedure.)

Figure 35 Editing the configuration file of a NIC

 

4.     Execute the ifdown and ifup commands in sequence to restart the NIC.

[root@node1 /]# ifdown ens34

[root@node1 /]# ifup ens34

5.     Execute the ifconfig command to verify that the NIC is in up state.
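
For reference, the key fields in the ifcfg-ens34 file after step 3 might look like the following. This is a minimal sketch; the NIC name follows the example above, and any other fields already present in the file (such as NAME, UUID, or address settings) should be kept as they are.

# Illustrative values for the fields set in step 3
TYPE=Ethernet
DEVICE=ens34
BOOTPROTO=none
ONBOOT=yes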

Deploying components

1.     Log in to Unified Platform. Click System > Deployment.

The package upload page opens.

Figure 36 Package upload

 

2.     Click Upload to upload the installation package and then click Next.

3.     Select components, and then click Next.

Table 12 Component description

Item

Description

Campus network

Specify the controller for setting up a campus network to implement campus network automation, user access control automation, and policy automation.

End User Intelligent Access

Provides authentication and authorization for the end users to access the network.

Super controller

Specify the super controller for multiple cloud DC networks for hierarchical management of these networks.

Cloud DC

Specify the controller for setting up a cloud DC network to implement DC network automation and dynamically manage virtual networks and network services.

To use the remote disaster recovery feature, select Support RDRS on this page.

WAN

Specify the controller for setting up a WAN to implement service automation and intelligent traffic scheduling for WAN backbone networks.

SD-WAN

Specify the controller for setting up an SD-WAN to implement service automation and intelligent traffic scheduling for branch networks.

Cross-Domain Service Orchestration

Specify the cross-domain orchestrator to associate controllers on multiple sites and achieve overall control of network resources by using the predefined service logic.

SeerEngine-SEC

Select the SeerEngine-SEC package to install for automated deployment and management of security services on the SDN network.

VNF Lifecycle Management

Provides lifecycle management of VNFs.

Intelligent Analysis Engine

Specify the intelligent analysis engine, which collects network data through telemetry technologies, and analyzes and processes the data through big data and AI to implement intelligent assurance and prediction for network services.

Unified O&M

Provides unified CAS authentication, route configuration and forwarding, LDAP authentication and user synchronization, and privilege management.

ITOA (Information Technology Operations Analytics)

ITOA base and ITOA components provide fundamental configuration for all the analytic systems.

Public Service

Specify services shared by multiple scenarios mentioned above. Options include Oasis Platform and vDHCP server.

vDHCP Server is used for automated device deployment.

 

4.     Configure required parameters for the component, and then click Next.

Only SeerAnalyzer, Analyzer-Collector, and Oasis Platform support parameter configuration. For other components, click Next.

5.     Create networks and subnets, and then click Next.

6.     Select the nodes where the Pods are to be deployed, and then click Next.

Only Cloud DC, Intelligent Analysis Engine, and ITOA support this configuration. For other components, click Next.

7.     Bind networks to the components, assign IP addresses to the components, and then click Next.

8.     On the Confirm Parameters tab, verify network information.

9.     Click Deploy.

10.     To view detailed information about a component after deployment, click  to the left of the component on the deployment management page, and then click .

Figure 37 Expanding component information

 

Upgrading a component

CAUTION:

·     The upgrade might cause service interruption. Be cautious when you perform this operation.

·     To avoid data loss when upgrading CMDB, you must first disable the function of automatically deleting entries without synchronization sources for network-related resource types (for example, switches and routers) in the resource synchronization settings. If you install CMDB of version E0706P02 or later, you do not need to do that.

 

Before upgrading a component, save configuration data on the component. For the backup procedure, see "Backing up Unified Platform and its components."

The controller can be upgraded on Unified Platform with the configuration retained.

To upgrade a component:

1.     Log in to Unified Platform. Click System > Deployment.

2.     Click the right chevron button  for the controller to expand controller information, and then click the upgrade icon  .

3.     On the Upgrade tab, upload and select the installation package, and then click Upgrade.

4.     If the upgrade fails, click Roll Back to roll back to the previous version.

Removing a component

1.     Log in to Unified Platform. Click System > Deployment.

2.     Select a component, and then click Remove.

 


Backing up and restoring the configuration

CAUTION:

·     Do not perform any configuration operations while a configuration backup or restoration process is in progress.

·     To ensure configuration consistency, you must use the backup files for Unified Platform and Unified Platform components saved at the same time for restoration. As a best practice, use the backup files saved at the same scheduled time for configuration restoration.

·     To ensure successful restoration, the backup files used for the restoration must contain the same number of nodes as the environment to be restored.

 

To back up and restore the configuration data of Unified Platform and its components, log in to Unified Platform and configure backup and restoration.

·     Backup—The system supports scheduled backup and manual backup. You can back up the file to the server where Unified Platform resides or to a remote server, or save the file locally. The file must be named in the prefix name_component name_component version_date_backup mode.zip format. The backup mode can be M or A, representing manual backup or scheduled backup, respectively. (An example file name appears after this list.)

·     Restore—You can restore the product configuration from a local backup file or from the backup history list.
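
For example, a manual backup might produce a file named as follows (all field values are illustrative):

backup_UnifiedPlatform_E0706_20240101_M.zip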

Backing up Unified Platform and its components

1.     Log in to Unified Platform.

2.     Click Settings in the System area and then click the Backup & Restore tab.

3.     Click Backup Configuration. In the dialog box that opens, configure the backup settings, including the prefix name of the backup file and parameters for local backup, remote backup, and scheduled backup, and then click Apply.

If you enable the scheduled backup option, the system automatically backs up the configurations of Unified Platform and its components to the specified path at the scheduled interval.

4.     Click Back up and then select a component to back up.

Restoring the configuration

IMPORTANT:

If you need to restore the configuration of both Unified Platform and its components, restore Unified Platform configuration first.

 

1.     Log in to Unified Platform.

2.     Click Settings in the System area and then click the Backup & Restore tab.

3.     To restore the configuration from a backup file saved locally:

a.     Click the  icon to select the backup file, and then click Upload.

b.     Click Restore.

4.     To restore the configuration from the Backup History list, determine the backup file, and then click Restore in the Actions column for the file.

 


Cluster failure recovery

Single node failure recovery

When several nodes are deployed correctly to form a cluster and one of these nodes fails, perform this task to recover the cluster from the failure.

Procedure

When a single node in a Matrix cluster fails, recover the failed node by rebuilding the node.

To rebuild a single node:

1.     Log in to the Matrix Web interface of the node, and then click Deploy > Cluster. Click the  button for the node and select Rebuild from the list. Then use one of the following methods to rebuild the node:

¡     Select and upload the same version of the software package as installed on the current node. Then click Apply.

¡     Select the original software package version and then click Apply.

2.     After rebuilding the node, identify whether the node is recovered.

 

CAUTION:

If the hardware of a cluster node server fails, and the node server operates abnormally and cannot be recovered, you must replace the node server with a new one. Before rebuilding a node, you must pre-install Matrix of the same version as the cluster nodes on the new node, and configure the same host name, NIC name, node IP, username, password, and disk partitions on the new node as on the faulty node.

 

 


Scaling out or in Unified Platform and its components

CAUTION:

Before scaling out Unified Platform and its components, back up the configuration and data of Matrix, Unified Platform, and components on Unified Platform. You can use the backup file to restore the configuration and data in case of a scale-out failure.

 

IMPORTANT:

You can scale out or in components only on Unified Platform.

 

Unified Platform can be scaled out in both standalone mode and cluster mode.

·     To scale out Unified Platform from standalone mode to cluster mode, add two master nodes on Matrix to form a three-host cluster with the existing master node. Then scale out Unified Platform and its components sequentially.

·     To scale out Unified Platform in cluster mode, scale out the nodes one by one.

Scaling out Unified Platform in standalone mode

Scaling out Matrix

1.     Install Matrix on two new servers. For the deployment procedure, see "Installing Matrix (required on non-H3Linux)."

2.     Add two master nodes to Matrix.

a.     Log in to Matrix.

b.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

c.     In the Master node area, click the plus icon to add two master nodes.

3.     Click Start deployment.

The scale-out takes some time to complete.

Scaling out Unified Platform

1.     Log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Application from the navigation pane.

3.     Click Scale out Application, select gluster, SYSTEM, and Unified Platform Syslog, and then click Next.

 

 

NOTE:

Before the scale-out, make sure all component versions in SYSTEM support scale-out in your solution.

 

4.     Click Next.

On the Configure Shared Storage and Configure Database pages, you do not need to perform any operations.

5.     On the Configure Params page, enter disk information for the three nodes in the Configuration Item Parameters area for gluster, leave other parameters unchanged, and then click Expand.

The scale-out takes some time to complete.

Scaling out Unified Platform in cluster mode

1.     Deploy Matrix on the new server. For the deployment procedure, see "Installing Matrix (required on non-H3Linux)."

2.     Add a worker node to Matrix.

a.     Log in to Matrix.

b.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

c.     In the Worker node area, click the plus icon  to add a worker node.

To add more worker nodes, repeat the step above. Alternatively, click Bulk Add, and bulk add worker nodes by uploading a worker node template file.

 

3.     Click Start deployment.

The scale-out takes some time to complete.

Scaling in Unified Platform in cluster mode

You can scale in Unified Platform in cluster mode by deleting a worker node in the cluster.

To scale in Unified Platform in cluster mode:

1.     Log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     In the Worker node area, click the  icon for a worker node, and then select Delete from the list.


Upgrading the software

The software cannot be rolled back after it is upgraded. If errors occur during the upgrade process, recover the data and follow the steps to upgrade the software again. Alternatively, contact Technical Support.

 

IMPORTANT:

To avoid data loss when upgrading CMDB, you must disable the function of automatically deleting entries without synchronization sources for network-related resource types (for example, switches and routers) in the resource synchronization settings. If you install CMDB of version E0706P02 or later, you do not need to do that.

 

Prerequisites

Copy the installation package to your local server. Do not use the installation package through remote sharing or other methods.

1.     To avoid data loss caused by upgrade failure, back up the system data before upgrade.

2.     Before upgrading PLAT 2.0, make sure pods related to PLAT 2.0 are running properly.

3.     During the process of upgrading PLAT 2.0, you cannot modify the language information (switch the language between Chinese and English).

Backing up data

Before the upgrade, back up data for Unified Platform and its components. For more information, see "Backing up and restoring the configuration."

Remarks

The PLAT 2.0 image contains the upgrade image for the compatible Matrix version. Before upgrading the software to PLAT 2.0, first upgrade Matrix to the compatible version.

 

CAUTION:

·     In cluster mode, Matrix supports ISSU, which ensures service continuity during the upgrade.

·     In standalone mode, Matrix does not support ISSU.

 

When upgrading a Matrix cluster, follow these restrictions and guidelines:

·     To upgrade Matrix in cluster mode, upgrade the worker nodes (if any), the secondary master nodes, and the primary master node in sequence.

·     During the upgrade process, services on a node to be upgraded are migrated to another node not disabled in the same cluster. To avoid service interruption, upgrade the nodes one by one.

Matrix upgrade supports quick upgrade and full upgrade.

·     Quick upgrade—Upgrades only the Matrix service and some components and does not affect the service container operation. This method takes less time and is simpler and less error-prone than a full upgrade. As a best practice, use this method.

·     Full upgrade—Upgrades the Matrix service and the container platform. The service containers on the upgraded node are removed and rebuilt. Full upgrade supports the following methods:

¡     Upload Image for Upgrade—Upload the new version of the Matrix software package on the cluster deployment page for upgrade. In standalone mode, the Matrix service will be restarted, and the Web interface will be unavailable temporarily. Wait until the page is recovered and log in again.

¡     Upgrade in Back End

-     In standalone mode, log in to the CLI of the node to be upgraded and execute the new version upgrade script for the single node. Then, log in to the Web interface and perform upgrade in the back end.

-     In cluster mode, log in to the CLI of the node to be upgraded, uninstall the old version, and then install the new version. Then, perform upgrade on the cluster deployment page of Matrix.

 

 

NOTE:

·     You must upgrade all nodes of the same cluster in the same way, either quick upgrade or full upgrade.

·     In versions later than PLAT 2.0 (E0611), both quick upgrade and full upgrade (equivalent to node upgrade in versions earlier than PLAT 2.0 (E0611)) are supported in both standalone mode and cluster mode.

·     If you have configured an external NTP server, make sure the ntpdate {NtpServerIP} command is available on the node.

 

Table 13 Node upgrade in cluster mode

Upgrade method

Available in versions

Implementation

Node

Quick upgrade

PLAT 2.0 (E0704) and later

Upload the new image file on the Web interface

Upgrade the secondary master nodes and worker nodes

Upgrade the primary master node

Full upgrade

All E07 versions

Upload the new image file on the Web interface

Upgrade the secondary master nodes and worker nodes

Upgrade the primary master node

Upgrade in Back End

Upgrade nodes

 

Table 14 Node upgrade in standalone mode

Upgrade method

Implementation

Remarks

Quick upgrade

Upload the new image file on the Web interface

During the upgrade process, the Matrix service will be restarted, and the Web interface will be unavailable temporarily. Wait until the page is recovered and log in again.

In the current software version, you cannot upgrade Matrix from version E06 to E07 through uploading the image for upgrade on the full upgrade page.

Full upgrade

Upload the new image file on the Web interface

Upgrade in Back End

N/A

 

Upgrading E07 Matrix

The upgrade procedures differ between cluster mode and standalone mode. Perform the upgrade according to your environment.

 

 

NOTE:

·     In standalone mode, you must back up data before upgrade to avoid data loss caused by upgrade failure. During the upgrade process, do not disable master nodes. If the upgrade fails, you can select the Upgrade in Back End method on the Web interface and try again.

·     The user that performs upgrade in the back end must be the same as the user that installed the previous version. If you log in to Matrix by using username admin and upgrade Matrix in the back end, add sudo before the commands. To run the installation or uninstallation script, add sudo /bin/bash before the commands.

 

Upgrading Matrix in cluster mode

When upgrading Matrix in cluster mode, upgrade the worker nodes (if any), the secondary master nodes, and the primary master node in sequence.

Quick upgrade

1.     Use the northbound service VIP to log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     (Applicable to only the primary master node.) Perform primary/secondary master node switchover. Select Upgrade. In the primary/secondary master node switchover confirmation dialog box that opens, click OK.

4.     Click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Quick Upgrade tab, and select Upload Image for Upgrade, as shown in Figure 38.

Figure 38 Selecting an upgrade method

 

5.     Select the new version of the Matrix software package, and click Upload to upload the software package.

6.     After the software package is uploaded, click Apply to start upgrade.

The upgrade operation succeeds if the node icon turns blue.

7.     Click  in the upper right corner of the node, and select Enable to enable the node.

Full upgrade

Upload the image for upgrade on the Web interface

 

CAUTION:

·     To upgrade Matrix in cluster mode, upgrade the worker nodes (if any), the secondary master nodes, and the primary master node in sequence.

·     During the upgrade process, services on a node being upgraded are migrated to another node not disabled in the same cluster. To avoid service interruption, upgrade the nodes one by one.

 

1.     Use the northbound service VIP to log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     Select the node to be upgraded, and click  in the upper right corner of the node. Select Disable to disable the node. (To disable the primary master node, first perform primary/secondary master node switchover.)

4.     (Applicable to only the primary master node.) Perform primary/secondary master node switchover. Select Upgrade. In the primary/secondary master node switchover confirmation dialog box that opens, click OK.

5.     Click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Full Upgrade tab, and select Upload Image for Upgrade, as shown in Figure 39.

Figure 39 Selecting an upgrade method

 

6.     Select the new version of the Matrix software package, and click Upload to upload the software package.

7.     After the software package is uploaded, click Apply to start upgrade.

The upgrade operation succeeds if the node icon turns blue.

8.     Click  in the upper right corner of the node, and select Enable to enable the node.

Upgrade in Back End

1.     Use the northbound service VIP to log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     Select the node to be upgraded, and click  in the upper right corner of the node. Select Disable to disable the node. (To disable the primary master node, first perform primary/secondary master node switchover.)

4.     Log in to the back end of the Matrix node to be upgraded, upload the new version of the Matrix software package, and execute the upgrade script upgrade.sh to install the new version of the Matrix software package. Install the x86_64 version by using the root account in this example.

 

CAUTION:

If you use a non-root account, execute the upgrade.sh script by using the sudo bash upgrade.sh command.

 

[root@matrix01 ~]# unzip Matrix-V900R001B07D006-x86_64.zip

Archive:  Matrix-V900R001B07D006-x86_64.zip

   creating: Matrix-V900R001B07D006-x86_64/

   extracting: Matrix-V900R001B07D006-x86_64/matrix.tar.xz

   inflating: Matrix-V900R001B07D006-x86_64/install.sh

   inflating: Matrix-V900R001B07D006-x86_64/uninstall.sh

   inflating: Matrix-V900R001B07D006-x86_64/standaloneUpgrade.sh

   inflating: Matrix-V900R001B07D006-x86_64/upgrade.sh

[root@matrix01 ~]# cd Matrix-V900R001B07D006-x86_64

[root@matrix01 Matrix-V900R001B07D006-x86_64]# ./upgrade.sh

Uninstalling...

[uninstall] Stopping matrix service...

[uninstall] Done.

[uninstall] Uninstalling matrix...

[uninstall] Done.

[uninstall] Restoring default configuration...

[uninstall] Done.

[uninstall] Clearing matrix data...

[uninstall] Done.

Complete!

Installing...

[install] -----------------------------------

[install]   Matrix-V900R001B07D006-x86_64

[install]   H3Linux Release 1.1.2

[install]   Linux 3.10.0-957.27.2.el7.x86_64

[install] -----------------------------------

[install] WARNING: To avoid unknown error, do not interrupt this installation procedure.

[install] Checking environment...

[install] WARNING: Firewalld is active, please ensure matrix.service communication ports[6443 10250...etc.] are open or your cluster may not function correctly.

[install] Done.

[install] Checking current user permissions...

[install] Done.

[install] Decompressing matrix package...

[install] Done.

[install] Installing dependent software...

[install] Done.

[install] Starting matrix service...

[install] Done.

Complete!

[upgrade] WARNING: After the upgrade script is executed, please select Upgrade in Back End on the Deploy Cluster > Upgrade Node page.

5.     Use the systemctl status matrix command to identify whether the Matrix service is installed successfully on the node. The Active field displays active (running) if Matrix is installed successfully. (A verification sketch appears after this procedure.)

6.     Log in to Matrix, and enter the cluster deployment page. Select the node that has been installed with the new software package, click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Full Upgrade tab, and select Upgrade in Back End, as shown in Figure 40.

Figure 40 Selecting an upgrade method

 

7.     Click Apply.

The upgrade operation succeeds if the node icon turns blue.

8.     Click  in the upper right corner of the node, and select Enable to enable the node.
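
For reference, a minimal sketch of the check described in step 5 (the host name is illustrative and the output is abbreviated):

[root@matrix01 ~]# systemctl status matrix | grep Active
   Active: active (running) since ...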

Upgrading Matrix in standalone mode

Quick upgrade

1.     Use the northbound service VIP to log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     Select the node to be upgraded, click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Quick Upgrade tab, and select Upload Image for Upgrade, as shown in Figure 41.

Figure 41 Selecting an upgrade method

 

4.     Select the new version of the Matrix software package, and click Upload to upload the software package.

5.     After the software package is uploaded, click Apply to start upgrade.

The upgrade operation succeeds if the node icon turns blue.

Full upgrade

Upload the image for upgrade on the Web interface

1.     Use the northbound service VIP to log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     Select the node to be upgraded, click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Full Upgrade tab, and select Upload Image for Upgrade, as shown in Figure 42.

Figure 42 Selecting an upgrade method

 

4.     Select the new version of the Matrix software package, and click Upload to upload the software package.

5.     After the software package is uploaded, click Apply to start upgrade.

The upgrade operation succeeds if the node icon turns blue.

Upgrade in Back End

1.     Obtain the new version of the software package, and copy the software package to the destination directory on the server or upload the software image to the specified directory (/single-upgrade in this example) through FTP. Log in to the back end of the Matrix node to be upgraded, and install the new version of the Matrix software package. Install the x86_64 version by using the root account in this example.

 

IMPORTANT:

If you use a non-root account, execute the standaloneUpgrade.sh script by using the sudo bash standaloneUpgrade.sh command.

 

[root@matrix-node1 single-upgrade]# unzip Matrix-V900R001B07D006-x86_64.zip

Archive:  Matrix-V900R001B07D006-x86_64.zip

   creating: Matrix-V900R001B07D006-x86_64/

  inflating: Matrix-V900R001B07D006-x86_64/matrix.tar.xz

  inflating: Matrix-V900R001B07D006-x86_64/install.sh

  inflating: Matrix-V900R001B07D006-x86_64/uninstall.sh

  inflating: Matrix-V900R001B07D006-x86_64/standaloneUpgrade.sh

[root@matrix-node1 single-upgrade]# cd Matrix-V900R001B07D006-x86_64

[root@matrix-node1 Matrix-V900R001B07D006-x86_64]# ./standaloneUpgrade.sh

Upgrading...

[upgrade] -----------------------------------

[upgrade]   Matrix-V900R001B07D006-x86_64

[upgrade]   H3Linux Release 1.1.2

[upgrade]   Linux 3.10.0-957.27.2.el7.x86_64

[upgrade] -----------------------------------

[upgrade] WARNING: To avoid unknown error, do not interrupt this installation procedure.

[upgrade] Checking environment...

[upgrade] Done.

[upgrade] Checking current user permissions...

[upgrade] Done.

[upgrade] Decompressing matrix package...

[upgrade] Done.

[upgrade] Installing dependent software...

[upgrade] Done.

[upgrade] Starting matrix service...

[upgrade] Done.

Complete!

2.     Use the systemctl status matrix command to identify whether the Matrix service is installed successfully. The Active field displays active (running) if Matrix is installed successfully on the node.

3.     Use the northbound service VIP to log in to Matrix.

4.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

5.     Select the node that has been installed with the new software package, click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Full Upgrade tab, and select Upgrade in Back End, as shown in Figure 43.

Figure 43 Selecting an upgrade method

 

6.     Click Apply.

The upgrade operation succeeds if the node icon turns blue.

7.     Click  in the upper right corner of the node, and select Enable to enable the node.

Upgrading Matrix from E06 to E07

In cluster mode, use the script installation method. In standalone mode, use the full upgrade method.

Upgrading Matrix in cluster mode

Upgrade the secondary master nodes, the primary master node, and then the worker nodes in sequence. This example upgrades Matrix in a cluster with three master nodes and one worker node.

Upgrading by using the root account

1.     Use the northbound service VIP to log in to Matrix.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     (Applicable only to the primary master node.) Perform primary/secondary master node switchover. Select Upgrade. In the primary/secondary master node switchover confirmation dialog box that opens, click OK.

4.     Click  in the upper right corner of the node, and select Disable.

5.     Log in to the back end of the node through SSH, access the /opt/matrix/ directory, and use the sh uninstall.sh script to uninstall B06 Matrix.

6.     Upload and decompress the installation package for B07 Matrix. If the B06 or E0704 Matrix has been installed, upload the package to the /h3Linux path. Use the bash install.sh script to install B07 Matrix.

7.     After B07 Matrix is installed successfully, access the /opt/matrix/tools/matrix_upgrade_across_kubernetes/ directory, and execute the bash matrix_upgrade_across_kubernetes.sh VIP script to upgrade Matrix, where VIP represents the internal virtual IP of the cluster. You can view the VIP on the system parameter page. (A command sketch consolidating steps 5 through 7 appears after this procedure.)

8.     Access the cluster deployment page and verify that the node is in normal state.

 

 

NOTE:

·     After you upgrade the first secondary master node, the kube-controller-manager and kube-scheduler check item results are red on the node, which is normal. After you switch the role of the upgraded node to primary master, these check item results will be restored to the normal state on the node and the node icon will become blue after a certain period of time.

·     After the first secondary master node is upgraded and its role is switched to primary master, you can ignore abnormal results for the heapster and kube-proxy check items (if any) on the other nodes, and proceed to the next step.

 

9.     After you upgrade the first secondary master node, switch the node role to primary master:

a.     Log in to the master nodes to be upgraded, execute the systemctl stop matrix.service command to stop services on the nodes.

The secondary master node that has been upgraded becomes the primary node automatically.

b.     Access the cluster deployment page, verify that the primary/secondary switchover is performed successfully, and the primary master node is in normal state.

c.     Use the systemctl start matrix.service command to start services on the two master nodes.

d.     On the cluster deployment page, verify that all the three master nodes are operating correctly.

10.     Repeat steps 1 to 8 on the other master nodes.

11.     Repeat steps 1 to 8 on the worker nodes.

The icon of a node is blue-colored if the node is operating correctly. If any issue occurs during the upgrade process, see "Troubleshooting cluster upgrade failure."
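
For reference, the back-end commands in steps 5 through 7 can be consolidated as follows on one node. This is a minimal sketch that assumes the package name and paths used elsewhere in this document; the host name and the VIP (192.168.15.100) are illustrative.

[root@matrix01 ~]# cd /opt/matrix/
[root@matrix01 matrix]# sh uninstall.sh
[root@matrix01 matrix]# cd /h3Linux
[root@matrix01 h3Linux]# unzip Matrix-V900R001B07D006-x86_64.zip
[root@matrix01 h3Linux]# cd Matrix-V900R001B07D006-x86_64
[root@matrix01 Matrix-V900R001B07D006-x86_64]# bash install.sh
[root@matrix01 Matrix-V900R001B07D006-x86_64]# cd /opt/matrix/tools/matrix_upgrade_across_kubernetes/
[root@matrix01 matrix_upgrade_across_kubernetes]# bash matrix_upgrade_across_kubernetes.sh 192.168.15.100

For a non-root account, prefix the commands with sudo as described in the next procedure.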

Upgrading by using a non-root account

1.     Use visudo to grant the non-root user sudo permission for /bin/bash and /bin/sh. (An example sudoers entry appears after this procedure.)

Figure 44 Authorizing sudo permission

 

2.     Use the northbound service VIP to log in to Matrix.

3.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

4.     (Applicable only to the primary master node.) Perform primary/secondary master node switchover. Select Upgrade. In the primary/secondary master node switchover confirmation dialog box that opens, click OK.

5.     Click  in the upper right corner of the node, and select Disable.

6.     Log in to the back end of the node, access the /opt/matrix/ directory, and use the sudo sh uninstall.sh script to uninstall B06 Matrix.

7.     Upload and decompress the installation package for B07 Matrix. If the B06 or E0704 Matrix has been installed, upload the package to the /h3Linux path. Use the sudo bash install.sh script to install B07 Matrix.

8.     After B07 Matrix is installed successfully, access the /opt/matrix/tools/matrix_upgrade_across_kubernetes/ directory, execute the sudo bash matrix_upgrade_across_kubernetes.sh VIP script to upgrade Matrix, where VIP represents the internal virtual IP of the cluster. You can view the VIP on the system parameter page.

9.     Access the cluster deployment page and verify that the node is in normal state.

 

 

NOTE:

·     After you upgrade the first secondary master node, the kube-controller-manager and kube-scheduler check item results are red on the node, which is normal. After you switch the role of the upgraded node to primary master, these check item results will be restored to the normal state on the node and the node icon will become blue after a certain period of time.

·     After the first secondary master node is upgraded and its role is switched to primary master, you can ignore abnormal results for the heapster and kube-proxy check items (if any) on the other nodes, and proceed to the next step.

 

10.     After you upgrade the first secondary master node, switch the node role to primary master:

a.     Log in to the master nodes to be upgraded, execute the systemctl stop matrix.service command to stop services on the nodes.

The secondary master node that has been upgraded becomes the primary node automatically.

b.     Access the cluster deployment page, verify that the primary/secondary switchover is performed successfully, and the primary master node is in normal state.

c.     Use the systemctl start matrix.service command to start services on the two secondary master nodes.

d.     On the cluster deployment page, verify that all the three master nodes are operating correctly.

11.     Repeat steps 2 to 9 on the other master nodes.

12.     Repeat steps 2 to 9 on the worker nodes.

The icon of a node is blue-colored if the node is operating correctly. If any issue occurs during the upgrade process, see "Troubleshooting cluster upgrade failure."
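
For reference, the sudoers entry granted in step 1 might look like the following for a hypothetical admin user. The exact policy, such as whether NOPASSWD is required, depends on your environment.

# Illustrative sudoers entry; adjust the username and policy to your environment
admin ALL=(root) /bin/bash, /bin/sh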

Troubleshooting cluster upgrade failure

1.     Identify the node that failed to be upgraded.

¡     If the first node fails, perform the following tasks and then proceed to the next step:

# Use the cd /opt/matrix/tools/matrix_upgrade_across_kubernetes/ command to access the /opt/matrix/tools/matrix_upgrade_across_kubernetes/ directory on the failed node.

# Execute the bash rollback_first_node_upgrade.sh or sudo bash rollback_first_node_upgrade.sh command to roll back ETCD member data.

# Execute the etcdctl member list command on the primary master node. Verify that the IP addresses used before upgrade have been restored in the peerURLs and clientURLs.

¡     If the second or third master node or any worker node fails, proceed to the next step.

2.     Log in to the back end of the node, access the /opt/matrix/ directory, and execute the sh uninstall.sh or sudo sh uninstall.sh script to uninstall B06 Matrix.

3.     Upload and decompress the installation package for B07 Matrix. If the B06 or E0704 Matrix has been installed, upload the package to the /h3Linux path. Execute the bash install.sh or sudo bash install.sh script to install B07 Matrix.

4.     After B07 Matrix is installed successfully, access the /opt/matrix/tools/matrix_upgrade_across_kubernetes/ directory, execute the bash matrix_upgrade_across_kubernetes.sh VIP or sudo bash matrix_upgrade_across_kubernetes.sh VIP script to upgrade Matrix, where VIP represents the internal virtual IP of the cluster. You can view the VIP on the system parameter page.

Upgrading Matrix in standalone mode

Upgrade in back end

1.     Obtain the new version of the software package, and copy the software package to the destination directory on the server or upload the software image to the specified directory (/single-upgrade in this example) through FTP. Log in to the back end of the Matrix node to be upgraded, and install the new version of the Matrix software package. Install the x86_64 version by using the root account in this example.

 

IMPORTANT:

If you use a non-root account, execute the standaloneUpgrade.sh script by using the sudo bash standaloneUpgrade.sh command.

 

[root@matrix-node1 single-upgrade]# unzip Matrix-V900R001B07D006-x86_64.zip

Archive:  Matrix-V900R001B07D006-x86_64.zip

   creating: Matrix-V900R001B07D006-x86_64/

  inflating: Matrix-V900R001B07D006-x86_64/matrix.tar.xz

  inflating: Matrix-V900R001B07D006-x86_64/install.sh

  inflating: Matrix-V900R001B07D006-x86_64/uninstall.sh

  inflating: Matrix-V900R001B07D006-x86_64/standaloneUpgrade.sh

[root@matrix-node1 single-upgrade]# cd Matrix-V900R001B07D006-x86_64

[root@matrix-node1 Matrix-V900R001B07D006-x86_64]# ./standaloneUpgrade.sh

Upgrading...

[upgrade] -----------------------------------

[upgrade]   Matrix-V900R001B07D006-x86_64

[upgrade]   H3Linux Release 1.1.2

[upgrade]   Linux 3.10.0-957.27.2.el7.x86_64

[upgrade] -----------------------------------

[upgrade] WARNING: To avoid unknown error, do not interrupt this installation procedure.

[upgrade] Checking environment...

[upgrade] Done.

[upgrade] Checking current user permissions...

[upgrade] Done.

[upgrade] Decompressing matrix package...

[upgrade] Done.

[upgrade] Installing dependent software...

[upgrade] Done.

[upgrade] Starting matrix service...

[upgrade] Done.

Complete!

2.     Use the systemctl status matrix command to identify whether the Matrix service is installed successfully. The Active field displays active (running) if Matrix is installed successfully on the node.

3.     Use the northbound service VIP to log in to Matrix.

4.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

5.     Select the node that has been installed with the new software package, click  in the upper right corner of the node, and select Upgrade. In the dialog box that opens, click the Full Upgrade tab, and select Upgrade in Back End, as shown in Figure 45.

Figure 45 Selecting an upgrade method

 

6.     Click Apply.

The upgrade operation succeeds if the node icon turns blue.

Upgrading Unified Platform

CAUTION:

If you uninstall the existing Unified Platform and then install the new version, the custom configuration and data on the component might be lost. To retain the component configuration and data, use the backup function, as shown in "Backing up and restoring the configuration."

 

IMPORTANT:

When upgrading Unified Platform, install the applications in strict accordance with the required sequence.

 

The following two approaches are available for upgrading Unified Platform:

·     Directly install the new version. The upgrade retains Unified Platform configuration and the deployed Unified Platform components are still available after the upgrade. If the upgrade fails, uninstall and then install Unified Platform again.

·     Uninstall the existing Unified Platform and then install the new version.

As a best practice, install the new version directly.

To upgrade Unified Platform:

1.     Use the northbound service VIP to log in to Matrix.

2.     Select Deploy from the top navigation bar and then select Application from the left navigation pane.

3.     Click the  icon to upload the new version of Unified Platform installation package.

The uploaded package will be displayed on the Deployment Procedure page.

4.     Install the new Unified Platform version. For more information, see "Deploying the applications."

 

 


Uninstalling Unified Platform

IMPORTANT:

·     To upgrade an application that does not support In-Service Software Upgrade (ISSU), first uninstall the applications deployed after the target application, in the reverse order of installation, and then uninstall the target application. After the upgrade, you must redeploy those applications in sequence.

·     The uninstallation of a single component or application will affect the use of other components or applications. Please uninstall a component or application with caution.

·     After you uninstall and reinstall Unified Platform, you must clear the content in the /var/lib/ssdata/ directory on each node.

 

To retain the component configuration and data, use the backup function before uninstallation, as shown in "Backing up and restoring the configuration."

To uninstall Unified Platform, uninstall the applications in the reverse order of installation.

To uninstall an application:

1.     Enter https://ip_address:8443/matrix/ui in your browser to log in to Matrix. ip_address represents the northbound service VIP address.

2.     On the top navigation bar, click Deploy, and then select Applications from the left navigation pane.

3.     Click the  icon for an application.

4.     In the confirmation dialog box that opens, click OK.

 

 


FAQ

How can I prepare a disk partition for GlusterFS on a node?

Prepare a disk partition for GlusterFS on each node in the cluster, and record the disk partition name for future use.

To prepare a disk partition for GlusterFS on a node, perform one of the following tasks (a command sketch follows the list):

·     Manually create a new disk partition.

a.     Reserve sufficient disk space for GlusterFS when you install the operating system.

b.     After the operating system is installed, execute the fdisk command to create a disk partition.

Figure 46 gives an example of how to create a 200 GiB disk partition /dev/sda7 on disk sda. If the system fails to obtain the partition list, execute the reboot command to restart the node.

Figure 46 Creating a disk partition

 

·     Use an existing disk partition.

If an existing disk partition with 250 GB or more capacity has not been used or mounted, you can clear the disk partition and use it for GlusterFS, for example, /dev/sda7 as shown in Figure 47. To clear the disk partition, execute the wipefs -a /dev/sda7 command. If the system prompts invalid parameters when the GlusterFS application is being deployed or when a new partition is being created after this command is executed, execute the wipefs -a --force /dev/sda7 command.

Figure 47 Disk partition information

 

·     Prepare an independent disk for GlusterFS.

Execute the wipefs -a diskname command to clear the disk before using it for GlusterFS.

Figure 48 Clearing disk
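
For reference, a minimal command sketch of the methods above (device and partition names are illustrative):

[root@node1 ~]# fdisk /dev/sda          # create a new partition interactively: enter n, accept the defaults, enter +200G as the last sector, and then enter w
[root@node1 ~]# lsblk                   # verify that the new partition (for example, sda7) is listed and not mounted
[root@node1 ~]# wipefs -a /dev/sda7     # clear an existing unused partition before using it for GlusterFS
[root@node1 ~]# wipefs -a /dev/sdb      # clear an independent blank disk before using it for GlusterFS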

 

What is and how can I configure NIC bonding?

NIC bonding allows you to bind multiple NICs to form a logical NIC for NIC redundancy, bandwidth expansion, and load balancing.

Seven NIC bonding modes are available for a Linux system. As a best practice, use mode 2 or mode 4 in Unified Platform deployment.

·     Mode 2 (XOR)—Transmits packets based on the specified transmit hash policy and works in conjunction with the static aggregation mode on a switch.

·     Mode 4 (802.3ad)—Implements the 802.3ad dynamic link aggregation mode and works in conjunction with the dynamic link aggregation group on a switch.

This example describes how to configure NIC bonding mode 2 on the servers after operating system installation.

To configure NIC bonding mode 2, perform the following steps on each of the three servers:

1.     Create and configure the bonding interface.

a.     Execute the vim /etc/sysconfig/network-scripts/ifcfg-bond0 command to create bonding interface bond0.

b.     Access the ifcfg-bond0 configuration file and configure the following parameters based on the actual networking plan. All these parameters must be set.

Set the NIC binding mode to mode 2.

Sample settings:

DEVICE=bond0

IPADDR=192.168.15.99

NETMASK=255.255.0.0

GATEWAY=192.168.15.1

ONBOOT=yes

BOOTPROTO=none

USERCTL=no

NM_CONTROLLED=no

BONDING_OPTS="mode=2 miimon=120"

DEVICE represents the name of the vNIC, and miimon represents the link state detection interval in milliseconds.

2.     Execute the vim /etc/modprobe.d/bonding.conf command to access the bonding configuration file, and then add configuration alias bond0 bonding.

3.     Configure the physical NICs.

a.     Create a directory and back up the files of the physical NICs to the directory.

b.     Add the two network ports to the bonding interface.

c.     Configure the NIC settings.

Use the ens32 NIC as an example. Execute the vim /etc/sysconfig/network-scripts/ifcfg-ens32 command to access the NIC configuration file and configure the following parameters based on the actual networking plan. All these parameters must be set.

Sample settings:

TYPE=Ethernet

DEVICE=ens32

BOOTPROTO=none

ONBOOT=yes

MASTER=bond0

SLAVE=yes

USERCTL=no

NM_CONTROLLED=no

DEVICE represents the name of the NIC, and MASTER represents the name of the vNIC.

4.     Execute the modprobe bonding command to load the bonding module.

5.     Execute the service network restart command to restart the network service. If you have modified the bonding configuration multiple times, you might need to restart the server.

6.     Verify that the configuration has taken effect. (A sample check appears after this procedure.)

¡     Execute the cat /sys/class/net/bond0/bonding/mode command to verify that the bonding mode has taken effect.

Figure 49 Verifying the bonding mode

 

¡     Execute the cat /proc/net/bonding/bond0 command to verify bonding interface information.

Figure 50 Verifying bonding interface information

 

7.     Execute the vim /etc/rc.d/rc.local command, and add configuration ifenslave bond0 ens32 ens33 ens34 to the configuration file.
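
For reference, a minimal mode 2 verification sketch for step 6 (output abbreviated and illustrative):

[root@node1 ~]# cat /sys/class/net/bond0/bonding/mode
balance-xor 2
[root@node1 ~]# cat /proc/net/bonding/bond0
...
Bonding Mode: load balancing (xor)
...
Slave Interface: ens32
...
Slave Interface: ens33
...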

How can I configure a security policy when a node has multiple NICs in up state?

1.     Log in to Matrix.

2.     Click Deploy on the top navigation bar and then select System > Security > Security Policies.

3.     Click Add.

4.     In the Basic Settings area, set the default action to ACCEPT.

5.     Click Add in the Rules Info area, and then configure a rule as follows in the dialog box that opens:

¡     Specify a source address, which can be the IP address of any NIC except for the IP address of the NIC used by Matrix.

¡     Select TCP as the protocol.

¡     Specify the following destination ports: 8101,44444,2379,2380,8088,6443,10251,10252,10250,10255,10256.

¡     Set the action to ACCEPT.

 


IMPORTANT:

You must add the IP addresses of all NICs on all nodes, except the IP address of the NIC used by Matrix, to the security policy. For example, suppose that node 1, node 2, and node 3 each have a NIC (other than the NIC used by Matrix) with an IP address of 1.1.1.1, 2.2.2.2, and 3.3.3.3, respectively. You must add three rules to the security policy that differ only in source address: the source addresses are 1.1.1.1, 2.2.2.2, and 3.3.3.3, respectively, the protocol is TCP, the destination ports are 8101, 44444, 2379, 2380, 8088, 6443, 10251, 10252, 10250, 10255, and 10256, and the action is ACCEPT.

 

6.     Click Apply.

Figure 51 Configuring a security policy

 

7.     Enable the disabled NICs. NIC eth33 is enabled in this example.

[root@node01 ~]# ifup eth33

How can I expand disks for the GlusterFS application?

Providing blank disks or blank physical partitions

GlusterFS uses multiple disks. You must provide blank disks or blank physical partitions. For detailed requirements, see Table 15.

Table 15 Hardware requirements

Method

Remarks

Using blank disks

·     Do not partition the blank disks.

·     Before deploying the GlusterFS component, use the wipefs -a diskname command to clear the disks to ensure that the component can be deployed correctly.

Using blank physical partitions

Blank physical partitions include partitions newly created on existing disks and cleared mounted partitions.

·     To create new partitions on an existing disk, use the fdisk diskname command. Execute the lsblk command to verify that the disk partitions were created successfully.

·     To clear the mounted partitions, use the wipefs -a diskname command to clear contents in the partitions.

 
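The following is a minimal command sketch for preparing a blank physical partition as described in Table 15. It assumes that the existing disk is /dev/sdb and the new or previously mounted partition is /dev/sdb1; replace the device names with the actual ones in your environment.

# Create a new partition interactively on the existing disk
fdisk /dev/sdb
# Verify that the partition was created successfully
lsblk
# Clear a previously mounted partition before giving it to GlusterFS
wipefs -a /dev/sdb1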

Adding and expanding disks for the GlusterFS application

1.     In the address bar of the browser, enter https://ip_address:8443/matrix/ui and press Enter to log in to Matrix. Navigate to the Deploy > Applications page.

2.     Click the Add Application  icon to enter the deployment steps page.

3.     Click the Upload Application  icon. The application deployment window opens.

4.     Click Upload to upload the component installation package common_PLAT_GlusterFS_2.0_<version>.zip to Matrix.

Figure 52 Uploading the application package

 

5.     After uploading the package, select common_PLAT_GlusterFS_2.0_<version>.zip on the page for selecting an installation package, and click Next to start parsing the package. After the package is parsed, the page for selecting applications opens.

Figure 53 Selecting an installation package

 

6.     On the page for selecting applications, click Next.

7.     Click Next to enter the page for configuring the database. The GlusterFS application does not support configuring shared storage in the current software version, so no settings are required on the shared storage configuration page.

8.     Click Next to enter the page for configuring parameters. The GlusterFS application does not support configuring the database in the current software version, so no settings are required on the database configuration page.

9.     Click the Edit  icon on the parameter configuration page. Then, you can edit configuration parameters as needed. Parameters include the host name displayed on the cluster page and the full device file path, as shown in Figure 54.

Figure 54 Editing configuration parameters

 

10.     On the cluster deployment page, confirm the name of the host to be deployed.

Figure 55 Viewing the host name

 


CAUTION:

·     GlusterFS E0608P01 supports the use of multiple disks or disk partitions.

·     After disks are used by GlusterFS, you cannot edit or replace the disk names.

 

11.     After configuration is completed, click Deploy to deploy the GlusterFS application on the platform.

How can I uniformly modify the node passwords for Unified Platform?

After you modify the password of each node in the cluster, you must log in to Matrix and modify the passwords accordingly on the cluster deployment page. Otherwise, the passwords saved on Matrix will be different from the actual passwords, and as a result the cluster might fail or other SSH-related operations might become abnormal.

 

 

NOTE:

If you log in to Matrix by using username admin, as a best practice, switch to the root user and execute commands. If you cannot switch to the root user, you must add sudo before the commands.

 

Perform the following tasks to modify the passwords:

1.     SSH to the back end of the node, and temporarily disable the SSH lockout configuration so that the account is not locked when clients repeatedly initiate requests with the old, now incorrect password while the password is being modified.

¡     Edit the configuration file /etc/pam.d/password-auth, and comment out the four lines shown in Figure 56.

Figure 56 Commenting out commands

 

2.     In the back end of the node, use the passwd command to modify the password.

The password must contain a minimum of 12 characters from the following categories: digits, uppercase letters, lowercase letters, and special characters.

3.     Use the northbound service VIP to log in to Matrix.

4.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

5.     Select the node, click the icon in the upper right corner of the node, and then select Edit. In the dialog box that opens, enter the new password, and then click Apply.

6.     If some other applications (for example, third-party monitor, service system, and inspection tools) connect to a cluster node through SSH, you must synchronously modify the password.

7.     After the password is modified, uncomment the lines that you commented out in the /etc/pam.d/password-auth file in step 1.
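The following is a minimal back-end sketch of steps 1, 2, and 7 above. The backup file name /root/password-auth.bak is an example, and the exact lines to comment out are the four lines shown in Figure 56.

# Step 1: back up the PAM configuration file, then comment out the four lines shown in Figure 56
cp /etc/pam.d/password-auth /root/password-auth.bak
vim /etc/pam.d/password-auth
# Step 2: change the password of the current node (add sudo if you are not the root user)
passwd
# Step 7: after the passwords are updated on Matrix, uncomment the lines (or restore the backup file)
vim /etc/pam.d/password-auth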

How can I manually modify the ES values in the back end?

1.     Execute the kubectl edit sts elasticsearch-node-1 -n service-software command.

2.     Find ES_JAVA_OPTS in the env entry, and modify its value to -Xms8g -Xmx8g, where 8g can be adjusted as needed. However, make sure the value following Xms is the same as that following Xmx.

3.     Find memory in the limits entry, and modify its value from 2G to 12G. You can adjust the value 12G as needed. However, make sure the value is at least 3G greater than the value following Xms or Xmx.

4.     Save the configuration and exit.

5.     Execute the kubectl edit sts elasticsearch-node-2 -n service-software command. Repeat steps 2 through 4.

6.     Execute the kubectl edit sts elasticsearch-node-3 -n service-software command. Repeat steps 2 through 4.

7.     After modification, enter any elasticsearch pod, and execute the curl 'elasticsearch-http-service:9200/_cat/health?v' command to view the cluster state. When the status in the returned value is green or yellow, the cluster is available. If there is a large amount of data, ES takes some time to restore the data after you complete the preceding steps.
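The following sketch summarizes the modification for one ES node. The YAML excerpt in the comments is illustrative only; the field layout in your statefulset definition might differ slightly.

# Open the statefulset of the first ES node in an editor
kubectl edit sts elasticsearch-node-1 -n service-software
# Fields to modify (illustrative excerpt):
#   env:
#   - name: ES_JAVA_OPTS
#     value: "-Xms8g -Xmx8g"        # the Xms and Xmx values must be equal
#   resources:
#     limits:
#       memory: 12G                 # at least 3G greater than the Xms/Xmx value
# Repeat the same edit for elasticsearch-node-2 and elasticsearch-node-3, and then check the cluster state from inside any elasticsearch pod:
curl 'elasticsearch-http-service:9200/_cat/health?v'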

If the IOPS of a disk is as low as 1K, how can I verify whether the RAID controller cache policies are enabled when configuring a RAID array for servers?

When configuring a RAID array for servers, you must enable the RAID controller cache policies. Table 16 shows only some of the parameters in the RAID configuration. For the complete configuration, see the storage controller user guide for your server model.

Table 16 Parameters

Parameter

Description

Array Label

Name of the RAID array. The default is DefaultValue0.

Stripe Size

Strip size, which determines the data block size of a stripe on each drive.

Array Size Selection

RAID array capacity.

Read Cache

Read cache policy status. Options include Enabled and Disabled.

Write Cache

Write cache policy status. Options include:

·     Enable Always—Always enables the write cache. Without a supercapacitor installed, this setting might cause data loss if the power supply fails.

·     Enable With Backup Unit—Disables the write cache when the supercapacitor is absent or not ready.

·     Disabled—Disables the write cache.

Create RAID via

Operation after the RAID array is created. Options include Quick Init, Skip Init, Build/Verify, and Clear.

 

 
