H3C SNA Center Installation and Component Deployment Guide-E11xx-5W102



Introduction

H3C SeerNetwork Architecture (SNA) Center is the core component of the next-generation smart network solution SNA, providing unified network management, control, analysis, and service orchestration. The SNA Center can coordinate services across management zones to simplify operations and reduce operating costs. It detects network conditions in real time and supports intent-based and network condition-based analysis to enable automated service deployment and risk prediction, making networks simpler, more intelligent, and more efficient.

The SNA Center can be deployed only in a cluster with three master nodes on the SNA Installer.

This document describes how to deploy the SNA Center and its components. Figure 1 shows the deployment workflow.

Figure 1 SNA Center deployment flowchart

 


Preparing for installation

Hardware requirements

IMPORTANT:

For correct operation of the SNA Installer, install it on a physical server rather than a virtual machine.

 

The server hardware requirements depend on the SNA Center deployment scheme. Table 1 describes the hardware requirements for deploying only controllers in the SNA Center. Table 2 describes the hardware requirements for deploying controllers and the SeerAnalyzer in the SNA Center.

Table 1 Hardware requirements for deploying only controllers in the SNA Center

Minimal configuration (supports only essential functions, inadequate for actual service deployment):

·     CPU: x86-64 (Intel 64/AMD 64), 16 cores, 2.0 GHz or above.

·     Memory size: 64 GB or above.

·     Drive: 500 GB or above of 7.2K RPM SATA/SAS HDDs, with one RAID controller that has 1G cache memory (RAID 0, RAID 1, or RAID 10).

·     Ethernet ports: 2 × 10Gbps or above Ethernet ports + 2 × 1Gbps or above Ethernet ports.

Recommended configuration (supports all functions):

·     CPU: x86-64 (Intel 64/AMD 64), 16 cores, 2.0 GHz or above.

·     Memory size: 128 GB or above.

·     Drive: 1 TB or above of 7.2K RPM SATA/SAS HDDs, with one RAID controller that has 1G cache memory (RAID 0, RAID 1, or RAID 10).

·     Ethernet ports: 2 × 10Gbps or above Ethernet ports + 2 × 1Gbps or above Ethernet ports.

 

Table 2 Hardware requirements for deploying controllers and the SeerAnalyzer in the SNA Center

Minimal configuration (supports only essential functions, inadequate for actual service deployment):

·     CPU: x86-64 (Intel 64/AMD 64), 16 cores, 2.0 GHz or above.

·     Memory size: 256 GB or above.

·     Drive: 1 TB or above of 7.2K RPM SATA/SAS HDDs, with one RAID controller that has 1G cache memory (RAID 0, RAID 1, or RAID 10).

·     Ethernet ports: 2 × 10Gbps or above Ethernet ports + 2 × 1Gbps or above Ethernet ports.

Recommended configuration (supports all functions):

·     CPU: x86-64 (Intel 64/AMD 64), 24 cores, 2.0 GHz or above.

·     Memory size: 256 GB or above.

·     Drive: 2 TB or above of 7.2K RPM SATA/SAS HDDs, with one RAID controller that has 1G cache memory (RAID 0, RAID 1, or RAID 10).

·     Ethernet ports: 2 × 10Gbps or above Ethernet ports + 2 × 1Gbps or above Ethernet ports.

 

Software requirements

·     Server: Supports CentOS 7.5 or a higher version as the operating system.

·     Client: Google Chrome 60 or a higher version as the browser for login.

 

 

NOTE:

Software on which the SNA Center depends will be installed automatically during SNA Center installation and no extra operations are required. For more information about the software dependencies, see "Appendix A SNA Center software dependencies."

 

Pre-installation checklist

Table 3 Pre-installation checklist

·     Server hardware: The CPU, memory, disk, and NIC settings are as required.

·     Server software:

¡     The server supports CentOS 7.5 or a higher version.

¡     The system time settings are configured correctly. As a best practice, configure NTP on each node and specify the same time source for all the nodes.

·     Client: Google Chrome 60 or a higher version is installed on the client.
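For the time settings requirement, the following is a minimal sketch of pointing each node at the same time source with chrony (one of the software dependencies listed in Appendix A) after the operating system is installed. The NTP server address 192.168.10.10 is a placeholder for your own time source.

[root@sna001 ~]# echo "server 192.168.10.10 iburst" >> /etc/chrony.conf
[root@sna001 ~]# systemctl enable chronyd
[root@sna001 ~]# systemctl restart chronyd
[root@sna001 ~]# chronyc sources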

 

 


Installing the H3Linux operating system and the SNA Installer platform

IMPORTANT:

·     Installing the H3Linux operating system on a server that already has an operating system installed replaces the existing operating system. To avoid data loss, back up data before you install the H3Linux operating system.

·     As a best practice, set the server's next startup mode to UEFI.

·     When using HDM to install the operating system for H3C servers, do not use KVM to install the same image simultaneously for multiple servers.

·     A power failure during the installation might cause installation failure of some service components. For function integrity, perform a reinstallation if a power failure has occurred during the installation process.

 

This section describes the installation procedure on a server without an operating system installed.

Obtaining the operating system image

To install the H3Linux operating system, first obtain file SNA_CENTER-PACKAGE-version.iso of the required version. version represents the software version number. The image file contains the installation packages of the operating system, SNA Installer, SNA Center, license server, and software dependencies.
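If a checksum is published for the image, you can verify the download before loading it. The following is a sketch run on any Linux host that holds the ISO file; compare the output against the published value.

md5sum SNA_CENTER-PACKAGE-version.iso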

Installing the H3Linux operating system and the SNA Installer platform on a server

1.     Use the remote console on the server to load the ISO image through the virtual optical drive.

2.     Configure the server to boot from the virtual optical drive and then restart the server.

The INSTALLATION SUMMARY page opens.

Figure 2 INSTALLATION SUMMARY page

 

3.     (Optional.) In the SOFTWARE area, click LICENSE SERVER, and then select Install License Server to install the license server.

 

 

NOTE:

·     You can install the license server on two or three nodes in the SNA Installer cluster to achieve HA or on only one server that is in or out of the cluster. As a best practice, install the license server on all the three master nodes. For more information about license server HA, see H3C License Server Installation Guide.

·     If you install the license server on a server that is not in the cluster, make sure the server can communicate with all the three master nodes in the cluster.

 

Figure 3 LICENSE SERVER page

 

4.     In the SYSTEM area, click NETWORK & HOST NAME. On the NETWORK & HOST NAME page, perform the following tasks:

a.     Enter a new host name in the Host name field and then click Apply.

 

IMPORTANT:

·     The host names of the nodes in a cluster must be different. A host name must be a string of up to 63 characters. It can contain only lower-case letters, digits, hyphens (-), and dots (.) but cannot start or end with a hyphen (-) or dot (.).

·     You cannot change the host name after the installation. If you must change it, contact the after-sales engineer.

 

b.     Click Configure to enter the network configuration page.

 

IMPORTANT:

Configure network settings for only one NIC and disable the other NICs.

 

Figure 4 NETWORK & HOST NAME page

 

5.     Configure the network settings as follows:

a.     Click the General tab, select Automatically connect to this network when it is available, and retain the default setting for the All users may connect to this network field.

Figure 5 General settings

 

b.     Click the IPv4 Settings tab, select Manual from the Method field, and then click Add to add an IPv4 address for the server. You must specify a gateway for the server when adding an IPv4 address.

 

CAUTION:

The 172.17.0.0/16 and 172.18.0.0/16 network segments are used by Docker and Harbor. To avoid unavailability of the SNA Installer cluster, do not add an IPv4 address in either network segment.
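After the operating system is installed, you can verify from the CLI that no NIC address falls in these reserved segments. The following is a minimal sketch; no output means no conflict.

[root@sna001 ~]# ip -4 addr | grep -E "172\.(17|18)\."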

 

Figure 6 Configuring an IPv4 address for the server

 

c.     Click Save to save the configuration.

6.     On the NETWORK & HOST NAME page, verify that the NIC settings are correct and the NICs are in correct state. Then, click Done to return to the INSTALLATION SUMMARY page.

Figure 7 NETWORK & HOST NAME page

 

7.     Click Begin Installation to start the installation. During the installation process, you will be prompted to configure USER SETTINGS as shown in Figure 8. You can set the root password for the system at the prompt or set the password after the installation as shown in Figure 9.

Figure 8 Setting the root password


 

Figure 9 Setting the root password

 

IMPORTANT:

Before the system restarts, terminate the connection to the virtual optical drive used for ISO image loading.

 

After the installation is complete, the system reboots to finish the installation of the operating system. If you set the root password after the installation, click Finish configuration for the system to restart.

The H3Linux login page opens after the system restarts.

Figure 10 H3Linux login page

 

8.     Use the root password to access the H3Linux operating system and execute the systemctl status matrix command to view the installation result. The installation succeeds if the active state is active (running).

[root@sna001 ~]# systemctl status matrix

matrix.service - Matrix Server

   Loaded: loaded (/usr/lib/systemd/system/matrix.service; enabled; vendor preset: disabled)

   Active: active (running) since Thu 2019-04-11 00:44:51 CST; 18h ago

 Main PID: 1180 (karaf)

...
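If the Active field does not show active (running), you can inspect the service logs to locate the failure. For example:

[root@sna001 ~]# journalctl -u matrix --no-pager -n 50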


Installing the SNA Center

Logging in to the SNA Installer platform

1.     Select one of the three master nodes to log in to the SNA Installer platform.

2.     Enter https://sna_Installer_ip_address:8443/matrix/ui in the address bar and then press Enter. sna_Installer_ip_address represents the IP address of the master node. 8443 represents the default port number.

Figure 11 SNA Installer platform login page

 

3.     Select a language, enter the username and password, and then click Log in.

The default username is admin and the default password is matrix123.

Configuring cluster parameters

The system opens the deployment wizard at the first login. On the Configure cluster parameters page, configure cluster parameters as shown in Table 4 and then click Apply. After the system displays a message that the cluster parameters are configured or modified successfully, click Next.

Figure 12 Configuring cluster parameters

 

Table 4 Cluster parameters description

·     Cluster internal virtual IP: Enter the IP address for internal communication. The mask length is set to 32 and the IP address cannot be modified once configured. This address must be in the same network segment as the master nodes.

·     VIP: Enter the IP address for northbound interface services. The mask length is set to 32. This address must be in the same network segment as the master nodes.

·     Service IP pool: Specify an address pool for IP assignment to services in the cluster. This address pool cannot conflict with other network segments in the deployment environment. By default, the address pool is 10.96.0.0/16.

·     Container IP pool: Specify an address pool for IP assignment to containers. This address pool cannot conflict with other network segments in the deployment environment. By default, the address pool is 177.177.0.0/16.

·     Use internal NTP server: Select whether to use the internal NTP server. If you select No, you must specify an NTP server.

·     NTP server IP: Enter the IP address of an NTP server. The IP address cannot be the same as the IP address of any node in the cluster.

·     Deploy NFS shared storage: Select whether to deploy NFS shared storage. If you select External, you must specify an external storage path. In this configuration, select Undeployed.

·     External storage path: Specify the external storage path for the NFS server.

 

Creating an SNA Installer cluster

1.     In the Master node area, click the plus icon.

Figure 13 Creating a cluster

 

2.     Configure node parameters as shown in Table 5 and then click Apply.

Figure 14 Configuring node parameters

 

Table 5 Node parameter description

·     Type: Displays the node type. Options include Master and Worker. This field cannot be modified.

·     IP address: Specify the IP address of the node.

·     Username: Specify the username for the root user to access the operating system.

·     Password: Specify the password for the root user to access the operating system.

 

3.     Add the other two master nodes in the same way you added the first master node.

Figure 15 Master node configuration completed

 

4.     Click Start deployment.

Figure 16 Deployment in process

 

When the deployment progress of each node reaches 100%, the deployment finishes. The tile that has a star icon  at the left corner represents the primary master node.

Figure 17 Cluster deployment completed

 

5.     Click Next.

The page that opens displays the contents of the SNA Center installation package, which the SNA Installer platform obtains and decompresses automatically.

Installing the SNA Center

1.     Click the right chevron button  to enter the configuration page.

Figure 18 Installing SNA Installer applications

 

2.     Configure the parameters as needed. If you plan to deploy the SeerAnalyzer component, configure the big data-related parameters in the snac-be configuration item, as described in Table 6.

Figure 19 Deploying SNA Installer applications

 

Table 6 Big data parameters

·     DCIps: Specify the IP addresses of the Big Data cluster nodes 1, 2, and 3. Use commas to separate the IP addresses.

·     DCnames: Specify the domain names of the Big Data cluster nodes 1, 2, and 3. Use commas to separate the domain names.

·     DC_virtual_ip: Specify the virtual IP address of the Big Data cluster (IP address of the primary master node).

·     DC_pwd: Specify the password for logging in to the Big Data cluster through SSH.

·     DCsshServerport: Specify the SSH service port for the Big Data cluster.

 

3.     Click Deploy.

After the deployment finishes, the page indicates that the SNA Center has been deployed. You can click the link to access the SNA Center login page or click Completed to enter the SNA Installer application list. The application list displays information about the deployed components, as shown in Figure 21.

Figure 20 Component deployment completed

 

Figure 21 Application list

 

Logging in to the SNA Center

1.     Enter http://sna_center_ip_address:10080/portal/ in the address bar and then press Enter. sna_center_ip_address represents the VIP of the SNA Installer cluster.

Figure 22 SNA Center login page

 

2.     Select a language, enter the username and password, and then click Log in.

The default username is admin and the default password is admin@h3c.

Figure 23 SNA Center homepage

 


Registering the SNA Center

After you install the SNA Center, you can use all features and functions of the SNA Center for a 180-day trial period. After the trial period expires, you must get your SNA Center licensed.

To register the SNA Center:

1.     Log in to the license server and obtain the device information file for the license server.

2.     Log in to the H3C website and use the license keys and device information file to apply for license files.

3.     Upload the license files to the license server and connect the SNA Center to the license server.

 

 

NOTE:

A license file might become invalid if the following hardware changes occur on the server:

·     NIC disabling, new NIC activation, NIC replacement, or NIC damage.

·     CPU replacement.

 

Obtaining the device information file

1.     Enter http://license_ip_address:8090/ in the address bar and then press Enter. If HA is not configured, license_ip_address represents the IP address of the server that hosts the license server. If HA is configured, license_ip_address represents the virtual IP address or the IP address of the master license server.

2.     Select a language, enter the username and password, and then click Login.

The default username is admin and the default password is admin@h3c.

3.     Select License Management > License Files.

4.     On the License Files page, click Export DID.

Applying for license files

To apply for a license file for the SNA Center for the first time, see "Registering a license for the first time." For any subsequent license file applications, see "Registering an upgrade license."

A license key is required for each license file application.

Registering a license for the first time

1.     Go to the H3C website at http://www.h3c.com.hk/Technical_Support___Documents/Product_Licensing/ and select Register the First Time.

2.     From the Product category list, select New Network_H3C SNA Center.

3.     Provide the license, device, and contact information as described in Table 7.

Table 7 Configuration items

·     License information: Enter the license key.

·     Device information: Upload the device information file.

·     Contact information: Enter your contact information.

Items marked with an asterisk (*) are required.

 

4.     Enter the verification code and select I accept all terms of H3C Legal Statement, and then click Get activation key or file.

5.     Save the activation file to the PC.

Registering an upgrade license

1.     Go to the H3C Website at http://www.h3c.com.hk/Technical_Support___Documents/Product_Licensing/ and select Register Upgrade Licenses.

2.     From the Product category list, select New Network_H3C SNA Center.

3.     Provide the license, device, and contact information as described in Table 8.

Table 8 Configuration items

·     Device information: Upload the device information file.

·     License information: Enter the license key.

·     Contact information: Enter your contact information.

Items marked with an asterisk (*) are required.

 

4.     Enter the verification code and select I accept all terms of H3C Legal Statement, and click Get activation key or file.

5.     Save the activation file to the PC.

Licensing

1.     Upload the license files to the license server and add a license client.

a.     Log in to the license server. Select License Management > License Files.

b.     On the License Files page, click Install license file.

c.     In the dialog box that opens, click Browse… to select the license files saved locally. Then, click OK.

After the license files are uploaded, licensing information is displayed on the License Files page.

d.     Select Configuration > License Clients. On the page that opens, click Add.

e.     Enter client information such as client name and password, and then click OK.

2.     Log in to the SNA Center to obtain licensing information.

a.     Click Settings on the top navigation bar and then select License from the left navigation pane.

b.     In the License Server Info area, provide the IP address, username, password, and port number as described in Table 9.

Table 9 License server information

·     IP address: Specify the IP address configured on the license server used for internal communication in the SNA Installer cluster.

·     Port number: Specify the service port number of the license server. The default value is 5555.

·     Username: Specify the username configured on the license server.

·     Password: Specify the user password configured on the license server.

 

c.     Click Connect to connect the controller to the license server.

The SNA Center automatically obtains licensing information after connecting to the license server.
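If the connection fails, verify from a cluster node that the service port of the license server is reachable. The following is a minimal sketch, where 192.168.10.20 is a placeholder for the license server address and 5555 is the default port.

[root@sna001 ~]# timeout 5 bash -c 'cat < /dev/null > /dev/tcp/192.168.10.20/5555' && echo "port 5555 reachable"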


Managing SNA Center components

IMPORTANT:

You can deploy, upgrade, and uninstall SNA Center components only on the SNA Center.

 

Different scenarios require different components. This section uses the Layer 2 campus network scenario to illustrate the procedures.

Deploying components

Preparing for deployment

1.     Use Table 10 to identify the required components.

Table 10 Required components for each scenario

·     Campus network:

¡     VCFC-Campus: VCFC-CAMPUS-version-MATRIX.zip

¡     vDHCP Server: vDHCPS-version-X64.zip

·     Cloud DC:

¡     VCFC-DataCenter: VCFC_DC-version-MATRIX.zip

¡     vDHCP Server: vDHCPS-version-X64.zip

·     WAN:

¡     VCFC-WAN: VCFCWAN-version.zip

¡     Config Channel (optional): ConfigChannel-version.zip

¡     vDHCP Server (optional): vDHCPS-version-X64.zip

·     Intelligent Analysis Engine:

¡     SeerAnalyzer: SEERANALYZER-version-MATRIX.zip

¡     Config Channel: ConfigChannel-version.zip

·     Campus network + Intelligent Analysis Engine:

¡     Oasis Platform: oasis-version.zip

 

 

NOTE:

·     version in an installation package name represents the software version number.

·     Cloud DC and WAN scenarios are not supported in the current software version.

 

2.     To deploy the SeerAnalyzer component, modify the configuration file to configure big data-related parameters if you did not specify the parameters during SNA deployment. For more information about the parameters to specify, see step 2 in "Installing the SNA Center."

3.     Enable Ethernet adapters on the server that hosts the SNA Center. For the Layer 2 campus network scenario, you must enable two Ethernet adapters. For any other scenario, enable only one Ethernet adapter.

To enable an Ethernet adapter:

a.     Access the server that hosts the SNA Center remotely.

b.     Access the configuration file of an Ethernet adapter. This procedure uses Ethernet adapter ens34 as an example.

[root@sna001 /]# vi /etc/sysconfig/network-scripts/ifcfg-ens34

c.     Set the BOOTPROTO field to none to specify no boot-time protocol and set the ONBOOT field to yes to activate the Ethernet adapter at system startup.

Figure 24 Modifying the configuration file of an Ethernet adapter
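The following is a minimal sketch of the relevant lines in the ifcfg-ens34 file after the change. All other lines generated by the system remain unchanged.

# Use no boot-time protocol for address assignment.
BOOTPROTO=none
# Activate the Ethernet adapter at system startup.
ONBOOT=yes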

 

d.     Execute the ifdown and ifup commands in sequence to restart the Ethernet adapter.

[root@sna001 /]# ifdown ens34

[root@sna001 /]# ifup ens34

e.     Execute the ifconfig command to verify that the Ethernet adapter is in UP state.

Deploying a component

Uploading the component installation package

1.     Log in to the SNA Center, click Settings, and then select Components > Components.

2.     Click Upload to upload the component installation package and then click Next.

Figure 25 Uploading the component installation package

 

Selecting a component

Select the component to be installed, configure the other settings as required, and then click Next.

Figure 26 Selecting a component (1)

 

Figure 27 Selecting a component (2)

 

Table 11 Component description

·     Campus network: Specify the VCF controller for setting up a campus network to implement campus network automation, user access control automation, and policy automation. You must specify a network scheme.

¡     Layer 2 network scheme: Enables automatically deployed devices to communicate with the VCFC-Campus controller at Layer 2. The controller can use devices' IP addresses to manage the devices automatically. You must configure a management network and a default network. For information about the network diagram, see Figure 28.

¡     Layer 3 network scheme: Enables automatically deployed devices to communicate with the VCFC-Campus controller at Layer 3. You must configure routes in the automated device association template to enable online devices to reach the controller in the management network. For information about the network diagram, see Figure 29.

·     Cloud DC: Specify the VCF controller for setting up a cloud DC network to implement DC network automation and dynamically manage virtual networks and network services.

·     WAN: Specify the VCF controller for setting up a WAN to implement service automation and intelligent traffic scheduling for WAN backbone networks, vertical networks, and branch networks.

·     Intelligent Analysis Engine: Specify the intelligent analysis engine, which collects network data through telemetry technologies, and analyzes and processes the data through big data and AI to implement intelligent assurance and prediction for network services.

·     Public Service: Specify services shared by multiple scenarios mentioned above. Options include Config Channel and vDHCP Server. vDHCP Server is used for automated device deployment. The SNA Center deploys a dual-node cluster for the vDHCP servers based on the predefined policies.

 

Figure 28 Layer 2 campus network

 

Figure 29 Layer 3 campus network

 

Configuring network settings

1.     Determine the number of networks and subnets to be created.

¡     For the Layer 2 campus network, create two networks, one as the management network and the other as the default network.

¡     For the Layer 3 campus network, cloud DC, and WAN, create one network.

¡     For the Intelligent Analysis Engine, no network is required. Click Next to proceed to the next step.

2.     Create networks and subnets and configure network settings.

If you create different networks that use the same uplink interface, configure different VLANs for the networks to isolate network traffic.

The subnet address range and address pool are used to assign IP addresses to components. Make sure the address range of each subnet contains a minimum of 32 addresses. As a best practice, use Table 12 to determine the address pool size for a subnet.

Table 12 Number of IP addresses required for a subnet

·     VCFC-Campus:

¡     Number of default network IP addresses: Number of cluster members + 1 (secondary cluster IP).

¡     Number of management network IP addresses: Number of cluster members + 1 (cluster IP).

¡     Maximum cluster members: 32. Default cluster members: 3.

·     VCFC-DataCenter:

¡     Number of default network IP addresses: N/A.

¡     Maximum cluster members: 32. Default cluster members: 3.

·     VCFC-WAN:

¡     Number of default network IP addresses: N/A.

¡     Maximum cluster members: 5. Default cluster members: 3.

·     vDHCP Server:

¡     Number of default network IP addresses: Number of cluster members.

¡     Maximum cluster members: 2. Default cluster members: 2.

 

The total number of IP addresses required for a subnet is the sum of IP addresses required by all components for the subnet scenario. For information about components for a scenario, see Table 10.

For example, to deploy a Layer 2 campus network, if the VCFC-Campus cluster has 32 members and the vDHCP cluster has 2 members, the number of required IP addresses for a subnet is as follows:

¡     For a default network subnet: 32 + 1 + 2 = 35.

¡     For a management network subnet: 32 + 1 + 3 = 36.

3.     Click Next.

Figure 30 Configuring network settings (1)

 

Figure 31 Configuring network settings (2)

 

Binding networks

Bind networks to the components and then click Next.

 

IMPORTANT:

·     For vDHCP services to be used by the campus network, specify the same management network, default network, and subnets for the VCFC-Campus and the vDHCP components.

·     In a cloud DC scenario, to deploy the VCFC-DataCenter and vDHCP components on different networks, you must specify a management network for each of the two components and ensure Layer 3 reachability between the two components.

 

Figure 32 Binding networks to components (1)

 

Figure 33 Binding networks to components (2)

 

Verifying the configuration

On the page that opens, verify that the component settings are correct, and then click Deploy.

A component automatically gets an IP address from the IP address pool of the subnet bound to it. To modify the IP address, click Modify and then specify another IP address for the component. The IP address specified must be in the IP address range of the subnet bound to the component.

Figure 34 Verifying the configuration (1)

 

The system starts to deploy the components. For a Layer 2 campus network, the system assigns an IP address in the management network as the primary cluster IP and an IP address in the default network as the secondary cluster IP. For a Layer 3 campus network or a cloud DC network, the system assigns an IP address in the management network as the primary cluster IP.

The page displays the deployment progress as shown in Figure 35.

Figure 35 Deployment progress

 

Viewing deployed component information

1.     Select Components > Components.

2.     Click the right chevron button  to expand component information.

Figure 36 Deployed component list

 

3.     To view detailed information, click the  icon for the target component.

 

Upgrading a component

1.     Log in to the SNA Center and then select Components > Components.

Figure 37 Deployed component list

 

2.     Click the right chevron button  for the target component to expand component information, and then click the upgrade icon .

3.     Upload the installation package.

4.     Select the package and then click Deploy.

Figure 38 Select an installation package for upgrade

 

5.     If the upgrade fails, click Roll Back to roll back to the previous version.

Uninstalling a component

1.     Log in to the SNA Center and select Components > Components.

2.     Select the target component and then click .

Figure 39 Uninstalling a component

 


Uninstalling the SNA Center

1.     Log in to the SNA Installer platform.

2.     Select Deploy from the top navigation bar and then select Application from the left navigation pane.

3.     Click the delete icon  for the SNA Center application.

4.     In the confirmation dialog box that opens, click OK.

Figure 40 Uninstalling the SNA Center

 


Upgrading the software

This chapter describes procedures for upgrading the SNA Center, SNA Installer, and H3Linux OS. You can upgrade the SNA Installer alone or upgrade it together with the H3Linux OS.

Upgrading the SNA Installer does not affect operation of the SNA Center.

Upgrading the SNA Center

To upgrade the SNA Center, you must uninstall the existing SNA Center and then install the new version. This operation retains the SNA Center configuration, and the deployed SNA Center components remain available after the upgrade.

To upgrade the SNA Center:

1.     Log in to the SNA Installer platform and uninstall the existing SNA Center. For more information, see "Uninstalling the SNA Center."

2.     Select Deploy from the top navigation bar and then select Application from the left navigation pane.

3.     Click the  icon to upload the new version of the SNA Center installation package.

The uploaded package will be displayed on the Deployment Procedure page.

Figure 41 Deployment process page

 

4.     Install the new version SNA Center. For more information, see "Installing the SNA Center."

Upgrading the SNA Installer platform

You can upgrade the SNA Installer platform on a node by using the following methods:

·     Upload image for upgrade: Upgrade through image uploading from the Web interface. You can use this method to upgrade only secondary master nodes. To upgrade a primary master node by using this method, first switch the primary master node to a secondary node.

·     Upgrade in background: Upgrade through platform reinstallation from the CLI. You can use this method to upgrade any master node.

Restrictions and guidelines

For the SNA Installer platform to operate correctly, do not power off or reboot the server that hosts the platform during the upgrade process.

The SNA Installer platform supports In-Service Software Upgrade (ISSU) to ensure service continuity during the upgrade. To perform an ISSU in an SNA Installer cluster, upgrade secondary master nodes first and then the primary master node. Make sure you upgrade these nodes one by one.

Prerequisites

Obtain the ISO image file for the SNA Installer platform, and decompress the package to obtain the matrix-version.rpm image file. version represents the version number.
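The following is a sketch of one way to extract the image file from the ISO on a Linux host. The directory layout inside the ISO is not fixed in this document, so locate the file with find instead of relying on a specific path.

[root@sna001 ~]# mount -o loop SNA_CENTER-PACKAGE-version.iso /mnt
[root@sna001 ~]# find /mnt -name "matrix-*.rpm"
[root@sna001 ~]# cp /mnt/<path-reported-by-find>/matrix-version.rpm /root/
[root@sna001 ~]# umount /mnt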

Upgrade through image uploading

1.     Log in to the SNA Installer platform. Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

2.     Click the set icon  for the target node, and then select Upgrade.

3.     In the dialog box that opens, select Upload image for upgrade.

Figure 42 Selecting an upgrade method

 

4.     Select the matrix-version.rpm image file, and then click Upload. After the upload is complete, click Apply to start the upgrade.

Figure 43 Uploading the image file

 

The node icon turns blue when the upgrade succeeds.

Figure 44 Upgrade completed

 

5.     Click the  icon at the upper right corner of the page and then select About SNA Installer to verify that the SNA Installer platform has been upgraded successfully.

Upgrade through platform reinstallation

Disabling a node to be upgraded

1.     Log in to the SNA Installer platform. Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

2.     Click the set icon  for the target node, and then select Disable.

Figure 45 Disabling a node

 

Upgrading a disabled node

1.     Access the CLI of the node that hosts the SNA Installer.

2.     Uninstall the SNA Installer platform.

[root@sna001 ~]# rpm -e matrix

3.     Copy or use a file transfer protocol (for example, FTP) to upload the matrix-version.rpm image file to the target directory on the node. The directory is /root in this example.
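For example, to copy the image file to the node with SCP from another host (192.168.10.11 is a placeholder for the IP address of the node to be upgraded):

scp matrix-V500R001B04D001.x86_64.rpm root@192.168.10.11:/root/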

4.     Access the directory that stores the image file and execute the rpm -ivh matrix-version.rpm command. The version is V500R001B04D001 (x86_64) in this example.

[root@sna001 ~]# rpm -ivh matrix-V500R001B04D001.x86_64.rpm

Preparing...                          ################################# [100%]

Updating / installing...

   1:matrix-V500R001B04D001-1         ################################# [100%]

Complete!

5.     Execute the systemctl status matrix command to view the installation result. The installation succeeds if the active state is active (running).

[root@sna001 ~]# systemctl status matrix

matrix.service - Matrix Server

   Loaded: loaded (/usr/lib/systemd/system/matrix.service; enabled; vendor preset: disabled)

   Active: active (running) since Sat 2019-06-29 01:15:21 CST; 2min 23s ago

 Main PID: 29028 (karaf)

...

Enabling an upgraded node

1.     Log in to the SNA Installer platform.

2.     Click Deploy on the top navigation bar and then select Cluster from the navigation pane.

3.     Click the set icon  for the target node, and then select Upgrade. In the dialog box that opens, select Upgrade in background and then click Apply.

Figure 46 Upgrading a node

 

4.     Click the set icon  for the node, and then select Enable. The node icon turns blue when the upgrade succeeds.

Figure 47 Upgrade completed

 

5.     Click the  icon at the upper right corner of the page and select About SNA Installer to verify that the SNA Installer platform has been upgraded successfully.

Upgrading the H3Linux OS and the SNA Installer platform

1.     Log in to the SNA Installer. Click Deploy on the top navigation bar and then select Cluster from the navigation pane. Click the set icon  for the target node, and then select Disable to disable the node.

2.     Obtain the ISO image file and reinstall the H3Linux OS and the SNA Installer platform. For more information, see "Installing the H3Linux operating system and the SNA Installer platform."

3.     From the SNA Installer GUI, click Deploy on the top navigation bar and then select Cluster from the navigation pane. Click the set icon  for the target node, and then select Upgrade to upgrade the node.

4.     After the upgrade finishes, click the set icon  for the node, and then select Enable to enable the node.

5.     Click the  icon at the upper right corner of the page and select About SNA Installer to verify that the SNA Installer platform has been upgraded successfully.


Appendix A SNA Center software dependencies

Table 13 SNA Center software dependencies

·     java openjdk: 1.8.0

·     zip: 3.0

·     unzip: 6.0

·     tar: 1.26-34.el7.x86_64

·     chrony: 3.2-2

·     ntpdate: 4.2.6p5

·     docker-client: 1.13.1

·     docker-common: 1.13.1

·     docker: 1.13.1
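To verify that the dependencies are present on a node after installation, you can query the RPM database. The following is a sketch; the package name for java openjdk (java-1.8.0-openjdk here) is an assumption and might differ on your system.

[root@sna001 ~]# rpm -q zip unzip tar chrony ntpdate docker docker-client docker-common java-1.8.0-openjdk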

 
