H3C SeerEngine-DC Controller Installation Guide (Unified Platform)-E36xx-5W609


About the SeerEngine-DC controller

SeerEngine-DC (also called VCFC-DataCenter) is a data center controller. Similar to a network operating system, the SeerEngine-DC controller allows users to develop and run SDN applications. It can control various resources on an OpenFlow network and provide interfaces for applications to enable specific network forwarding.

The controller has the following features:

·     It supports OpenFlow 1.3 and provides built-in services and a device driver framework.

·     It is a distributed platform with high availability and scalability.

·     It provides extensible REST APIs, GUI, and H3C IMC management interface.

·     It can operate in standalone or cluster mode.

 


Preparing for installation

Server requirements

Hardware requirements

CAUTION:

To avoid unrecoverable system failures caused by unexpected power failures, use a RAID controller that supports power fail protection on the servers and make sure a supercapacitor is in place.

 

Node to host the controller (x86 server)

Table 1 Hardware requirements for the node to host the controller (high-end configuration)

CPU: x86-64 (Intel 64/AMD 64), 20 cores, 2.2 GHz or above

Memory size: 128 GB or above

Drive: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 4*960 GB SSDs or 8*480 GB SSDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*480 GB SSDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 4*1200 GB or 8*600 GB 7.2K RPM or above HDDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*600 GB 7.2K RPM or above HDDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

¡     Storage controller: 1 GB cache, power fail protected with a supercapacitor installed.

NIC:

·     vBGP not configured:

¡     Non-bonding mode: 1 × 10 Gbps or above Ethernet port

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps Linux bonding interfaces

·     vBGP configured:

¡     Non-bonding mode: 2 × 10 Gbps or above Ethernet ports

¡     Bonding mode (recommended mode: mode 4): 4 × 10 Gbps Linux bonding interfaces

For the controller to support the remote disaster recovery system (RDRS), add an extra 10 Gbps or above Ethernet port.

 

Table 2 Hardware requirements for the node to host the controller (standard configuration)

CPU: x86-64 (Intel 64/AMD 64), 16 cores, 2.0 GHz or above

Memory size: 128 GB or above

Drive: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 4*960 GB SSDs or 8*480 GB SSDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*480 GB SSDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 4*1200 GB or 8*600 GB 7.2K RPM or above HDDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*600 GB 7.2K RPM or above HDDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

¡     Storage controller: 1 GB cache, power fail protected with a supercapacitor installed.

NIC:

·     vBGP not configured:

¡     Non-bonding mode: 1 × 10 Gbps or above Ethernet port

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps Linux bonding interfaces

·     vBGP configured:

¡     Non-bonding mode: 2 × 10 Gbps or above Ethernet ports

¡     Bonding mode (recommended mode: mode 4): 4 × 10 Gbps Linux bonding interfaces
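
The bonding modes referenced above are standard Linux bonding modes (mode 2 is balance-xor, mode 4 is 802.3ad/LACP). The following is a minimal sketch of a mode 4 bond built from two 10-Gbps interfaces by using ifcfg files on an H3Linux (CentOS 7 based) system. The interface names (ens1f0 and ens1f1), bond name, and IP address are examples only; for the authoritative NIC and bonding setup, see H3C Unified Platform Deployment Guide.

# /etc/sysconfig/network-scripts/ifcfg-bond0 (example)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100"
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.11
PREFIX=24

# /etc/sysconfig/network-scripts/ifcfg-ens1f0 (create a similar file for ens1f1)
DEVICE=ens1f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes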

 

Node to host the controller (ARM server)

Table 3 Hardware requirements for the node to host the controller (high-end configuration)

CPU: ARM (Huawei Kunpeng architecture), 2*Kunpeng 920 (48 cores, 2.6 GHz)

Memory size: 384 GB (12*32 GB)

Drive: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 4*960 GB SSDs or 8*480 GB SSDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*480 GB SSDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 4*1200 GB or 8*600 GB 7.2K RPM or above HDDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*600 GB 7.2K RPM or above HDDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

¡     Storage controller: 1 GB cache, power fail protected with a supercapacitor installed.

NIC:

·     vBGP not configured:

¡     Non-bonding mode: 1 × 10 Gbps or above Ethernet port

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps Linux bonding interfaces

·     vBGP configured:

¡     Non-bonding mode: 2 × 10 Gbps or above Ethernet ports

¡     Bonding mode (recommended mode: mode 4): 4 × 10 Gbps Linux bonding interfaces

 

Table 4 Hardware requirements for the node to host the controller (standard configuration)

CPU: ARM (Huawei Kunpeng architecture), 2*Kunpeng 920 (24 cores, 2.6 GHz)

Memory size: 128 GB (4*32 GB)

Drive: The drives must be configured in RAID 1 or 10.

·     Drive configuration option 1:

¡     System drive: 4*960 GB SSDs or 8*480 GB SSDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*480 GB SSDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 4*1200 GB or 8*600 GB 7.2K RPM or above HDDs configured in RAID 10 that provides a minimum total drive size of 1920 GB.

¡     etcd drive: 2*600 GB 7.2K RPM or above HDDs configured in RAID 1 that provides a minimum total drive size of 50 GB. (Installation path: /var/lib/etcd.)

¡     Storage controller: 1 GB cache, power fail protected with a supercapacitor installed.

NIC:

·     vBGP not configured:

¡     Non-bonding mode: 1 × 10 Gbps or above Ethernet port

¡     Bonding mode (recommended mode: mode 2 or mode 4): 2 × 10 Gbps Linux bonding interfaces

·     vBGP configured:

¡     Non-bonding mode: 2 × 10 Gbps or above Ethernet ports

¡     Bonding mode (recommended mode: mode 4): 4 × 10 Gbps Linux bonding interfaces

 

RDRS third-party site

You can deploy the RDRS third-party site on the same server as the primary or backup site. However, automatic RDRS switchover will fail when the server fails. As a best practice, deploy the RDRS third-party site on a separate server.

The RDRS third-party site can be deployed on a physical server. Table 5 describes the hardware requirements for a physical server to host the RDRS third-party site.

Table 5 Hardware requirements for a physical server to host the RDRS third-party site

CPU: x86-64 (Intel 64/AMD 64), 2 cores, 2.0 GHz or above

Memory size: 16 GB or above

Drive: The drives must be configured in RAID 1, 5, or 10.

·     Drive configuration option 1:

¡     System drive: 2*480 GB SSDs configured in RAID 1 that provides a minimum total drive size of 256 GB.

¡     etcd drive: 2*480 GB SSDs configured in RAID 1 that provides a minimum total drive size of 20 GB. (Installation path: /var/lib/etcd.)

·     Drive configuration option 2:

¡     System drive: 2*600 GB 7.2K RPM or above HDDs configured in RAID 1 that provides a minimum total drive size of 256 GB.

¡     etcd drive: 2*600 GB 7.2K RPM or above HDDs configured in RAID 1 that provides a minimum total drive size of 20 GB. (Installation path: /var/lib/etcd.)

¡     Storage controller: 1 GB cache, power fail protected with a supercapacitor installed.

NIC: 1 × 10 Gbps or above Ethernet port

 

Software requirements

SeerEngine-DC runs on the Unified Platform as a component. Before deploying SeerEngine-DC, first install the Unified Platform.

Client requirements

You can access the Unified Platform from a Web browser without installing any client. As a best practice, use Google Chrome 55 or a later version.

Pre-installation checklist

Table 6 Pre-installation checklist

Server hardware:

·     The CPUs, memory, drives, and NICs meet the requirements.

·     The server supports the Unified Platform.

Server software: The system time settings are configured correctly. As a best practice, configure NTP for time synchronization and make sure the devices synchronize to the same clock source.

Client: You can access the Unified Platform from a Web browser without installing any client. As a best practice, use Google Chrome 55 or a later version.
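
To verify the time settings before installation, you can run the following commands on each server. This is a quick check only and assumes chronyd is used as the NTP client, which is the default on CentOS 7 based systems such as H3Linux.

[root@node1 ~]# timedatectl

[root@node1 ~]# chronyc sources -v

In the chronyc output, the source marked with an asterisk (*) is the clock source currently synchronized to. Make sure all nodes select the same source.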

 


Deployment procedure at a glance

Use the following procedure to deploy the controller:

1.     Prepare for installation.

Prepare a minimum of three physical servers. Make sure the physical servers meet the hardware and software requirements as described in "Server requirements."

2.     Deploy the Unified Platform.

For the deployment procedure, see H3C Unified Platform Deployment Guide.

3.     Deploy the controller.

 


Installing the Unified Platform

Partitioning the system drive

Before installing the Unified Platform, partition the system drive as described in Table 7.

Table 7 Drive partition settings

Mount point          2400 GB RAID drive capacity    1920 GB RAID drive capacity
/var/lib/docker      500 GiB                        450 GiB
/boot                1024 MiB                       1024 MiB
swap                 1024 MiB                       1024 MiB
/var/lib/ssdata      550 GiB                        500 GiB
/                    1000 GiB                       700 GiB
/boot/efi            200 MiB                        200 MiB
/var/lib/etcd        48 GiB                         48 GiB
GFS                  300 GiB                        220 GiB
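
After the operating system is installed, you can verify that the partitions match the plan in Table 7. The following is a minimal check; device and volume names vary by server and RAID configuration.

[root@node1 ~]# lsblk

[root@node1 ~]# df -hT /boot /boot/efi / /var/lib/docker /var/lib/ssdata /var/lib/etcd

[root@node1 ~]# swapon --show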

 

(Optional.) Configuring HugePages

To use a vBGP component earlier than E1121, you must enable and configure HugePages. To deploy multiple vBGP clusters on a server, plan the HugePages configuration in advance to ensure adequate HugePages resources.

You do not need to configure HugePages if the vBGP component will not be deployed or if the vBGP component is E1121 or later. If SeerAnalyzer is to be deployed, do not configure HugePages. After you enable or disable HugePages, restart the server for the configuration to take effect. HugePages are enabled on a server by default.

H3Linux operating system

The H3Linux operating system supports two huge page sizes. Table 8 describes HugePages parameter settings for the two page sizes.

Table 8 HugePages parameter settings for different page sizes

·     1 G huge page size: 8 huge pages

·     2 M huge page size: 5120 huge pages

 

You must configure HugePages on each node, and then reboot the system for the configuration to take effect.

 

IMPORTANT:

To deploy the Unified Platform on an H3C CAS-managed VM, enable HugePages for the host and VM on the CAS platform and set the CPU operating mode for the VM to pass-through before issuing the HugePages configuration.

 

To configure HugePages on the H3Linux operating system:

1.     Execute the hugepage.sh script.

[root@node1 ~]# cd /etc

[root@node1 etc]#  ./hugepage.sh

2.     Set the memory size and number of pages for HugePages during the script execution process.

The default parameters are set as follows: default_hugepagesz=1G hugepagesz=1G hugepages=8 }

Do you want to reset these parameters? [Y/n] Y

Please enter the value of [ default_hugepagesz ],Optional units: M or G >> 1G

Please enter the value of [ hugepagesz ],Optional units: M or G >> 1G

Please enter the value of [ hugepages ], Unitless >> 8

3.     Reboot the server as prompted.

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet default_hugepagesz=1G hugepagesz=1G hugepages=8 "

Legacy update grub.cfg

Generating grub configuration file ...

Found linux image: /boot/vmlinuz-3.10.0-957.27.2.el7.x86_64

Found initrd image: /boot/initramfs-3.10.0-957.27.2.el7.x86_64.img

Found linux image: /boot/vmlinuz-0-rescue-664108661c92423bb0402df71ce0e6cc

Found initrd image: /boot/initramfs-0-rescue-664108661c92423bb0402df71ce0e6cc.img

done

update grub.cfg success,reboot now...

************************************************

GRUB_TIMEOUT=5

GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

GRUB_DEFAULT=saved

GRUB_DISABLE_SUBMENU=true

GRUB_TERMINAL_OUTPUT="console"

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet  default_hugepagesz=1G hugepagesz=1G hugepages=8 "

GRUB_DISABLE_RECOVERY="true"

************************************************

Reboot to complete the configuration? [Y/n] Y

4.     Verify that HugePages is configured correctly. If the output contains the configured HugePages parameters, the configuration is successful.

[root@node1 ~]# cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-957.27.2.el7.x86_64 root=UUID=6b5a31a2-7e55-437c-a6f9-e3bc711d3683 ro crashkernel=auto rhgb quiet default_hugepagesz=1G hugepagesz=1G hugepages=8

5.     Repeat the procedure to configure HugePages on each node.
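
Besides checking /proc/cmdline, you can confirm that the kernel has actually reserved the huge pages after the reboot. The following check assumes the default configuration of eight 1 GB pages.

[root@node1 ~]# grep -i hugepages /proc/meminfo

Expect HugePages_Total and HugePages_Free to show 8 and Hugepagesize to show 1048576 kB.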

Galaxy Kylin operating system

The Galaxy Kylin operating system supports the huge page size and parameter setting as described in Table 9.

Table 9 Huge page size and parameter setting supported by the Galaxy Kylin operating system

·     2 M huge page size: 5120 huge pages

 

You must configure HugePages on each node, and reboot the system for the configuration to take effect.

To configure HugePages on the Galaxy Kylin operating system:

1.     Open the grub file.

vim /etc/default/grub

2.     Edit the values of the default_hugepagesz, hugepagesz, and hugepages parameters at the line starting with GRUB_CMDLINE_LINUX.

GRUB_CMDLINE_LINUX="default_hugepagesz=2M hugepagesz=2M hugepages=5120"

3.     Update the configuration file.

¡     If the system boots in UEFI mode, execute the following command:

grub2-mkconfig -o /boot/efi/EFI/kylin/grub.cfg

¡     If the system boots in Legacy mode, execute the following command:

grub2-mkconfig -o /boot/grub2/grub.cfg

4.     Restart the server for the configuration to take effect.

5.     Execute the cat /proc/cmdline command to view the configuration result. If the result is consistent with your configuration, the configuration succeeds.

default_hugepagesz=2M hugepagesz=2M hugepages=5120

6.     Repeat the procedure to configure HugePages on each node.

Deploying the Unified Platform

To deploy the controller, first install the Unified Platform and then install the controller on the Unified Platform.

The Unified Platform can be installed on x86 or ARM servers. Select the installation packages specific to the server type as described in Table 10 and upload the selected packages. For the installation procedures of the packages, see H3C Unified Platform Deployment Guide.

The common_PLAT_GlusterFS_2.0, general_PLAT_portal_2.0, and general_PLAT_kernel_2.0 installation packages are required and must be deployed during the Unified Platform deployment process. For the installation package deployment procedure, see "Deploying the applications" in H3C Unified Platform Deployment Guide.

The general_PLAT_kernel-base_2.0, general_PLAT_Dashboard_2.0, and general_PLAT_widget_2.0 installation packages are required. They will be installed automatically during the controller deployment process. You only need to upload the packages.

To use the general_PLAT_network_2.0 installation package, deploy it on Installer. You can deploy it before or after SeerEngine-DC components are deployed. To avoid deployment failure, make sure the required and optional packages use the same version. For the deployment procedure, see "Deploying the applications" in H3C Unified Platform Deployment Guide.

Table 10 Installation packages required by the controller

Installation package

Description

·     x86: common_PLAT_GlusterFS_2.0_version.zip

·     ARM: common_PLAT_GlusterFS_2.0_version_arm.zip

Provides local shared storage functionalities.

·     x86: general_PLAT_portal_2.0_version.zip

·     ARM: general_PLAT_portal_2.0_version_arm.zip

Provides portal, unified authentication, user management, service gateway, and help center functionalities.

·     x86: general_PLAT_kernel_2.0_version.zip

·     ARM: general_PLAT_kernel_2.0_version_arm.zip

Provides access control, resource identification, license, configuration center, resource group, and log functionalities.

·     x86: general_PLAT_kernel-base_2.0_version.zip

·     ARM: general_PLAT_kernel-base_2.0_version_arm.zip

Provides alarm, access parameter template, monitoring template, report, email, and SMS forwarding functionalities.

·     x86: general_PLAT_network_2.0_version.zip

·     ARM: general_PLAT_network_2.0_version_arm.zip

(Optional.) Provides basic management of network resources, network performance, network topology, and iCC.

Install this application if you need to verify that the software versions match the solution.

·     x86: general_PLAT_Dashboard_2.0_version.zip

·     ARM: general_PLAT_Dashboard_2.0_version_arm.zip

Provides the dashboard framework.

·     x86: general_PLAT_widget_2.0_version.zip

·     ARM: general_PLAT_widget_2.0_version_arm.zip

Provides dashboard widget management.

 


Deploying the controller

IMPORTANT:

·     The controller runs on the Unified Platform. You can deploy, upgrade, and uninstall it only on the Unified Platform.

·     Before deploying the controller, make sure the required applications have been deployed.

 

Preparing for deployment

Enabling NICs

If the server uses multiple NICs for connecting to the network, enable the NICs before deployment.

The procedure is the same for all NICs. The following procedure enables NIC ens34.

To enable a NIC:

1.     Access the server that hosts the Unified Platform.

2.     Access the NIC configuration file.

[root@node1 /]# vi /etc/sysconfig/network-scripts/ifcfg-ens34

3.     Set the BOOTPROTO field to none to not specify a boot-up protocol and set the ONBOOT field to yes to activate the NIC at system startup.

Figure 1 Modifying the configuration file of a NIC
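
The key fields of the configuration file are shown below. This is a minimal sketch for NIC ens34; other fields generated by the operating system, such as UUID and HWADDR, can remain unchanged.

# /etc/sysconfig/network-scripts/ifcfg-ens34 (key fields only)
TYPE=Ethernet
NAME=ens34
DEVICE=ens34
BOOTPROTO=none
ONBOOT=yes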

 

4.     Execute the ifdown and ifup commands in sequence to reboot the NIC.

[root@node1 /]# ifdown ens34

[root@node1 /]# ifup ens34

5.     Execute the ifconfig command to verify that the NIC is in up state.

Planning the networks

Network planning

Plan for the following three types of networks:

·     Calico network

Calico is an open source networking and network security solution for containers, VMs, and native host-based workloads. The Calico network is an internal network used for container interactions. The network segment of the Calico network is the IP address pool set for containers when the cluster is deployed. The default network segment is 177.177.0.0. You do not need to configure an address pool for the Calico network when installing and deploying the controller. The Calico network and MACVLAN network can use the same network interface.

·     MACVLAN network

The MACVLAN network is used as a management network.

The MACVLAN virtual network technology allows you to bind multiple IP and MAC addresses to one physical network interface. Some applications, especially legacy applications or applications that monitor network traffic, require a direct connection to the physical network. You can use the MACVLAN network driver to assign a MAC address to the virtual network interface of each container, making the virtual network interface appear to be a physical network interface directly connected to the physical network. The physical network interface must support promiscuous mode, which allows multiple MAC addresses to be bound to one physical interface (see the check commands after this list).

·     (Optional.) OVS-DPDK network

The OVS-DPDK type network is used as a management network.

Open vSwitch is a multi-layer virtual switch that supports SDN control semantics through the OpenFlow protocol and its OVSDB management interface. DPDK provides a set of user-space libraries that enable faster development of high-speed packet processing applications. By integrating DPDK and Open vSwitch, the OVS-DPDK network architecture accelerates OVS data stream forwarding.
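
As noted for the MACVLAN network, the uplink physical interface must support promiscuous mode. The commands below show how to check the flag and enable it manually if needed. The interface name ens34 is an example, and the deployment normally handles this setting automatically.

[root@node1 ~]# ip link show ens34

[root@node1 ~]# ip link set dev ens34 promisc on

If the PROMISC flag appears in the ip link output, promiscuous mode is enabled.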

 

NOTE:

·     For vBGP component version E1121 or later, the required management network type is MACVLAN, and you do not need to configure HugePages.

·     For vBGP component versions earlier than E1121, the required management network type is OVS-DPDK, and you must configure HugePages before deployment. For the configuration procedure, see "(Optional.) Configuring HugePages."

 

The required management networks depend on the deployed components and application scenarios. Plan the network address pools before deployment.

Table 11 Network types and numbers used by components in the non-RDRS scenario

SeerEngine-DC: one MACVLAN (management network). The SeerEngine-DC and vDHCP components can use the same MACVLAN network as the management network.

vDHCP: one MACVLAN (management network).

vBGP (E1121 or later):

·     Management network and service network converged: one MACVLAN (management network), used for communication between the vBGP and SeerEngine-DC components and for service traffic transmission.

·     Management network and service network separated: one MACVLAN (management network), used for communication between the vBGP and SeerEngine-DC components; one MACVLAN (service network), used for service traffic transmission.

vBGP (earlier than E1121):

·     Management network and service network converged: one OVS-DPDK (management network), used for communication between the vBGP and SeerEngine-DC components and for service traffic transmission.

·     Management network and service network separated: one OVS-DPDK (management network), used for communication between the vBGP and SeerEngine-DC components; one OVS-DPDK (service network), used for service traffic transmission.

 

Figure 2 Cloud data center networks in the non-RDRS scenario (only vBGP deployed, management and service networks converged)

 

To deploy RDRS, follow these guidelines to plan the networks:

·     Use the same IP address for the vDHCP components at the primary and backup sites.

·     As a best practice, configure separate MACVLAN-type management networks for the SeerEngine-DC and vDHCP components. If the two MACVLAN networks share a NIC, configure VLANs to isolate the networks.

·     Configure a separate MACVLAN network as the RDRS network. The RDRS network is used to synchronize data between the primary and backup sites. Ensure connectivity between the RDRS networks at the primary site and backup site. If the RDRS and management networks use the same NIC, configure VLANs to isolate the networks. As a best practice, use a separate NIC for the RDRS network as shown in Table 12 and Figure 3.

Table 12 Network types and numbers used by components at the primary/ backup site in the RDRS scenario

SeerEngine-DC:

·     One MACVLAN (management network). As a best practice, configure separate MACVLAN-type management networks for the SeerEngine-DC and vDHCP components.

·     One MACVLAN (RDRS network), used for carrying traffic for real-time data synchronization between the primary and backup sites and for communication between the RDRS networks at the primary and backup sites. As a best practice, use a separate network interface.

vDHCP: one MACVLAN (management network). As a best practice, configure separate MACVLAN-type management networks for the SeerEngine-DC and vDHCP components.

vBGP (E1121 or later):

·     Management network and service network converged: one MACVLAN (management network), used for communication between the vBGP and SeerEngine-DC components and for service traffic transmission.

·     Management network and service network separated: one MACVLAN (management network), used for communication between the vBGP and SeerEngine-DC components; one MACVLAN (service network), used for service traffic transmission.

vBGP (earlier than E1121):

·     Management network and service network converged: one OVS-DPDK (management network), used for communication between the vBGP and SeerEngine-DC components and for service traffic transmission.

·     Management network and service network separated: one OVS-DPDK (management network), used for communication between the vBGP and SeerEngine-DC components; one OVS-DPDK (service network), used for service traffic transmission.

 

Figure 3 Network planning for the cloud data center scenario (to deploy vBGP and RDRS)

 

 

IP address planning

Use Table 13 (non-RDRS scenario) or Table 14 (RDRS scenario) to calculate the number of IP addresses required for each MACVLAN or OVS-DPDK subnet.
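
For example, assume a non-RDRS deployment with a three-node SeerEngine-DC cluster, a two-member vDHCP cluster that shares the SeerEngine-DC management network, and a two-member vBGP (E1121 or later) cluster with converged management and service networks. Based on Table 13, the subnets must provide at least the following numbers of IP addresses:

·     SeerEngine-DC management network: 3 cluster nodes + 1 cluster IP = 4 IP addresses.

·     vDHCP management network: 2 cluster nodes + 1 cluster IP = 3 IP addresses.

·     Shared MACVLAN management subnet: 4 + 3 = 7 IP addresses in total.

·     vBGP management network (converged): 2 cluster nodes + 1 cluster IP = 3 IP addresses on its own subnet.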

Table 13 IP addresses required for the networks in the non-RDRS scenario

SeerEngine-DC, MACVLAN (management network): maximum team members 32, default team members 3; IP addresses required: number of cluster nodes + 1 (cluster IP). The SeerEngine-DC and vDHCP components can use the same MACVLAN network as the management network.

vDHCP, MACVLAN (management network): maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

vBGP (E1121 or later):

·     Management network and service network converged: MACVLAN (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

·     Management network and service network separated: MACVLAN (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes. MACVLAN (service network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

vBGP (earlier than E1121):

·     Management network and service network converged: OVS-DPDK (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

·     Management network and service network separated: OVS-DPDK (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes. OVS-DPDK (service network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

 

Table 14 IP addresses required for the networks at the primary/backup site in the RDRS scenario

SeerEngine-DC:

·     MACVLAN (management network): maximum team members 32, default team members 3; IP addresses required: number of cluster nodes + 1 (cluster IP). As a best practice, configure separate MACVLAN-type management networks for the SeerEngine-DC and vDHCP components.

·     MACVLAN (RDRS network): maximum team members 32, default team members 3; IP addresses required: number of cluster nodes. A separate network interface is required.

vDHCP, MACVLAN (management network): maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP). As a best practice, configure separate MACVLAN-type management networks for the SeerEngine-DC and vDHCP components.

vBGP (E1121 or later):

·     Management network and service network converged: MACVLAN (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

·     Management network and service network separated: MACVLAN (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes. MACVLAN (service network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

vBGP (earlier than E1121):

·     Management network and service network converged: OVS-DPDK (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

·     Management network and service network separated: OVS-DPDK (management network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes. OVS-DPDK (service network), maximum team members 2, default team members 2; IP addresses required: number of cluster nodes + 1 (cluster IP).

 

Table 15 shows an example of IP address planning for a non-RDRS scenario where the vBGP management network and service network are converged.

Table 15 IP address planning for the non-RDRS scenario

SeerEngine-DC/vDHCP, MACVLAN (management network): subnet 10.0.234.0/24 (gateway 10.0.234.254), network address pool 10.0.234.11 to 10.0.234.32. The SeerEngine-DC and vDHCP components can use the same MACVLAN network as the management network.

vBGP (E1121 or later), MACVLAN (management network): subnet 192.168.13.0/24 (gateway 192.168.13.1), network address pool 192.168.13.101 to 192.168.13.132. Management network and service network are converged.

vBGP (earlier than E1121), OVS-DPDK (management network): subnet 192.168.13.0/24 (gateway 192.168.13.1), network address pool 192.168.13.101 to 192.168.13.132. Management network and service network are converged.

 

Table 16 shows an example of IP address planning for an RDRS scenario where the vBGP management network and service network are converged.

Table 16 IP address planning for the RDRS scenario

Primary site:

·     SeerEngine-DC, MACVLAN (management network): subnet 10.0.234.0/24 (gateway 10.0.234.254), address pool 10.0.234.11 to 10.0.234.32. Make sure the primary and backup sites use different IP addresses for the RDRS networks and controllers.

·     SeerEngine-DC, MACVLAN (RDRS network): subnet 192.168.16.0/24 (gateway 192.168.16.1), address pool 192.168.16.101 to 192.168.16.132. As a best practice, use a separate network interface.

·     vDHCP, MACVLAN (management network): subnet 10.0.233.0/24 (gateway 10.0.233.254), address pool 10.0.233.6 to 10.0.233.38. The vDHCP components at the primary and backup sites use the same IP address.

·     vBGP (E1121 or later), MACVLAN (management network): subnet 192.168.13.0/24 (gateway 192.168.13.1), address pool 192.168.13.101 to 192.168.13.132. Management network and service network are converged.

·     vBGP (earlier than E1121), OVS-DPDK (management network): subnet 192.168.13.0/24 (gateway 192.168.13.1), address pool 192.168.13.101 to 192.168.13.132. Management network and service network are converged.

Backup site:

·     SeerEngine-DC, MACVLAN (management network): subnet 10.0.234.0/24 (gateway 10.0.234.254), address pool 10.0.234.33 to 10.0.234.54. Make sure the primary and backup sites use different IP addresses for the RDRS networks and controllers.

·     SeerEngine-DC, MACVLAN (RDRS network): subnet 192.168.16.0/24 (gateway 192.168.16.1), address pool 192.168.16.133 to 192.168.16.164. As a best practice, use a separate network interface.

·     vDHCP, MACVLAN (management network): subnet 10.0.233.0/24 (gateway 10.0.233.254), address pool 10.0.233.6 to 10.0.233.38. The vDHCP components at the primary and backup sites use the same IP address.

·     vBGP (E1121 or later), MACVLAN (management network): subnet 192.168.13.0/24 (gateway 192.168.13.1), address pool 192.168.13.133 to 192.168.13.164. Management network and service network are converged.

·     vBGP (earlier than E1121), OVS-DPDK (management network): subnet 192.168.13.0/24 (gateway 192.168.13.1), address pool 192.168.13.133 to 192.168.13.164. Management network and service network are converged.

 

IMPORTANT:

·     The management networks of SeerEngine-DC and vBGP are on different network segments. You must configure routing entries on the connected switches to enable Layer 3 communication between the SeerEngine-DC management network and vBGP management network.

·     If two MACVLAN networks share a NIC, configure the port on the switch that connects to the server as a trunk port, configure the port on the server that connects to the switch to work in hybrid mode, and configure VLAN and VLAN interface settings on the switch.

·     For RDRS to operate correctly, make sure the IP addresses of RDRS networks at the primary and backup sites do not overlap with the IP address of the SeerEngine-DC component, and the vDHCP components at the primary and backup sites use the same IP address.

 

Deploying the controller

1.     Log in to the Unified Platform. Click System > Deployment.

2.     Obtain the SeerEngine-DC installation packages. Table 17 provides the names of the installation packages. Make sure you select installation packages specific to your server type.

Table 17 Installation packages

Component

Installation package name

SeerEngine-DC

·     x86: SeerEngine_DC-version-MATRIX.zip

·     ARM: SeerEngine_DC-version-ARM64.zip

vBGP (optional)

·     x86: vBGP-version.zip

·     ARM: vBGP-version-ARM64.zip

 

3.     Click Upload to upload the installation package and then click Next.

4.     Select Cloud DC. To deploy the vBGP component simultaneously, select vBGP and select a network scheme for vBGP deployment. For the controller to support RDRS, select Support RDRS. Then click Next.

Figure 4 Selecting components

 

5.     Configure the MACVLAN networks and add the uplink interfaces according to the network plan in "Planning the networks." If you are deploying a vBGP version earlier than E1121, configure OVS-DPDK networks instead.

To deploy RDRS, configure the network settings as follows:

¡     Configure a MACVLAN management network separately for the SeerEngine-DC and vDHCP components.

¡     Specify a VLAN for the MACVLAN network configured for the vDHCP component, and make sure the VLAN ID is different from the PVID.

¡     Add the same uplink interface for the two MACVLAN networks.

¡     Configure a separate MACVLAN network as the RDRS network.

Figure 5 Configuring a MACVLAN management network for the SeerEngine-DC component

 

Figure 6 Configuring a MACVLAN management network for the vDHCP component

 

Figure 7 Configuring an RDRS network

 

Figure 8 Configuring a MACVLAN network for vBGP (E1121 or later)

 

Figure 9 Configuring an OVS-DPDK network for vBGP (E1121 earlier)

 

6.     On the Bind to Nodes page, select whether to enable node binding. If you enable node binding, select a minimum of three master nodes to host and run microservice pods.

Figure 10 Enabling node binding

 

7.     Bind networks to the components, assign IP addresses to the components, and then click Next.

Figure 11 Binding networks (1)

 

Figure 12 Binding networks (2)

 

8.     On the Confirm Parameters tab, verify network information, configure the RDRS status, and specify a VRRP group ID for the components.

A component automatically obtains an IP address from the IP address pool of the subnet bound to it. To modify the IP address, click Modify and then specify another IP address for the component. The IP address specified must be in the IP address range of the subnet bound to the component.

You are required to configure the RDRS status for the controller if you have selected the Support RDRS option for it:

¡     Select Primary from the Status in RDRS list for a controller at the primary site.

¡     Select Backup from the Status in RDRS list for a controller at the backup site.

If vDHCP and vBGP components are to be deployed, you are required to specify a VRRP group ID in the range of 1 to 255 for the components. The VRRP group ID must be unique within the same network.

9.     Click Deploy.

Figure 13 Deployment in progress

 

 

NOTE:

The general_PLAT_kernel-base_2.0, general_PLAT_Dashboard_2.0, and general_PLAT_widget_2.0 installation packages will be installed automatically during the controller deployment process. You only need to upload the packages.

 

 


Accessing the controller

After the controller is deployed on the Unified Platform, the controller menu items will be loaded on the Unified Platform. Then you can access the Unified Platform to control and manage the controller.

To access the controller:

1.     Enter the address for accessing the Unified Platform in the address bar and then press Enter.

By default, the login address is http://ucenter_ip_address:30000/central/index.html.

¡     ucenter_ip_address represents the northbound virtual IP address of the Unified Platform.

¡     30000 is the port number.

2.     Enter the username and password, and then click Log in.

The default username is admin and the default password is Pwd@12345.
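
To quickly verify from a client that the login page is reachable, you can run a command similar to the following. The IP address 10.0.234.100 is a hypothetical northbound virtual IP; replace it with your own.

curl -I http://10.0.234.100:30000/central/index.html

An HTTP response (for example, 200 or a redirect) indicates that the portal service is reachable. A timeout usually indicates a network or firewall issue.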

 


Registering and installing licenses

After you install the controller, you can use its complete features and functions for a 180-day trial period. After the trial period expires, you must get the controller licensed.

Installing the activation file on the license server

For the activation file request and installation procedure, see H3C Software Products Remote Licensing Guide.

Obtaining licenses

1.     Log in to the Unified Platform and then click System > License Management > DC license.

2.     Configure the parameters for the license server as described in Table 18.

Table 18 License server parameters

IP address: Specify the IP address configured on the license server used for internal communication in the Unified Platform cluster.

Port number: Specify the service port number of the license server. The default value is 5555.

Username: Specify the client username configured on the license server.

Password: Specify the client password configured on the license server.
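
Before you click Connect in the next step, you can optionally verify from a Unified Platform node that the license server port is reachable. The address below is an example; 5555 is the default service port.

[root@node1 ~]# timeout 3 bash -c '</dev/tcp/192.168.12.200/5555' && echo "license server port reachable"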

 

3.     Click Connect to connect the controller to the license server.

The controller will automatically obtain licensing information after connecting to the license server.


Backing up and restoring the controller configuration

You can back up and restore the controller configuration on the Unified Platform. For the procedures, see H3C Unified Platform Deployment Guide.

 


Upgrading the controller

CAUTION:

·     The upgrade might cause service interruption. Be cautious when you perform this operation.

·     Before upgrading or scaling out the Unified Platform or the controller, specify the manual switchover mode for the RDRS if the RDRS has been created.

·     Do not upgrade the controllers at the primary and backup sites simultaneously if the RDRS has been created. Upgrade the controller at one site first, and upgrade the controller at the other site after data is synchronized between the two sites.

·     In an RDRS system, the IP addresses of the vDHCP components at the primary and backup sites must be the same. As a best practice, remove and reinstall the vDHCP component after upgrading the controller to support RDRS in an environment where the vDHCP component has been deployed.

 

This section describes the procedure for upgrading and uninstalling the controller. For the upgrading and uninstallation procedure for the Unified Platform, see H3C Unified Platform Deployment Guide.

The controller can be upgraded on the Unified Platform with the configuration retained.

To upgrade the controller:

1.     Log in to the Unified Platform. Click System > Deployment.

Figure 14 Deployment page

 

2.     Click the left chevron button for the controller to expand controller information, and then click the upgrade icon.

3.     Continue the upgrade procedure as guided by the system.

¡     If the controller already supports RDRS, the upgrade page is displayed.

# Upload and select the installation package.

# Select whether to enable Add Master Node-Component Bindings. The nodes that have been selected during controller deployment cannot be modified or deleted.

Figure 15 Adding node binding

 

# Click Deploy.

¡     If the controller does not support RDRS, the system displays a confirmation dialog box with a Support RDRS option.

Figure 16 Support RDRS option

 

-     If you leave the Support RDRS option unselected, the upgrade page is displayed. Upload and select the installation package and then click Upgrade.

-     If you select the Support RDRS option, perform the following steps:

# On the Configure Network tab, create a MACVLAN network as the RDRS network. Make sure the RDRS network and the management network are on different network segments.

 

# On the Bind Network tab, bind the controller to the corresponding RDRS network and subnet, and then click Next.

 

# On the Confirm Parameters tab, verify that the IP addresses assigned to the RDRS network are correct, and then click Next.

# On the Upgrade tab, upload and select the installation package or patch package, and then click Upgrade.

4.     If the upgrade fails, click Roll Back to roll back to the previous version.


Upgrading vBGP

vBGP can be upgraded on the Unified Platform with the configuration retained. For the network type used by each vBGP version and the adapted Unified Platform version, see Table 19.

Table 19 The network type and adapted Unified Platform for vBGP

·     vBGP E1121 or later: MACVLAN network; adapted Unified Platform version E0613H07 or later.

·     vBGP earlier than E1121: OVS-DPDK network; adapted Unified Platform version earlier than E0613H07.

 

vBGP upgrade includes the following two types:

·     No cross-E1121-version upgrade

The network type used by vBGP does not change during the upgrade. This applies when you upgrade from a version earlier than E1121 to another version earlier than E1121 (both use OVS-DPDK), or from E1121 or later to a later version (both use MACVLAN).

·     Cross-E1121-version upgrade

The network type used by vBGP must be changed from OVS-DPDK to MACVLAN during the upgrade. This applies when you upgrade from a vBGP version earlier than E1121 to E1121 or later.

No cross-E1121-version upgrade

To upgrade the vBGP:

1.     Log in to the Unified Platform. Click System > Deployment.

Figure 17 Deployment page

 

2.     Click the left chevron button for Cloud DC to expand component information, and then click the upgrade icon for the vBGP component.

3.     Upload and select the installation package.

Figure 18 Upgrade page


 

4.     Click Upgrade.

5.     If the upgrade fails, click Roll Back to roll back to the previous version.

Cross-E1121-version upgrade

To upgrade the vBGP:

1.     Upgrade the Unified Platform to E0613H07 or later, which supports vBGP using MACVLAN. For the upgrade procedure, see H3C Unified Platform Deployment Guide.

2.     Before uninstalling vBGP, make sure route non-aging has been configured in BGP view on the RR device. Otherwise, uninstalling vBGP will interrupt hybrid overlay traffic.

3.     Uninstall the current version of vBGP. For the uninstallation procedure, see "Uninstalling vBGP."

4.     After the uninstallation, log in to the Unified Platform and click System > Deployment. On the Deployment page, click Configure Network and then delete the OVS-DPDK network used by the original vBGP.

Figure 19 Deployment page

 

Figure 20 Delete OVS-DPDK network


 

5.     Obtain the installation package for vBGP E1121 or later and reinstall vBGP. For the installation procedure, see "Deploying the controller." Configure MACVLAN networks as the management and service networks for vBGP.

IMPORTANT:

Except for the network type, keep the vBGP deployment parameters consistent with those used before the uninstallation to avoid service exceptions after the upgrade.

 

 


Hot patching the controller

CAUTION:

·     Hot patching the controller might cause service interruption. To minimize service interruption, select the time to hot patch the controller carefully.

·     You cannot upgrade the controller to support RDRS through hot patching.

·     If you are to hot patch the controller after the RDRS is created, first specify the manual switchover mode for the RDRS.

·     Do not hot patch the controllers at the primary and backup sites at the same time after the RDRS is created. Upgrade the controller at one site first, and upgrade the controller at the other site only after data is synchronized.

 

On the Unified Platform, you can hot patch the controller with the configuration retained.

To hot patch the controller:

1.     Log in to the Unified Platform. Click System > Deployment.

Figure 21 Deployment page

 

2.     Click the left chevron button of the controller to expand controller information, and then click the hot patching icon.

3.     Upload the patch package and select the patch of the required version, and then click Upgrade.

Figure 22 Hot patching page

 

4.     If the upgrade fails, click Roll Back to roll back to the previous version or click Terminate to terminate the upgrade.


Uninstalling the controller

1.     Log in to the Unified Platform. Click System > Deployment.

2.     Click the icon to the left of the controller name and then click Uninstall.

Figure 23 Uninstalling the controller

 


Uninstalling vBGP

1.     Log in to the Unified Platform. Click System > Deployment.

2.     Click the icon to the left of the vBGP component and then click Uninstall.

Figure 24 Uninstalling vBGP


 


RDRS

About RDRS

A remote disaster recovery system (RDRS) provides disaster recovery services between the primary and backup sites. The controllers at the primary and backup sites back up each other. When the RDRS is operating correctly, data is synchronized between the site providing services and the peer site in real time. When the service-providing site becomes faulty because of power, network, or external link failure, the peer site immediately takes over to ensure service continuity.

The RDRS supports the following switchover modes:

·     Manual switchover—In this mode, the RDRS does not automatically monitor state of the controllers on the primary or backup site. You must manually control the controller state on the primary and backup sites by specifying the Switch to Primary or Switch to Backup actions. This mode requires deploying the Unified Platform of the same version on the primary and backup sites.

·     Auto switchover with arbitration—In this mode, the RDRS automatically monitors state of the controllers. Upon detecting a controller or Unified Platform failure (because of site power or network failure), the RDRS automatically switches controller state at both sites by using the third-party arbitration service. This mode also supports manual switchover. To use this mode, you must deploy the Unified Platform of the same version at the primary and backup sites and the third-party arbitration service.

The third-party arbitration service can be deployed on the same server as the primary or backup site. However, when the server is faulty, the third-party arbitration service might stop working. As a result, RDRS auto switchover will fail. As a best practice, configure the third-party arbitration service on a separate server.

Creating an RDRS

1.     Deploy the primary and backup sites and a third-party site.

2.     Deploy RDRS on the controllers.

3.     Create an RDRS.

Deploying the primary and backup sites and a third-party site

Restrictions and guidelines

Follow these restrictions and guidelines when you deploy the primary and backup sites and a third-party site:

·     The Unified Platform version, transfer protocol, username and password, and IP version of the primary and backup sites must be the same.

·     The arbitration service package on the third-party site must match the Unified Platform version on the primary and backup sites.

·     To use the auto switchover with arbitration mode, you must deploy a standalone Unified Platform as the third-party site, and deploy arbitration services on the site.

·     To use the allowlist feature in an RDRS scenario, you must add the IP addresses of all nodes on the backup site to the allowlist on the primary site, and add the IP addresses of all nodes on the primary site to the allowlist on the backup site.

·     To avoid service failure during a primary/backup switchover, you must configure a same IP address for the vDHCP components at the primary and backup sites.

Procedure

This procedure uses a separate server as the third-party site and deploys the Unified Platform in standalone mode on this site.

To deploy the primary and backup sites and a third-party site:

1.     Deploy Installer on primary and backup sites and the third-party site. For the deployment procedure, see H3C Unified Platform Deployment Guide.

2.     Deploy the Unified Platform on primary and backup sites. Specify the same NTP server for the primary and backup sites. For the deployment procedure, see H3C Unified Platform Deployment Guide.

3.     Deploy arbitration services on the third-party site.

a.     Log in to Installer.

b.     Select Deploy from the top navigation bar and then select Application from the left navigation pane.

c.     Click Upload to upload the arbitration service package SeerEngine_DC_ARBITRATOR-version.zip (for an x86 server) or SeerEngine_DC_ARBITRATOR-version-ARM64.zip (for an ARM server).

For some controllers, only one arbitration service package is available, either for an x86 server or for an ARM server. See the release notes for the service packages available for a controller.

d.     Click Next and then configure the parameters.

e.     Click Deploy.

Deploying RDRS on the controllers

Restrictions and guidelines

If the controller installed on the primary site does not support disaster recovery, click the upgrade icon on the controller management page to upgrade it to support disaster recovery. For the upgrade procedure, see "Upgrading the controller."

If the controller installed on the specified backup site does not support disaster recovery or is not in backup state, remove the controller and install it again.

The SeerEngine-DC installation package name and SeerEngine-DC version must be the same on the primary and backup sites.

Procedure

To deploy RDRS on the controller, select the Support RDRS option when deploying the controller and configure the primary and backup RDRS state for it. For the controller deployment procedure, see "Deploying the controller."

Creating an RDRS

Restrictions and guidelines

Ensure network connectivity between the primary and backup sites during the RDRS creation process. If the creation fails, first check the network connectivity between the primary and backup sites.

Do not create an RDRS at the primary and backup sites simultaneously.

You cannot back up or restore data on the RDRS configuration page, including the primary or backup site name, primary or backup site IP address, backup site username and password, and third-party site IP address.

After an RDRS is created, you cannot change the internal virtual IP of the cluster at the primary and backup sites and the node IPs.

Procedure

1.     Click System on the top navigation bar and then select RDRS from the navigation pane.

2.     In the Site Settings area, configure the primary, backup, and third-party site settings, and specify the switchover mode.

3.     Click Connect.

If the heartbeat link is successfully set up, the RDRS site settings have been configured successfully.

After the sites are built successfully, the backup site will automatically synchronize its user, log, and backup and restore settings to the primary site, with the exception of the log content.

4.     In the Disaster Recovery Components area, click Add to configure disaster recovery components.


Cluster 2+1+1 deployment

About cluster 2+1+1 deployment

The cluster 2+1+1 mode is a low-cost failure recovery solution. To set up this solution, deploy the three nodes of the DC controller cluster in two different cabinets or equipment rooms and reserve a standby node outside the cluster as a redundant node. When the cluster is operating correctly, leave the standby node unpowered. If two master nodes in the cluster fail at the same time, power on the standby node. The standby node joins the cluster quickly and the cluster services recover rapidly.

Figure 25 Cluster disaster recovery deployment

 

Deployment process

1.     Prepare four servers: three used for setting up the Unified Platform cluster and one used as the standby server.

2.     Install the four servers at different locations. As a best practice, install two of the servers for setting up the cluster in one cabinet (or equipment room), and the other server for setting up the cluster and the standby server in another cabinet (or equipment room).

3.     Install the Unified Platform on the three servers for setting up the cluster. For the installation procedure, see H3C Unified Platform Deployment Guide. As a best practice, assign IP addresses in the same network segment to the three servers and make sure they are reachable to each other.

4.     Deploy the SeerEngine-DC controller in the cluster. For the deployment procedure, see "Deploying the controller."

5.     Install the Installer platform on the standby server. Make sure the Installer version is consistent with that installed on the three cluster servers. You are not required to deploy the Unified Platform on the standby server.

Preparing for disaster recovery

1.     Record the host name, NIC name, IP address, and username and password of the three nodes in the cluster.

2.     Install Installer on the standby node. The Installer must be the same version as that installed on the cluster nodes.
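
The following commands can be used on each cluster node to collect the information required in step 1. Record the output so that the standby node can be configured identically when a rebuild is needed.

[root@node1 ~]# hostname

[root@node1 ~]# ip addr show

[root@node1 ~]# ip route show default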

 

IMPORTANT:

·     The drive letter and partitioning scheme of the standby node must be consistent with those of the cluster nodes.

·     If a Unified Platform patch version has been installed on the cluster nodes, use the following steps to install Installer on the standby node for the standby node to have the same version of Installer as the cluster nodes:

1.     Install the Unified Platform base version (E06xx/E07xx) ISO image.

2.     Uninstall Installer from the operating system of the host.

3.     Install the same version of Installer as that in the Unified Platform patch version on the operating system of the host.

 

Two node-failure recovery

In a cluster with three leader nodes as shown in Figure 26, if two nodes (for example, DC controllers 1 and 2) fail at the same time, the cluster cannot operate correctly. Only DC controller 3 is accessible and will automatically enter emergency mode. In emergency mode, you can only view and recover configuration data on the controller.

Figure 26 Failure of two nodes

 

To recover the cluster, perform the following steps:

1.     Power on the standby node (without connecting it to the management network) and verify that Installer has been installed on it. If not installed, see H3C Unified Platform Deployment Guide to install the Installer.

Do not configure any cluster-related settings on the standby node after Installer is installed on it.

2.     Verify that the host name, NIC name, IP address, and username and password of the standby node are exactly the same as those of the failed nodes, DC controller 1 in this example.

3.     Disconnect the network connections of the failed controllers 1 and 2, and connect the standby node to the management network.

4.     Log in to the Installer Web interface of controller 3, and then click Deploy > Cluster. Click the button for controller 1 and select Rebuild from the list. Then use one of the following methods to rebuild the node:

¡     Select and upload the same version of the software package as installed on the current node. Then click Apply.

¡     Select the original software package version and then click Apply.

5.     Log out to quit emergency mode. Then log in to the system again. As a best practice, use the VIP to access Installer.

6.     Repair or recover DC controller 2.

After the cluster resumes services, you can repair or recover DC controller 2.

¡     To use a new physical server to replace controller 2, you are required to log in to the Installer page to perform repair operations.

¡     If the file system of the original controller 2 can be restored and started correctly, the controller can automatically join the cluster after you power on it. Then the cluster will have three correctly operating controllers.

 

CAUTION:

·     After the nodes are rebuilt, the standby node will join the cluster as controller 1. The original controller 1 cannot join the cluster directly after failure recovery. As a best practice, format the drive on the original controller 1, install Installer on it, and use it as the new standby node.

·     If two controllers in the cluster are abnormal, you are not allowed to restart the only normal node. If the normal node is restarted, the cluster cannot be recovered through 2+1+1 disaster recovery.

 

 
