Contents
(Optional.) Configuring network settings
Registering and installing licenses
Installing the activation file on the license server
Backing up and restoring the controller configuration
Software upgrade and uninstallation
Upgrading the controller and DTN
Uninstalling SeerEngine-DC and DTN
Uninstalling the DTN component only
Scaling out or in the controller
Scaling out the controller from standalone mode to cluster mode
Scaling out the controller in cluster mode
Scaling in the controller in cluster mode
Deploying Unified Platform at the primary and backup sites
Deploying the controller at the primary and backup sites
Configuring disaster recovery components for the controller
Restrictions and guidelines for RDRS switchover
Moving RDRS to Unified Platform
License changes before and after RDRS moving from the controller to Unified Platform
Changing the license owner at an RDRS switchover
Cluster deployment over a Layer 3 network
Deploying the controller at Layer 3
About cluster 2+1+1 deployment
About the controller
SeerEngine-DC is a data center controller. Like a network operating system, it allows various SDN applications to run on it. It controls various resources on the network, provides APIs for applications, and implements specific network forwarding behaviors.
The controller has the following features:
· It supports OpenFlow 1.3 and provides built-in services and a device driver framework.
· It is a highly available, scalable distributed platform.
· It provides extensible REST APIs and GUI.
· It can be deployed in standalone or cluster mode.
NOTE: The terms "drive" and "disk" are used interchangeably in this document.
Preparing for installation
Server requirements
Hardware requirements
The controller can be deployed on a single server (standalone mode) or on a cluster of servers (cluster mode). As a best practice, deploy the controller on a cluster of three servers.
The controller supports RDRS, which provides disaster recovery services between the primary and backup sites. In the typical 3+3 RDRS mode, deploy three servers at each of the primary and backup sites.
The controller supports the simulation feature. To use it, deploy the DTN component and DTN physical hosts, which use virtual switches to simulate the real network environment at a 1:1 scale.
For more information about the hardware requirements in various deployment scenarios of the controller, see the AD-Net solution hardware configuration guide.
Software requirements
SeerEngine-DC runs on Unified Platform as a component. Before deploying SeerEngine-DC, first install Unified Platform. Table 1 describes the CPU and operating system compatibility.
Table 1 CPU and operating system compatibility
CPU | Supported operating systems | Recommended operating system
---|---|---
x86-64 (Intel64/AMD64) | H3Linux 1.1.2; H3Linux 2.0 (Unified Platform E0711 and later) | H3Linux 1.1.2
Disk partitioning
Configure the drives based on the requirements described in the AD-Net solution hardware configuration guide, and partition the drives based on the requirements provided in the following tables. Do not use automatic partitioning for drives.
Table 2 Drive partition configuration information (2400G partition)
RAID settings | Partition name | Mount point | Minimum capacity | Remarks
---|---|---|---|---
RAID 10, a minimum total capacity of 2400 GB after RAID setup | /dev/sda1 | /boot/efi | 200 MiB | EFI system partition, required only in UEFI mode.
 | /dev/sda2 | /boot | 1024 MiB | N/A
 | /dev/sda3 | / | 740 GiB | N/A
 | /dev/sda4 | /var/lib/docker | 460 GiB | N/A
 | /dev/sda6 | swap | 1024 MiB | N/A
 | /dev/sda7 | /var/lib/ssdata | 520 GiB | N/A
 | /dev/sda8 | /var/lib/dtn-virt-host | 160 GiB | N/A
 | /dev/sda9 | None | 300 GiB | Reserved for GlusterFS. Does not need to be configured during operating system installation.
RAID 1, a minimum total capacity of 50 GB after RAID setup | /dev/sdb | /var/lib/etcd | 50 GiB | The ETCD partition must occupy a separate physical disk if the controller version is earlier than E6203 and Unified Platform is earlier than E0706 (E06xx included). The ETCD partition can share a physical disk with the other partitions if the controller version is E6203 or later and Unified Platform is E0706 or later. As a best practice, configure the ETCD partition on a separate physical disk.
Table 3 Drive partition configuration information (1920G partition)
RAID settings | Partition name | Mount point | Minimum capacity | Remarks
---|---|---|---|---
RAID 10, a minimum total capacity of 1920 GB after RAID setup | /dev/sda1 | /boot/efi | 200 MiB | EFI system partition, required only in UEFI mode.
 | /dev/sda2 | /boot | 1024 MiB | N/A
 | /dev/sda3 | / | 550 GiB | N/A
 | /dev/sda4 | /var/lib/docker | 380 GiB | N/A
 | /dev/sda6 | swap | 1024 MiB | N/A
 | /dev/sda7 | /var/lib/ssdata | 420 GiB | N/A
 | /dev/sda8 | /var/lib/dtn-virt-host | 160 GiB | N/A
 | /dev/sda9 | None | 220 GiB | Reserved for GlusterFS. Does not need to be configured during operating system installation.
RAID 1, a minimum total capacity of 50 GB after RAID setup | /dev/sdb | /var/lib/etcd | 50 GiB | The ETCD partition must occupy a separate physical disk if the controller version is earlier than E6203 and Unified Platform is earlier than E0706 (E06xx included). The ETCD partition can share a physical disk with the other partitions if the controller version is E6203 or later and Unified Platform is E0706 or later. As a best practice, configure the ETCD partition on a separate physical disk.
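After the operating system is installed, you can verify that the partitions and mount points match the plan above. The following is a minimal sketch, assuming a Linux shell on the server; the exact device names depend on your RAID configuration:
[root@node1 /]# lsblk -o NAME,SIZE,MOUNTPOINT    # list block devices, sizes, and mount points
[root@node1 /]# df -h /var/lib/docker /var/lib/ssdata /var/lib/etcd    # confirm the key partitions are mounted with the expected capacities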
Client requirements
You can access Unified Platform from a Web browser without installing any client. As a best practice, use Google Chrome 70 or a later version.
Pre-installation checklist
Table 4 Pre-installation checklist
Item | Requirements
---|---
Server hardware | The CPUs, memory, drives, and network interfaces meet the requirements. The server supports Unified Platform.
Server software | The system time settings are configured correctly. As a best practice, configure NTP for time synchronization and make sure the devices synchronize to the same clock source.
Client | You can access Unified Platform from a Web browser without installing any client. As a best practice, use Google Chrome 70 or a later version.
Server and OS compatibility | To view the compatibility matrix between H3C servers and operating systems, visit http://www.h3c.com/en/home/qr/default.htm?id=65.
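To follow the NTP recommendation in the checklist, you can verify time synchronization on each server before deployment. The following is a sketch that assumes the server uses chronyd (common on CentOS-based systems; whether H3Linux uses chronyd or another NTP client depends on the OS version):
[root@node1 /]# timedatectl    # check the system clock and whether NTP synchronization is active
[root@node1 /]# chronyc sources -v    # list the configured NTP sources and their reachability (chronyd only)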
(Optional.) Configuring network settings
Enabling network interfaces
If the server uses multiple network interfaces for connecting to the network, enable the network interfaces before deployment.
To enable a network interface:
1. Access the server that hosts Unified Platform remotely.
2. Open and edit the configuration file of the network interface. In this example, the configuration file of network interface ens34 is edited.
[root@node1 /]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
3. Set the BOOTPROTO field to none to not specify a boot-up protocol and set the ONBOOT field to yes to activate the network interface at system startup.
Figure 1 Modifying the configuration file for a network interface
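The following is a minimal sketch of what the edited file might contain for interface ens34. Only the BOOTPROTO and ONBOOT fields are required by this step; the other fields shown are typical defaults and depend on your environment:
TYPE=Ethernet
NAME=ens34
DEVICE=ens34
BOOTPROTO=none    # do not use a boot-up protocol such as DHCP
ONBOOT=yes        # activate the interface at system startup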
4. Execute the ifdown and ifup commands in sequence to restart the network interface.
[root@node1 /]# ifdown ens34
[root@node1 /]# ifup ens34
5. Execute the ifconfig command to verify that the network interface is in up state.
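For example, to check interface ens34 (output omitted; look for the UP flag in the output):
[root@node1 /]# ifconfig ens34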
Planning the networks
Network planning
The deployment environment uses two types of networks: Calico networks and MACVLAN networks.
· Calico network
Calico is an open source networking and network security solution for containers, VMs, and native host-based workloads. The Calico network is an internal network used for container interactions. The network segment of the Calico network is the IP address pool set for containers when the cluster is deployed. The default network segment is 177.177.0.0. You do not need to configure an address pool for the Calico network when installing and deploying the controller. The Calico network and MACVLAN network can use the same network interface.
· MACVLAN network
The MACVLAN network is used as a management network.
The MACVLAN virtual network technology allows you to bind multiple IP and MAC addresses to a physical network interface. Some applications, especially legacy applications or applications that monitor network traffic, require a direct connection to the physical network. You can use the MACVLAN network driver to assign a MAC address to the virtual network interface of each container, making the virtual network interface appear to be a physical network interface directly connected to the physical network. The physical network interface must support promiscuous mode, which allows multiple MAC addresses to be bound to one physical interface.
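If the uplink interface does not already have promiscuous mode enabled, you can typically enable and verify it from the shell as shown below. This is a sketch only; ens34 is an example interface name, and on virtualized servers promiscuous mode might instead need to be allowed in the hypervisor's vSwitch settings:
[root@node1 /]# ip link set ens34 promisc on          # allow the NIC to accept frames for multiple MAC addresses
[root@node1 /]# ip link show ens34 | grep -i promisc  # the PROMISC flag confirms the setting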
The required management networks depend on the deployed components and application scenarios. Before deployment, plan the network address pools in advance.
Table 5 Network types and numbers used by components in the non-RDRS scenario
Component | Network type | Number of networks | Remarks
---|---|---|---
SeerEngine-DC | MACVLAN (management network) | 1 | N/A
vBGP (management network and service network converged) | MACVLAN (management network) | 1 × number of vBGP clusters | Used for communication between the vBGP and SeerEngine-DC components and for service traffic transmission. Each vBGP cluster requires a separate management network.
vBGP (management network and service network separated) | MACVLAN (management network) | 1 × number of vBGP clusters | Used for communication between the vBGP and SeerEngine-DC components. Each vBGP cluster requires a separate management network.
vBGP (management network and service network separated) | MACVLAN (service network) | 1 × number of vBGP clusters | Used for service traffic transmission. Each vBGP cluster requires a separate service network.
Digital Twin Network (DTN) | MACVLAN (simulation management network) | 1 | Used for simulation services. A separate network interface is required.
Figure 2 Cloud data center networks in the non-RDRS scenario (only vBGP deployed, management and service networks converged)
Figure 3 Cloud data center networks in the non-RDRS scenario (only DTN deployed)
IMPORTANT:
· The SeerEngine-DC management network and vBGP management network are on different network segments. You must configure routing entries on the switches connected to the network interfaces to enable Layer 3 communication between the SeerEngine-DC management network and vBGP management network.
· DTN does not support RDRS.
· If the simulation management network and the controller management network are connected to the same switch, you must configure VPN instances to separate them. If the simulation management network and the controller management network are connected to different switches, make sure the switches are physically isolated.
· Make sure the simulation management IP and the simulated device management IP are reachable to each other.
IP address planning
Use Table 6 as a best practice to calculate IP addresses required for the networks.
Table 6 IP addresses required for the networks in the non-RDRS scenario
Component | Network type | Maximum team members | Default team members | Number of IP addresses | Remarks
---|---|---|---|---|---
SeerEngine-DC | MACVLAN (management network) | 32 | 3 | Number of cluster nodes + 1 (cluster IP) | N/A
vBGP (management network and service network converged) | MACVLAN (management network) | 2 | 2 | Number of vBGP clusters × number of cluster nodes + number of vBGP clusters (cluster IPs) | Each vBGP cluster requires a separate management network.
vBGP (management network and service network separated) | MACVLAN (management network) | 2 | 2 | Number of vBGP clusters × number of cluster nodes | Each vBGP cluster requires a separate management network.
vBGP (management network and service network separated) | MACVLAN (service network) | 2 | 2 | Number of vBGP clusters × number of cluster nodes + number of vBGP clusters (cluster IPs) | Each vBGP cluster requires a separate service network.
DTN | MACVLAN (simulation management network) | 1 | 1 | 1 (cluster IP) | A separate network interface is required. For management IP address assignment for DTN hosts, see H3C SeerEngine-DC Simulation Network Deployment Guide.
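For example, with a three-node SeerEngine-DC cluster and a single two-node vBGP cluster that uses converged management and service networks (a hypothetical combination based on the default member counts above), the SeerEngine-DC management network needs 3 + 1 = 4 IP addresses and the vBGP management network needs 1 × 2 + 1 = 3 IP addresses.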
Table 7 shows an example of IP address planning for a single vBGP cluster in a non-RDRS scenario where the vBGP management network and service network are converged.
Table 7 IP address planning for the non-RDRS scenario
Component | Network type | IP addresses | Remarks
---|---|---|---
SeerEngine-DC | MACVLAN (management network) | Subnet: 192.168.12.0/24 (gateway 192.168.12.1); address pool: 192.168.12.101 to 192.168.12.132 | N/A
vBGP | MACVLAN (management network) | Subnet: 192.168.13.0/24 (gateway 192.168.13.1); address pool: 192.168.13.101 to 192.168.13.132 | Management network and service network are converged.
DTN | MACVLAN (simulation management network) | Subnet: 192.168.12.0/24 (gateway 192.168.12.1); address pool: 192.168.12.133 to 192.168.12.133 | A separate network interface is required. For management IP address assignment for DTN hosts, see H3C SeerEngine-DC Simulation Network Deployment Guide.
Deploying the controller
1. Install Unified Platform.
For the Unified Platform installation procedure, see H3C Unified Platform Deployment Guide.
You can manually deploy the optional application packages on the Matrix page, either before or after deploying the controller. Make sure the optional application package version matches the required package version to avoid deployment failure.
Table 8 Application installation packages required by the controller
Application installation packages | Description
---|---
x86: UDTP_Middle_version_x86.zip; ARM: UDTP_Middle_version_arm.zip | Middleware image repository service.
x86: UDTP_GlusterFS_version_x86.zip; ARM: UDTP_GlusterFS_version_arm.zip | Local shared storage service.
x86: UDTP_Core_version_x86.zip; ARM: UDTP_Core_version_arm.zip | Portal, unified authentication, user management, service gateway, help center, permissions, resource identities, licenses, configuration center, resource group, and log services.
x86: UDTP_IMonitor_version_x86.zip; ARM: UDTP_IMonitor_version_arm.zip | Self-monitoring service.
x86: BMP_Report_version_x86.zip; ARM: BMP_Report_version_arm.zip | Report service.
x86: BMP_Alarm_version_x86.zip; ARM: BMP_Alarm_version_arm.zip | Alarm service.
x86: BMP_Dashboard_version_x86.zip; ARM: BMP_Dashboard_version_arm.zip | Dashboard service.
x86: BMP_Widget_version_x86.zip; ARM: BMP_Widget_version_arm.zip | Dashboard widget service.
x86: BMP_Subscription_version_x86.zip; ARM: BMP_Subscription_version_arm.zip | Subscription service.
x86: BMP_Template_version_x86.zip; ARM: BMP_Template_version_arm.zip | Access parameter template and monitoring template services.
x86: BMP_IMonitor_version_x86.zip; ARM: BMP_IMonitor_version_arm.zip | (Optional.) Self-monitoring service. Select an IMonitor installation package based on the version of Unified Platform deployed: for Unified Platform 0715 or later, use the BMP_IMonitor installation package (optional); for Unified Platform versions earlier than 0715, use the UDTP_IMonitor installation package (required).
x86: BMP_NETCONF_version_x86.zip; ARM: BMP_NETCONF_version_arm.zip | (Optional.) NETCONF channel service and NETCONF configuration validity check service. To use these services, install this application.
x86: BMP_OneClickCheck_version_x86.zip; ARM: BMP_OneClickCheck_version_arm.zip | (Optional.) One-click inspection service.
x86: BMP_Region_version_x86.zip; ARM: BMP_Region_version_arm.zip | (Optional.) Hierarchical management service. If you plan to use hierarchical management, install NSM and use it together with Super Controller so that Super Controller can manage DC networks.
x86: BMP_Syslog_version_x86.zip; ARM: BMP_Syslog_version_arm.zip | (Optional.) Syslog management service (log viewing, alarm upgrade rules, and log parsing). Install this application package if you plan to use syslog management.
NOTE: To deploy optional components, you must add hardware resources to the nodes. For information about the hardware resources to add, see the AD-Net solution hardware configuration guide.
2. Enter the address for accessing Unified Platform in the address bar and then press Enter.
By default, the login address is http://ip_address:30000/central/index.html.
¡ ip_address represents the cluster northbound virtual IP address of Unified Platform.
¡ 30000 is the port number.
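For example, if the northbound virtual IP address of Unified Platform is 192.168.10.100 (the value used in the Layer 3 planning example later in this document), the login address is http://192.168.10.100:30000/central/index.html.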
3. Click System > Deployment.
4. Obtain the SeerEngine-DC installation packages. Table 9 provides the names of the installation packages. Make sure you select installation packages specific to your server type, x86 or ARM.
Table 9 Installation packages

Component | Installation package name | Remarks
---|---|---
SeerEngine-DC | x86: SeerEngine_DC-version-MATRIX.zip; ARM: SeerEngine_DC-version-ARM64.zip | Required.
vBGP | x86: vBGP-version.zip; ARM: vBGP-version-ARM64.zip | Optional.
DTN | x86: SeerEngine_DC_DTN-version.zip; ARM: SeerEngine_DC_DTN-version-ARM64.zip | Optional. Provides simulation services.
IMPORTANT:
· For some controller versions, the installation packages are released only for one server architecture, x86 or ARM.
· The DTN version must be consistent with the SeerEngine-DC version.
· ARM servers do not support multi-vBGP clusters.
5. Click Upload, click Select File in the dialog box that opens, select an installation package, and then click Upload to upload the installation package. After the upload finishes, click Next.
Figure 4 Uploading an installation package
6. Select Cloud Data Center and then select DC Controller. To deploy the vBGP component simultaneously, select vBGP and select a network scheme for vBGP deployment. To deploy the DTN component simultaneously, select Simulation. Then click Next.
Figure 5 Selecting components
CAUTION: To avoid malfunction of simulation services, do not delete the worker node on which DTN has been deployed on the Matrix cluster deployment page.
7. Configure the MACVLAN networks and add the uplink interfaces according to the network plan in "Planning the networks."
To use simulation services, configure the network settings as follows:
¡ Configure a separate MACVLAN network for the DTN component. Make sure the subnet IP address pool for the network contains a minimum of one IP address.
¡ If the servers are in standard configuration, the DTN component must have exclusive use of a worker node server. If the servers are in high-end configuration, the DTN component can have exclusive use of a worker node server or be deployed on the same master node as the controller. In this example, the DTN component is deployed on a worker node residing on a high-end server.
Figure 6 Configuring a MACVLAN management network for the SeerEngine-DC component
Figure 7 Configuring a MACVLAN management network for the DTN component
NOTE: Select one host for the DTN component:
· With standard configuration, DTN must have exclusive use of a worker node server.
· With high-end configuration, DTN must have exclusive use of a worker node server or share a master node server with the controller.
Figure 8 Configuring a MACVLAN management network for the vBGP component
8. (Optional.) On the Bind to Nodes page, select whether to enable node binding. If you enable node binding, select a minimum of three master nodes to host and run microservice pods.
If a resource-intensive component such as Analyzer is to be deployed together with the controller, enable node binding and bind the components to different nodes to make better use of server resources.
Figure 9 Enabling node binding
9. Bind networks to the components, assign IP addresses to the components, specify a network node for the service simulation network, and then click Next.
Figure 10 Binding networks (cloud DC)
Figure 11 Binding networks (vBGP)
10. On the Confirm Parameters tab, verify network information and specify a VRRP group ID for the components.
A component automatically obtains an IP address from the IP address pool of the subnet bound to it. To modify the IP address, click Modify and then specify another IP address for the component. The IP address specified must be in the IP address range of the subnet bound to the component.
If vBGP is to be deployed, you are required to specify a VRRP group ID in the range of 1 to 255 for the components. The VRRP group ID must be unique within the same network.
Figure 12 Confirming parameters (SeerEngine-DC)
Figure 13 Confirming parameters (DTN)
Figure 14 Confirming parameters (vBGP)
11. Click Deploy.
Accessing the controller
After the controller is deployed on Unified Platform, the controller menu items will be loaded on Unified Platform. Then you can access Unified Platform to control and manage the controller.
To access the controller:
1. Enter the address for accessing Unified Platform in the address bar and then press Enter.
By default, the login address is http://ip_address:30000/central/index.html.
¡ ip_address represents the northbound virtual IP address of Unified Platform.
¡ 30000 is the port number.
Figure 15 Unified Platform login page
2. Enter the username and password, and then click Log in.
The default username is admin and the default password is Pwd@12345.
Registering and installing licenses
After you install the controller, you can use its complete features and functions for a 180-day trial period. After the trial period expires, you must get the controller licensed.
Installing the activation file on the license server
For the activation file request and installation procedure, see H3C Software Products Remote Licensing Guide.
Obtaining licenses
1. Log in to Unified Platform. On the top navigation bar, click System, and then select License Management > License Information.
2. Configure the parameters for the license server as described in Table 10.
Table 10 License server parameters
3. Click Connect to connect the controller to the license server.
The controller will automatically obtain licensing information after connecting to the license server.
Backing up and restoring the controller configuration
You can back up and restore the controller configuration on Unified Platform. For the procedures, see H3C Unified Platform Deployment Guide.
Software upgrade and uninstallation
Upgrading the controller and DTN
CAUTION:
· After the DTN component is upgraded, check the simulation software version and upgrade it if necessary.
· Before upgrading DTN, first upgrade the controller. The DTN and DC versions must be consistent after the upgrade.
· Before upgrading or scaling out Unified Platform or the controller, specify the manual switchover mode for the RDRS if an RDRS has been created.
· Do not upgrade the controllers at the primary and backup sites simultaneously if an RDRS has been created. Upgrade the controller at one site first, and upgrade the controller at the other site after data is synchronized between the two sites.
· If the simulation network construction page has a display issue after the DTN component is upgraded, clear the browser cache and log in again.
· After upgrading the DTN component from E6102 or earlier to E6103 or later, you must reinstall the operating system and reconfigure settings for the DTN hosts, delete the original hosts from the simulation network, and then reincorporate the hosts. For how to install the operating system and configure settings for DTN hosts, see H3C SeerEngine-DC Simulation Network Environment Deployment Guide.
· After upgrading the DTN component from E6202 or earlier to E6203 or later, you must uninstall and reconfigure the DTN hosts, delete the original hosts from the simulation network, and then reincorporate the hosts. For how to uninstall and configure DTN hosts, see H3C SeerEngine-DC Simulation Network Environment Deployment Guide.
· After upgrading the DTN component from E6302 or earlier to E6302 or later, you must delete the simulation network from all fabrics and then reconstruct the simulation network.
· The DTN component does not support direct upgrade from a version earlier than E6501 to E6501 or later. For such an upgrade, you must first remove the old version and then install the new version.
This section describes the procedure for upgrading and uninstalling the controller and DTN. For the upgrading and uninstallation procedure for Unified Platform, see H3C Unified Platform Deployment Guide.
The components can be upgraded on Unified Platform with the configuration retained.
To upgrade the controller and DTN:
1. Log in to Unified Platform. Click System > Deployment.
Figure 16 Deployment page
2. Click the left chevron button for Cloud DC to expand component information. Then upgrade SeerEngine-DC and DTN.
a. Click the icon for the SeerEngine-DC component to upgrade the SeerEngine-DC component.
- If the controller already supports RDRS, the upgrade page is displayed.
# Upload and select the installation package.
# Select whether to enable Add Master Node-Component Bindings. The nodes that have been selected during controller deployment cannot be modified or deleted.
# Click Upgrade.
# If the upgrade fails, click Roll Back to roll back to the previous version.
- If the controller does not support RDRS, the system displays a confirmation dialog box with a Support RDRS option.
If you leave the Support RDRS option unselected, the upgrade page is displayed. Proceed with the upgrade.
If you select the Support RDRS option, the system will guide you to upgrade the component to support RDRS.
b. Click the icon for the DTN component to upgrade the DTN component.
# Upload and select the installation package.
# Click Upgrade.
# If the upgrade fails, click Roll Back to roll back to the previous version.
Hot patching the controller
CAUTION:
· Hot patching the controller might cause service interruption. To minimize the impact, carefully choose the time to hot patch the controller.
· You cannot upgrade the controller to support RDRS through hot patching.
· If you are to hot patch the controller after the RDRS is created, first specify the manual switchover mode for the RDRS.
· Do not hot patch the controllers at the primary and backup sites at the same time after the RDRS is created. Upgrade the controller at the other site only after the controller at one site is upgraded and data is synchronized.
On Unified Platform, you can hot patch the controller with the configuration retained.
To hot patch the controller:
1. Log in to Unified Platform. Click System > Deployment.
Figure 17 Deployment page
2. Click the left chevron button of the controller to expand controller information, and then click the hot patching icon.
3. Upload the patch package and select the patch of the required version, and then click Upgrade.
Figure 18 Hot patching page
4. If the upgrade fails, click Roll Back to roll back to the previous version or click Terminate to terminate the upgrade.
Uninstalling SeerEngine-DC and DTN
When you uninstall the controller, DTN will be uninstalled simultaneously.
To uninstall SeerEngine-DC and DTN:
1. Log in to Unified Platform. Click System > Deployment.
2. Click the icon to the left of the controller name and then click Uninstall.
Figure 19 Uninstalling the controller and DTN
Uninstalling the DTN component only
The DTN component can be uninstalled separately.
To uninstall the DTN component only:
1. Log in to Unified Platform. Click System > Deployment.
2. Click the icon to the left of the DTN component and then click Uninstall.
Figure 20 Uninstalling the DTN component
Uninstalling a hot patch
1. Log in to Unified Platform. Click System > Deployment.
2. Select a patch, and then click Uninstall.
Figure 21 Uninstalling a hot patch
Scaling out or in the controller
The controller can be scaled out from standalone mode to cluster mode, or scaled out in cluster mode by adding worker nodes.
To scale in the controller, delete worker nodes from the cluster.
Scaling out the controller from standalone mode to cluster mode
To scale out the controller from standalone mode to cluster mode, add two master nodes on Matrix to form a three-host cluster with the existing master node. Then scale out Unified Platform and the controller sequentially.
To scale out the controller from standalone mode to cluster mode:
1. Scale out Matrix. For more information, see H3C Unified Platform Deployment Guide.
2. Scale out Unified Platform. For more information, see H3C Unified Platform Deployment Guide.
3. Add network bindings.
a. Log in to Unified Platform. On the top navigation bar, click System, and then select Deployment from the left navigation pane.
Figure 22 Deployment page
b. On the Deployment page that opens, click Configure Network to edit the MACVLAN network (management network). Click Add in the Host area, and then select the host to scale out and its uplink interface.
Figure 23 The MACVLAN network (management network)
Figure 24 Host area
c. Click Apply.
4. Scale out the controller.
a. On the top navigation bar, click System, and then select Deployment from the left navigation pane. Select the controller component to scale out, and then click the icon in the Actions column.
b. On the Scale-Out page that opens, verify that the network name, subnet name, and uplink interface for the host to scale out are correct, and then click OK.
c. In the Host Information area, click Scale out.
Scaling out the controller in cluster mode
In cluster mode, scale out worker nodes one by one.
To scale out the controller in cluster mode:
1. Make sure you have added worker nodes to the Matrix cluster. For more information, see H3C Unified Platform Deployment Guide.
2. Log in to Unified Platform. On the top navigation bar, click System, and then select Deployment from the left navigation pane.
Figure 25 Deployment page
3. On the Deployment page that opens, click Configure Network to edit the MACVLAN network. Click Add in the Host area, select the host to scale out and its uplink interface, and then click Apply.
4. Log in to Unified Platform. On the top navigation bar, click System, and then select Deployment from the left navigation pane. On the Deployment page that opens, select the controller component to scale out, and then click the icon in the Actions column.
Figure 26 Deployment page
5. Select the host to scale out. Verify that the network name, subnet name, and uplink interface for the host are correct, and then click OK.
6. In the Host Information area, click Scale out. In cluster mode, you can scale out only one worker node at a time. Repeat this step to scale out multiple worker nodes.
Scaling in the controller in cluster mode
You can scale in the controller in cluster mode by deleting worker nodes in the cluster.
To scale in the controller in cluster mode:
1. Delete the host that has been scaled out. Only hosts scaled out in the cluster can be deleted.
a. Log in to Unified Platform. On the top navigation bar, click System, and then select Deployment from the left navigation pane.
b. On the Deployment page that opens, select the component to scale in, and then click the icon in the Actions column.
c. On the Scale-Out page that opens, click the icon at the right of the host name in the Host Information area. Click OK on the pop-up dialog box.
To avoid affecting existing services, back up the data before deleting a host.
2. Delete network bindings.
a. Log in to Unified Platform. On the top navigation bar, click System, and then select Deployment from the left navigation pane.
b. On the Deployment page that opens, click Configure Network in the upper right corner.
c. Click the icon for the host in the Host area to delete the binding between the host and uplink interface.
3. Delete worker nodes.
a. Log in to Matrix. Click Deploy on the top navigation bar, and then select Cluster from the navigation pane.
b. In the Worker node area, click the icon for a worker node, and then select Delete.
RDRS
IMPORTANT:
· If the controller version is earlier than E65xx, you can use the RDRS provided by the controller. To deploy RDRS, navigate to the System > RDRS page.
· If the controller version is E65xx or later, you can only use the RDRS provided by Unified Platform. To configure RDRS, first deploy the RDR installation package (BMP_RDR_version_platform.zip) and then navigate to the System > Emergent Recovery > RDRS page for configuration.
About RDRS
This section describes how to configure remote disaster recovery (RDR) on the controller. A remote disaster recovery system (RDRS) provides disaster recovery services between the primary and backup sites. The controllers at the primary and backup sites back up each other. When the RDRS is operating correctly, data is synchronized between the site providing services and the peer site in real time. When the service-providing site becomes faulty because of power, network, or external link failure, you can use manual switchover for the peer site to take over to ensure service continuity.
In manual switchover mode, the RDRS does not automatically monitor state of the controllers on the primary or backup site. You must manually control the controller state on the primary and backup sites by specifying the Switch to Primary or Switch to Backup actions. This mode requires deploying Unified Platform of the same version on the primary and backup sites.
To deploy RDRS:
1. Deploy Unified Platform (with BMP_RDR) at the primary and backup sites.
2. Deploy the controller at the primary and backup sites.
3. Create an RDRS system.
4. Configure the disaster recovery components for the controllers at the primary and backup sites.
Planning the network
CAUTION:
· In an RDRS scenario, if you configure DHCP relay on the management switch for automated underlay network deployment, you must specify the controller cluster IPs of both the primary and backup sites as relay servers.
· To use RDRS, make sure the RDRS data synchronization network and controller IP addresses of the primary and backup sites are different.
· RDRS is not supported in hybrid overlay scenarios, because the vBGP component does not support RDRS deployment.
Table 11 Network types and numbers used by components at the primary/backup site in the RDRS scenario

Component | Network type | Number of networks | Remarks
---|---|---|---
SeerEngine-DC | MACVLAN (management network) | 1 | N/A
SeerEngine-DC | MACVLAN (RDRS data synchronization network) | 1 | Used for carrying traffic for real-time data synchronization between the primary and backup sites. The RDRS data synchronization networks at the primary and backup sites must communicate at Layer 2. As a best practice, use a separate network interface.
Table 12 IP addresses required for the networks at the primary/backup site in the RDRS scenario
Component | Network type | Maximum team members | Default team members | Number of IP addresses | Remarks
---|---|---|---|---|---
SeerEngine-DC | MACVLAN (management network) | 32 | 3 | Number of cluster nodes + 1 (cluster IP) | N/A
SeerEngine-DC | MACVLAN (RDRS data synchronization network) | 32 | 3 | Number of cluster nodes | A separate network interface is required.
Table 13 shows an example of IP address planning in an RDRS scenario.
Table 13 IP address planning for the RDRS scenario
Site | Component | Network | IP address | Remarks
---|---|---|---|---
Primary site | SeerEngine-DC | MACVLAN (management network) | Subnet: 192.168.12.0/24 (gateway 192.168.12.1); address pool: 192.168.12.101 to 192.168.12.132 | N/A
Primary site | SeerEngine-DC | MACVLAN (RDRS data synchronization network) | Subnet: 192.168.16.0/24 (gateway 192.168.16.1); address pool: 192.168.16.101 to 192.168.16.132 | As a best practice, use a separate network interface. Make sure the disaster recovery data synchronization network and the DC component management networks of the primary and backup sites are on different subnets.
Backup site | SeerEngine-DC | MACVLAN (management network) | Subnet: 192.168.12.0/24 (gateway 10.0.234.254); address pool: 192.168.12.133 to 192.168.12.164 | N/A
Backup site | SeerEngine-DC | MACVLAN (RDRS data synchronization network) | Subnet: 192.168.16.0/24 (gateway 192.168.16.1); address pool: 192.168.16.133 to 192.168.16.164 | As a best practice, use a separate network interface. Make sure the disaster recovery data synchronization network and the DC component management networks of the primary and backup sites are on different subnets.
Configuration procedure
Deploying Unified Platform at the primary and backup sites
Restrictions and guidelines
The Unified Platform version and transfer protocol (HTTP or HTTPS) of the primary and backup sites must be the same.
For RDRS deployment, the primary and backup sites must use the same IP version.
Procedure
1. Deploy Matrix on the primary and backup sites. For the deployment procedure, see H3C Unified Platform Deployment Guide.
2. Deploy Unified Platform on primary and backup sites, during which you must upload and deploy the BMP_RDR application package. Specify the same NTP server for the primary and backup sites. For the deployment procedure, see H3C Unified Platform Deployment Guide.
Deploying the controller at the primary and backup sites
Restrictions and guidelines
If the controller installed on the specified backup site does not support disaster recovery or is not in backup state, remove the controller and install it again.
Procedure
1. Upload the installation package.
a. Obtain the SeerEngine-DC installation package.
The SeerEngine-DC installation packages used at the primary and backup sites must be consistent in version and name.
b. Log in to Unified Platform, and then select System > Deployment to access the deployment management page.
c. Click Upload to upload the installation package and then click Next.
2. Select Cloud DC and controller. Then select Global RDRS Settings and click Next.
Figure 27 Selecting Global RDRS Settings
Figure 28 Selecting components
3. Configure the networks required by the components and add the uplink interfaces according to the network plan in "Planning the network." You must configure a separate MACVLAN network as the RDRS data synchronization network.
The following shows network configurations at the primary site.
Figure 29 Configuring the management network for the controller
Figure 30 Configuring the RDRS data synchronization network
4. (Optional.) On the Bind Node page, select whether to enable node binding. If you enable node binding, select a minimum of three master nodes to host and run microservice pods.
If a resource-intensive component such as Analyzer is to be deployed together with the controller, enable node binding and bind the components to different nodes to make better use of server resources.
5. Bind networks to the components and assign IP addresses to the components. Then click Next.
Figure 31 Binding networks
6. On the Confirm Parameters page, verify network information and specify the RDRS status for the components.
A component automatically obtains an IP address from the IP address pool of the subnet bound to it. To modify the IP address, click Modify and then specify another IP address for the component. The IP address specified must be in the IP address range of the subnet bound to the component.
7. Click Deploy.
Figure 32 Deployment in progress
Creating an RDRS system
Restrictions and guidelines
After the controller is deployed at the two sites, choose one site as the primary site and create an RDRS system on the Web interface of that site. You do not need to create an RDRS system again at the backup site.
Ensure network connectivity between the primary and standby sites during the RDRS creation process. If the RDRS fails to be created, first check the network connectivity.
You cannot back up or restore data on the RDRS configuration page, including the primary or backup site name, primary or backup site IP address, backup site username and password, and site IP address.
After an RDRS is created, you cannot change the internal virtual IP of the cluster at the primary and backup sites and the node IPs.
Procedure
1. Log in to the Unified Platform Web interface of the primary site, and click System > Emergent Recovery > RDRS.
2. In the Site Settings area, configure the primary and backup site settings, and specify the switchover mode.
3. Click Connect.
If the heartbeat link is successfully set up, the RDRS site settings have been configured successfully.
After the sites are built successfully, the backup site will automatically synchronize its user, log, and backup and restore settings to the primary site, with the exception of the log content.
Figure 33 Creating an RDRS system
Configuring disaster recovery components for the controller
Click Add in the Disaster Recovery Components area and then configure the disaster recovery components in the dialog box that opens.
Restrictions and guidelines for RDRS switchover
When both the network and system are stable, manually perform a switchover in the RDRS as follows:
1. Access the primary site, and navigate to the System > Emergency Recovery > Remote Disaster Recovery page.
2. Click Switch to Backup in the disaster recovery component area, and wait for the new primary site to start up.
3. After the new primary site starts up, access the new primary site to view the disaster recovery state after switchover.
If the original primary site is inaccessible due to network issues, manually perform a switchover in the RDRS as follows:
1. Disconnect the management network between the original primary site and the devices, either through a blackhole route (see the sketch after this procedure) or through a power-off operation.
2. Access the backup site, and navigate to the System > Emergency Recovery > Remote Disaster Recovery page.
3. Click Switch to Primary in the disaster recovery component area, and wait for the new primary site to start up.
4. To restore the original primary site as the new backup site, make sure the management network between the original primary site and the devices remains disconnected. Reconnect the management network only after the original primary site has successfully switched to backup, so that the controller does not affect the devices during the recovery process.
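The following is a minimal Comware sketch of the blackhole route mentioned in step 1, assuming the SeerEngine-DC management subnet from the planning example (192.168.12.0/24) must be cut off on the switch that connects the original primary site to the devices. Replace the subnet with your actual controller management subnet:
[device1] ip route-static 192.168.12.0 24 NULL 0
To restore connectivity after the switchover completes, remove the route with the corresponding undo ip route-static command.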
Moving RDRS to Unified Platform
If the controller is upgraded from a version earlier than E65xx to E65xx or later after an RDRS system has already been deployed, perform the following tasks to move RDRS to Unified Platform:
1. Log in to the controller, and then navigate to the System > RDRS page to delete the RDRS status and the RDRS.
2. Upgrade the Unified Platform version of the primary and backup sites.
3. Upgrade the controller version of the primary and backup sites.
4. Deploy the RDR application installation package (BMP_RDR_version_platform.zip).
5. Navigate to the System > Emergency Recovery > Remote Disaster Recovery page and then redeploy the RDRS system.
RDRS licenses
License changes before and after RDRS moving from the controller to Unified Platform
Licenses before the moving
Suppose a user has deployed a 3+3 RDRS system in which six physical servers form two clusters to manage an SDN network of 100 physical devices. The licenses the user must purchase are shown in the table below.
Table 14 Original licenses
License | Quantity
---|---
Controller software, One DC controller server node license | 6
Controller software, One fixed-port switch management license | 100
Licenses after the moving
The required quantity of server node licenses does not change, but you must apply for one license file for every three server nodes, that is, two license files for six server nodes. On the license server (E1204 and later), you can enable the primary and backup sites to each use three server node licenses by specifying different owners for the license files.
Changing the license owner at an RDRS switchover
About this task
This example uses two sites, site A and site B. Configure a cluster of three servers at each site and set up a 3+3 RDRS system between the two sites to manage an SDN network. If 100 fixed-port switches are on the network, the system requires the following licenses.
Table 15 Licenses required in the 3+3 RDRS system
License name | Quantity
---|---
One DC controller server node license | 6
One fixed-port switch management license | 100
Configure two clients, for example, sdn1 and sdn2 on the license server.
Figure 34 Configuring clients on the license server
Configure the RDRS system to use site A as the primary site and site B as the backup site. After the RDRS system is deployed, you can configure license server information on the license information pages of the primary and backup sites separately.
Table 16 License server information
Site name | Client name
---|---
Site A | sdn1
Site B | sdn2
Figure 35 License server information page
Procedure
After an RDRS switchover occurs, you must change the ownership of the licenses. You can use either of the following methods:
· Forcing offline the license client of the original primary site
· Specifying the new primary site as the owner of the licenses
Forcing offline the license client of the original primary site
1. When an RDRS switchover occurs, log in to the license server and force offline the license client of site A (original primary site) to release the licenses.
2. Log in to site B (the new primary site) and, on the license information page, configure the license server IP address, client name (sdn1), and password to be consistent with those on site A. Site B will then connect to the license server as the license client and obtain the controller server node licenses and all fixed-port switch management licenses.
Specifying the new primary site as the owner of the licenses
License Server E1204 or later supports specifying the owner for licenses.
1. Access the license server. Specify two license clients, for example, sdn1 and sdn2, and specify owners Owner1 and Owner2 for the clients, respectively.
2. After the RDRS system is deployed, configure license server information on the license information pages of the primary and backup sites separately. The site, client, and owner relations are as shown in Table 17.
Table 17 Site, client, and owner relations
Site | Client name | Owner ID
---|---|---
Site A | sdn1 | Owner1
Site B | sdn2 | Owner2
3. On the license server, assign the three controller server node licenses of Owner1 to site A, the three controller server node licenses of Owner2 to site B, and all fixed-port switch management licenses to Owner1.
4. After an RDRS switchover occurs, reassign all NE licenses to Owner2 on the license server.
5. Disconnect from and then reconnect to the license server on the license information page of the new primary site (site B). Site B then obtains the three controller server node licenses and all fixed-port switch management licenses of Owner2.
Cluster deployment over a Layer 3 network
If the master nodes in a cluster are on different subnets, deploy the cluster over a Layer 3 network.
In cluster deployment over a Layer 3 network, RDRS deployment, vBGP component deployment, and underlay IPv6 deployment are not supported.
The DTN component only supports deployment on a single node. The DTN component and simulation hosts (or simulated devices) support deployment over a Layer 3 network. For more information, see H3C SeerEngine-DC Simulation Network Deployment Guide.
NOTE: When an earlier version of VMware uses a VMXNET virtual network adapter, the TCP data packet length in VXLAN frames might be calculated incorrectly. When using VMware to deploy a cluster over a Layer 3 network, use VMware ESXi 6.7P07 or 7.0U3 (7.0.3) or later.
Network planning
As shown in Figure 36, Master 1 and Master 2 are on the management network of Fabric 1, and Master 3 is on the management network of Fabric 2. The management networks of Fabric 1 and Fabric 2 are on different subnets and communicate with each other at Layer 3. Plan the IP addresses as shown in Table 18 to deploy the cluster.
Table 18 IP address planning for cluster deployment over a Layer 3 network

Component | IP type | IP address | Remarks
---|---|---|---
Unified Platform cluster | IP address of Master node 1 | 192.168.10.102/24 | The default gateway is 192.168.10.1 on management switch 1.
Unified Platform cluster | IP address of Master node 2 | 192.168.10.103/24 | The default gateway is 192.168.10.1 on management switch 1.
Unified Platform cluster | IP address of Master node 3 | 192.168.110.104/24 | The default gateway is 192.168.110.1 on management switch 2.
Unified Platform cluster | Cluster internal virtual IP | 192.168.10.101/32 | N/A
Unified Platform cluster | Northbound service VIP | 192.168.10.100/32 | N/A
SeerEngine-DC | Management network 1 (bound to Master 1 and Master 2) | Subnet: 192.168.12.0/24; address pool: 192.168.12.101 to 192.168.12.132 | MACVLAN network. The default gateway is 192.168.12.1 on management switch 1.
SeerEngine-DC | Management network 2 (bound to Master 3) | Subnet: 192.168.112.0/24; address pool: 192.168.112.101 to 192.168.112.132 | MACVLAN network. The default gateway is 192.168.112.1 on management switch 2.
SeerEngine-DC | Management network 3 (cluster VIP) | Subnet: 8.8.8.0/24; address pool: 8.8.8.8 to 8.8.8.8 | You do not need to specify a gateway address on the switch.
Management switch | Management switch 1 | Vlan-interface 10: 192.168.10.1/24, 192.168.12.1/24; Vlan-interface 20: 192.168.20.9/30 | The Unified Platform node management network and SeerEngine-DC management network use the same NIC interface.
Management switch | Management switch 2 | Vlan-interface 11: 192.168.110.1/24, 192.168.112.1/24; Vlan-interface 20: 192.168.20.10/30 | The Unified Platform node management network and SeerEngine-DC management network use the same NIC interface.
Prerequisites
Before cluster deployment, complete routing settings for the underlay to make sure the nodes can communicate with each other at Layer 3 and can reach the two gateways at Layer 3.
1. Configure Management switch 1.
[device1] vlan 10
[device1-vlan10] quit
[device1] interface Vlan-interface10
[device1-Vlan-interface10] ip address 192.168.10.1 255.255.255.0
[device1-Vlan-interface10] ip address 192.168.12.1 255.255.255.0 sub
[device1-Vlan-interface10] quit
[device1] vlan 20
[device1-vlan20] quit
[device1] interface Vlan-interface20
[device1-Vlan-interface20] ip address 192.168.20.9 255.255.255.252
[device1-Vlan-interface20] quit
[device1] interface Ten-GigabitEthernet1/0/25
[device1-Ten-GigabitEthernet1/0/25] port link-mode bridge
[device1-Ten-GigabitEthernet1/0/25] port access vlan 10
[device1-Ten-GigabitEthernet1/0/25] quit
[device1] interface Ten-GigabitEthernet1/0/26
[device1-Ten-GigabitEthernet1/0/26] port link-mode bridge
[device1-Ten-GigabitEthernet1/0/26] port access vlan 10
[device1-Ten-GigabitEthernet1/0/26] quit
[device1] interface Ten-GigabitEthernet1/0/27
[device1-Ten-GigabitEthernet1/0/27] port link-mode bridge
[device1-Ten-GigabitEthernet1/0/27] port access vlan 20
[device1-Ten-GigabitEthernet1/0/27] quit
[device1] ip route-static 192.168.110.0 255.255.255.0 192.168.20.10
2. Configure Management switch 2.
[device2] vlan 11
[device2-vlan11] quit
[device2] vlan 20
[device2-vlan20] quit
[device2] interface Vlan-interface11
[device2-Vlan-interface11] ip address 192.168.110.1 255.255.255.0
[device2-Vlan-interface11] ip address 192.168.112.1 255.255.255.0 sub
[device2-Vlan-interface11] quit
[device2] vlan 20
[device2-vlan20] quit
[device2] interface Vlan-interface20
[device2-Vlan-interface20] ip address 192.168.20.10 255.255.255.252
[device2-Vlan-interface20] quit
[device2] interface Ten-GigabitEthernet1/0/25
[device2-Ten-GigabitEthernet1/0/25] port link-mode bridge
[device2-Ten-GigabitEthernet1/0/25] port access vlan 11
[device2-Ten-GigabitEthernet1/0/25] quit
[device2] interface Ten-GigabitEthernet1/0/26
[device2-Ten-GigabitEthernet1/0/26] port link-mode bridge
[device2-Ten-GigabitEthernet1/0/26] port access vlan 20
[device2-Ten-GigabitEthernet1/0/26] quit
[device2] ip route-static 192.168.10.0 255.255.255.0 192.168.20.9
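Before deploying the cluster, you can verify Layer 3 reachability from the management switches, for example on management switch 1 by checking that the static route to the remote management subnet is present and by pinging the remote gateway from the local gateway address. This is a sketch; output formats vary by device model and software version.
[device1] display ip routing-table 192.168.110.0 24
[device1] ping -a 192.168.10.1 192.168.110.1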
Deploying a Matrix cluster
This section describes only settings different from Layer 2 deployment. For other procedures, see H3C Unified Platform Deployment Guide.
1. Set the cluster network mode to multisubnet.
Figure 37 Configuring cluster parameters
2. Configure BGP parameters for the nodes.
Table 19 Configuring BGP parameters for the nodes
Node | IP address | Local/Router ID | Local/AS Number | Peers/IP | Peers/AS Number
---|---|---|---|---|---
Master 1 | 192.168.10.102 | 192.168.10.102 | 100 | 192.168.10.1 | 100
Master 2 | 192.168.10.103 | 192.168.10.103 | 100 | 192.168.10.1 | 100
Master 3 | 192.168.110.104 | 192.168.110.104 | 200 | 192.168.110.1 | 200
Figure 38 Adding a node
3. Configure BGP on the switches connected to the cluster.
On management switch 1:
[device1] bgp 100
[device1-bgp] peer 192.168.10.102 as-number 100
[device1-bgp] peer 192.168.10.102 connect-interface Vlan-interface 10
[device1-bgp] peer 192.168.10.103 as-number 100
[device1-bgp] peer 192.168.10.103 connect-interface Vlan-interface 10
[device1-bgp] peer 192.168.110.1 as-number 200
[device1-bgp] peer 192.168.110.1 connect-interface Vlan-interface 20
[device1-bgp] address-family ipv4 unicast
[device1-bgp-ipv4] peer 192.168.10.102 enable
[device1-bgp-ipv4] peer 192.168.10.103 enable
[device1-bgp-ipv4] peer 192.168.110.1 enable
On management switch 2:
[device2] bgp 200
[device2-bgp] peer 192.168.110.104 as-number 200
[device2-bgp] peer 192.168.110.104 connect-interface Vlan-interface 11
[device2-bgp] peer 192.168.10.1 as-number 100
[device2-bgp] peer 192.168.10.1 connect-interface Vlan-interface 20
[device2-bgp] address-family ipv4 unicast
[device2-bgp-ipv4] peer 192.168.110.104 enable
[device2-bgp-ipv4] peer 192.168.10.1 enable
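After configuring BGP on both switches, you can verify that the sessions come up, for example on management switch 1 (a sketch; the exact command form may vary slightly with the Comware software version):
[device1] display bgp peer ipv4 unicast
The peers 192.168.10.102, 192.168.10.103, and 192.168.110.1 should be in Established state.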
Deploying the controller at Layer 3
This section describes only settings different from Layer 2 deployment. For other procedures, see "Deploying the controller." This section uses operations on the Unified Platform Web interface as an example.
1. Create a MACVLAN network based on the IP address planning in Table 18.
Figure 39 Configuring network settings
2. Select networks.
Figure 40 Network bindings
3. After the controller is deployed successfully, configure routing settings for each node. You can configure OSPF and BGP. In this example, BGP is configured.
Table 20 Routing settings on the nodes
Node | Router ID | AS number | Network | Neighbor | Remote AS
---|---|---|---|---|---
Master 1 | 192.168.12.101 | 100 | 8.8.8.8/32 | 192.168.10.1 | 100
Master 2 | 192.168.12.102 | 100 | 8.8.8.8/32 | 192.168.10.1 | 100
Master 3 | 192.168.112.101 | 200 | 8.8.8.8/32 | 192.168.110.1 | 200
Figure 41 Routing settings on Master 1
4. Configure routing settings on the management switches.
On management switch 1:
[device1] bgp 100
[device1-bgp] peer 192.168.12.101 as-number 100
[device1-bgp] peer 192.168.12.101 connect-interface Vlan-interface 10
[device1-bgp] peer 192.168.12.102 as-number 100
[device1-bgp] peer 192.168.12.102 connect-interface Vlan-interface 10
[device1-bgp] address-family ipv4 unicast
[device1-bgp-ipv4] peer 192.168.12.101 enable
[device1-bgp-ipv4] peer 192.168.12.102 enable
On management switch 2:
[device2] bgp 200
[device2-bgp] peer 192.168.112.101 as-number 200
[device2-bgp] peer 192.168.112.101 connect-interface Vlan-interface 11
[device2-bgp] address-family ipv4 unicast
[device2-bgp-ipv4] peer 192.168.112.101 enable
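After the node routing settings take effect, the cluster VIP route should be advertised to the switches. As a quick check (a sketch; adjust to your address plan), verify on management switch 1 that the 8.8.8.8/32 route is learned from the controller nodes:
[device1] display ip routing-table 8.8.8.8 32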
Cluster 2+1+1 deployment
About cluster 2+1+1 deployment
The cluster 2+1+1 mode is a low-cost failure recovery solution. To set up this solution, deploy the three nodes that form the controller cluster in two different cabinets or equipment rooms, and reserve a standby node outside the cluster as a redundant node. When the cluster is operating correctly, leave the standby node powered off. If two master nodes in the cluster fail at the same time, power on the standby node. The standby node quickly joins the cluster, and cluster services recover quickly.
Figure 42 Cluster disaster recovery deployment
Deployment process
1. Prepare four servers: three used for setting up the Unified Platform cluster and one used as the standby server.
2. Install the four servers at different locations. As a best practice, install two of the servers for setting up the cluster in one cabinet (or equipment room), and the other server for setting up the cluster and the standby server in another cabinet (or equipment room).
3. Install Unified Platform on the three servers for setting up the cluster. For the installation procedure, see H3C Unified Platform Deployment Guide. As a best practice, assign IP addresses in the same network segment to the three servers and make sure they are reachable to each other.
4. Deploy the controller in the cluster. For the deployment procedure, see "Deploying the controller."
5. Install the Matrix platform on the standby server. Make sure the Matrix version is consistent with that installed on the three cluster servers. You are not required to deploy Unified Platform on the standby server.
Preparing for disaster recovery
1. Record the host name, NIC name, IP address, and username and password of the three nodes in the cluster.
2. Install Matrix on the standby node. The Matrix platform must be the same version as that installed on the cluster nodes.
IMPORTANT:
· The drive letter and partitioning scheme of the standby node must be consistent with those of the cluster nodes.
· If a Unified Platform patch version has been installed on the cluster nodes, use the following steps to install Matrix on the standby node so that the standby node runs the same Matrix version as the cluster nodes:
1. Install the Unified Platform base version (E06xx/E07xx) ISO image.
2. Uninstall Matrix from the operating system of the host.
3. Install the same Matrix version as that included in the Unified Platform patch version on the operating system of the host.
Two node-failure recovery
In a cluster with three master nodes as shown in Figure 43, if two nodes (for example, controllers 1 and 2) fail at the same time, the cluster cannot operate correctly. Only controller 3 is accessible, and it will automatically enter emergency mode.
Figure 43 Failure of two nodes
To recover the cluster, perform the following steps:
1. Power on the standby node (without connecting it to the management network) and verify that Matrix has been installed on it. If not installed, see H3C Unified Platform Deployment Guide to install Matrix.
Do not configure any cluster-related settings on the standby node after Matrix is installed on it.
2. Verify that the host name, NIC name, IP address, and username and password of the standby node are exactly the same as those of the failed node it replaces, controller 1 in this example. For a quick way to check these settings on the standby node, see the sketch after this procedure.
3. Disconnect the network connections of the failed controllers 1 and 2, and connect the standby node to the management network.
4. Log in to the Matrix Web interface of controller 3, and then click Deploy > Cluster. Click the button for controller 1 and select Rebuild from the list. Then use one of the following methods to rebuild the node:
¡ Select and upload a software package of the same version as the one installed on the cluster nodes. Then click Apply.
¡ Select the original software package version and then click Apply.
5. Log out to quit emergency mode. Then log in to the system again. As a best practice, use the VIP to access Matrix.
6. Repair or recover controller 2.
After the cluster resumes services, you can repair or recover controller 2.
¡ To replace controller 2 with a new physical server, log in to the Matrix Web interface and perform the repair operations.
¡ If the file system of the original controller 2 can be restored and started correctly, the controller can automatically join the cluster after you power on it. Then the cluster will have three correctly operating controllers.
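To quickly compare the standby node settings against the values recorded for controller 1 (see step 2 of this procedure), you can run standard Linux commands on the standby node, as shown in the following minimal sketch. The host name value is a placeholder for the recorded controller 1 host name, not a value defined by this document.
# Display the current host name of the standby node.
hostname
# Display the NIC names and IP addresses.
ip addr show
# If the host name does not match, set it to the recorded controller 1 host name (placeholder shown).
hostnamectl set-hostname <controller-1-hostname>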
CAUTION:
· After the rebuild, the standby node joins the cluster as controller 1. The original controller 1 cannot join the cluster directly after it recovers from the failure. As a best practice, format the drive on the original controller 1, install Matrix on it, and use it as the new standby node.
· If two controllers in the cluster are abnormal, do not restart the only normal node. If the normal node is restarted, the cluster cannot be recovered through 2+1+1 disaster recovery.
Network changes
About this task
If IP address conflict exists or the network plan needs to be changed, for example, equipment room relocation or subnet mask change after component deployment, you can change networks for the components. This section describes how to change network settings for the SeerEngine-DC and DTN components.
Procedure
CAUTION:
· To change the IP address of a Matrix node in the RDRS scenario, you must first delete the RDRS system.
· To change networks for the controller and DTN component in the RDRS scenario, you must first delete the RDRS system.
· A network change for a component can cause service interruption. Use caution.
To edit network settings:
1. Log in to Unified Platform. Click System > Deployment.
Figure 44 Deployment page
2. Click the left chevron button for the component to expand component information.
Editing network settings for the controller
1. Click the icon for the SeerEngine-DC component.
2. Select the target network.
Figure 45 Editing network settings
3. Click Create, and then create a subnet in the dialog box that opens.
4. Click the edit icon in the Actions column for a subnet, edit the name, CIDR, and gateway for the subnet as needed, and then click OK.
Figure 47 Editing a subnet
5. Click Next.
Figure 48 Confirming network parameters
6. Review the parameters, and then click OK. The network editing progress is displayed on the page.
7. If the network change fails, roll back the network settings or exit the network change process.
Editing network settings for the DTN component
1. Click the icon for the DTN component.
2. Select a network.
3. Click Next.
4. Review the parameters, and then click OK. The network editing progress is displayed on the page.
5. If the network change fails, roll back the network settings or exit the network change process.
Changing IP address settings after a network change
After a network change, you must edit some IP address-related settings.
TFTP and Syslog services enabled
If you have enabled the TFTP and Syslog services on the Automation > Data Center Networks > Fabrics page, you must re-configure the IP address of the services.
Figure 49 Re-configuring the IP address for the TFTP and Syslog services
Deployment across a Layer 3 network
If you have configured routing settings on the System > System Maintenance > DC Controllers > Controller Setup page, you must re-configure routing settings after a network change.
Figure 50 Configuring routing settings
Updating the cluster IP through a configuration fragment
If an existing configuration fragment on the Automation > Data Center Networks > Fabrics > Auto Deployment > Configuration Fragment page contains the cluster IP for the controller, you must update the cluster IP in the fragment after a network change and then deploy the configuration fragment to the target device.
Figure 51 Configuration fragment page
Figure 52 Editing device configuration fragment
The undo info-center loghost vpn-instance mgmt 192.168.89.10 command removes the controller cluster IP.
The info-center loghost vpn-instance mgmt 192.168.89.11 command sets a new controller cluster IP.
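For example, a configuration fragment that replaces the old cluster IP with the new one contains both commands, as shown in the following sketch based on the IP addresses above:
undo info-center loghost vpn-instance mgmt 192.168.89.10
info-center loghost vpn-instance mgmt 192.168.89.11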
Figure 53 Deploying device configuration fragment
Configuring a region
If you have configured a region on the System > System Maintenance > DC Controllers > Controller Setup page, the system clears the managed subnets settings after a network change. You must re-configure the region.
Figure 54 Configuring a region
Editing the OpenStack plug-in settings
After a network change, you must edit the URL in the OpenStack Neutron plug-in, because the northbound virtual IP for the Matrix cluster has changed. For more information, see the OpenStack plug-in installation guide.
[SDNCONTROLLER]
url = http://127.0.0.1:30000
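For example, if the new Matrix northbound virtual IP is 192.168.89.11 (a value used here only for illustration), the edited setting might look like the following, keeping the port from the original configuration:
[SDNCONTROLLER]
url = http://192.168.89.11:30000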
Configuring the license server
If the license server is deployed on a node in a Matrix cluster and the IP address of the node has changed, you must re-configure the license server after you edit network settings. For more information, see "Registering and installing licenses."
Data source management configuration
If you have configured the closed-loop feature, you must re-configure basic information for DC data sources on the data source management page.
Figure 55 Managing data sources
On the Analytics > Analysis Options > Resources > Assets > Data Sources page, click the Edit icon in the Actions column for the target data source, and then change the IP address to the new Matrix northbound virtual IP.
Figure 56 Editing basic configuration
Configuring an RDRS
If you deleted the RDRS system before the network change, you must re-create the RDRS system after you edit network settings. For more information, see "Creating an RDRS."
Checking the DTN network
For simulation to operate correctly after you edit DTN network settings, make sure the DTN component and DTN node are reachable to each other.
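For example, you can verify reachability with a simple ping from the server hosting the DTN component to the DTN node, and then repeat the check in the reverse direction. The address below is a placeholder for the DTN node IP:
ping -c 3 <DTN-node-IP>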