Contents
Hardware requirements (deployment on physical server)
Client configuration requirements
Check the installation environment
(Optional.) Configure network settings
Enable the network adapter ports
Converged deployment of the DTN component
Selecting installation packages
Independent deployment of a simulation host
Installing the operating system
Single simulation host scenario
Multiple simulation hosts scenario
Deployment across Layer 3 networks
Configuring basic simulation services
Preconfigure the simulation network
Installing the license on the license server
Obtaining the DTN component license
Obtaining the simulated device license
Upgrading and uninstalling software
Upgrading the DTN component through a hot patch
Uninstalling the DTN component
Uninstalling the DTN hot patch
Overview
In the DC scenario, SeerEngine-DC services are complex and difficult to operate, and a complicated operation might fail to achieve the expected results, wasting a large amount of human and material resources. Therefore, rehearse a service before deploying it. During the rehearsal, you can identify and avoid risks, minimizing the risks to the production environment. The simulation function is introduced for this purpose. The simulation function simulates a service and estimates resource consumption before you deploy the service. It helps you determine whether the current service orchestration can achieve the expected effect and whether it affects existing services, and estimate the device resources to be used.
This document mainly describes how to deploy the DTN component and simulation hosts and how to configure simulation services.
Prepare for installation
|
NOTE: DTN does not support remote disaster recovery. |
Server requirements
This document uses the deployment of simulation services on a cluster-mode controller as an example.
Hardware requirements (deployment on physical server)
(Recommended.) Converged deployment of DTN component + independent deployment of simulation host
The DTN component and the controller are deployed on the same master node. The simulation host is deployed independently on a physical server.
Figure 1 Matrix cluster (DTN component converged deployment + simulation host independent deployment)
Table 1 Hardware requirements for converged deployment of the DTN component
Application name |
Hardware requirements |
Remarks |
DTN component |
CPU (cores): · x86-64 (Intel64/AMD64): 4 · x86-64 (Hygon): 6 · ARM (Kunpeng): 10 · ARM (Phytium): 20 Memory (GB): 86 Network adapter ports · Non-bonding mode: 1 × 10 Gbps · Bonding mode: 2 × 10 Gbps |
The DTN component and the controller are deployed on a master node in a converged manner. You must add the required resources to the target master node. |
Table 2 Hardware requirements for independent deployment of the simulation host
Node name |
Node quantity |
Node settings |
Remarks |
Simulation host |
n |
CPU configuration options are as follows: · x86-64 (Intel64/AMD64): 16 cores, 2.0 GHz or above · x86-64 (Hygon): 20 cores, 2.5 GHz or above · ARM (Kunpeng): 40 cores, 2.6 GHz or above · ARM (Phytium): 78 cores, 2.1 GHz or above Memory: 128 GB or above Drives: · System drive: 600 GB or above Network adapter ports Single simulation host scenario · Non-bonding mode: 2 × 10 Gbps or above · Bonding mode: 4 × 10 Gbps ports or above, each two forming a Linux bond interface. Multiple simulation hosts scenario · Non-bonding mode: 3 × 10 Gbps or above · Bonding mode: 6 × 10 Gbps or above, each two ports forming a Linux bond interface. |
Standard configuration · Single simulation host: A maximum of 48 simulated devices can be created. (n = 1) · Multiple simulation hosts: A maximum of 48 simulated devices can be created on the first host, and a maximum of 60 simulated devices can be created on each of the other hosts. (n=1+(total number of simulated devices-48)/60) Network adapter port description: · Simulation management network: Used by the DTN component and the simulation hosts for communication. · Simulation service network: Used by simulated devices to exchange service information (this network adapter port is not required in single-host scenario). · Node management network: Used to log in to the simulation hosts for maintenance. |
Simulation host |
n |
CPU configuration options are as follows: · x86-64 (Intel64/AMD64): 20 cores, 2.2 GHz or above · x86-64 (Hygon): 24 cores, 2.5 GHz or above · ARM (Kunpeng): 48 cores, 2.6 GHz or above · ARM (Phytium): 96 cores, 2.1 GHz or above Memory: 256 GB or above Drives: · System drive: 600 GB or above Network adapter ports Single simulation host scenario · Non-bonding mode: 2 × 10 Gbps or above · Bonding mode: 4 × 10 Gbps ports or above, each two forming a Linux bond interface. Multiple simulation hosts scenario · Non-bonding mode: 3 × 10 Gbps or above · Bonding mode: 6 × 10 Gbps or above, each two ports forming a Linux bond interface. |
High-end configuration · Single simulation host: A maximum of 112 simulated devices can be created. (n = 1) · Multiple simulation hosts: A maximum of 112 simulated devices can be created on the first host, and a maximum of 124 simulated devices can be created on each of the other hosts. (n=1+(total number of simulated devices-112)/124) Network adapter port description: · Simulation management network: Used by the DTN component and the simulation hosts for communication. · Simulation service network: Used by simulated devices to exchange service information (this network adapter port is not required in single-host scenario). · Node management network: Used to log in to the simulation hosts for maintenance. |
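To illustrate the host-count formulas in Table 2: with the standard configuration and 168 simulated devices in total, n = 1 + (168 - 48)/60 = 3 simulation hosts; with the high-end configuration and 360 simulated devices in total, n = 1 + (360 - 112)/124 = 3 simulation hosts.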
Software requirements
Simulation is an independent microservice of the data center controller. Before deploying the simulation service, you must install SeerEngine-DC or deploy the simulation service together with SeerEngine-DC. Table 3 shows the CPU and operating system compatibility of the DTN component, and Table 4 shows the CPU and operating system compatibility of simulation hosts.
Table 3 CPU and operating system compatibility of the DTN component
CPU |
Supported operating systems |
Recommended operating system |
x86-64 (Intel64/AMD64) |
· NingOS · Kylin V10 SP2 · TencentOS-Server-3.1 |
NingOS |
x86-64 Hygon |
· NingOS · Kylin V10 SP2 · TencentOS-Server-3.1 |
Kylin V10 SP2 |
ARM Kunpeng |
· NingOS · Kylin V10 SP2 |
Kylin V10 SP2 |
ARM Phytium |
· NingOS · Kylin V10 SP2 · TencentOS-Server-3.1 |
Kylin V10 SP2 |
Table 4 CPU and operating system compatibility of simulation hosts
Operating system name |
Version number |
Kernel version |
NingOS |
V3.1.0 |
5.10 |
Kylin |
Kylin V10 SP2 |
4.19 |
TencentOS Server |
TencentOS-Server-3.1 |
5.4.119-19.0009.54 |
Disk partitioning
For information about simulation-related disk partitioning, see H3C SeerEngine-DC Installation Guide (Unified Platform).
Client configuration requirements
You can access Unified Platform directly through a browser and do not need to install a client. As a best practice, use Google Chrome 96 or later.
Check the installation environment
The following table describes the pre-installation checklist. Make sure all requirements for installing Unified Platform are met.
Table 5 Check the installation environment
Item |
Requirements |
|
Server |
Hardware |
The CPU, memory, disk, and network adapter port requirements for installing the controller are met. Unified Platform can be deployed. |
Software check |
The system time settings are configured correctly. As a best practice, configure NTP on each node and specify the same time source for all the nodes (see the example following this table). |
|
Client |
Verify that the browser version meets the requirements. |
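The NTP recommendation in the table above can typically be met with chrony, the default time service on the supported operating systems. The following is a minimal sketch only; the time source 192.168.10.1 is an example value, and you must substitute the NTP server planned for your environment:
[root@node1 /]# vi /etc/chrony.conf
# Add or edit the server line so that all nodes point at the same time source
server 192.168.10.1 iburst
[root@node1 /]# systemctl restart chronyd
[root@node1 /]# chronyc sources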
(Optional.) Configure network settings
Enable the network adapter ports
If a server accesses the network by using multiple network adapter ports, enable the ports on the server before deployment.
Configure the network adapter port as follows:
1. Remotely log in to the server where Unified Platform resides, and then edit the network adapter port configuration file. This section uses network adapter port ens34 as an example.
2. Open and edit the network adapter port configuration file.
[root@node1 /]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
3. Edit the BOOTPROTO and ONBOOT settings in the network interface configuration file as shown in Figure 2. Set BOOTPROTO to none to specify no boot protocol for the network interface, and set ONBOOT to yes to automatically enable the network interface upon startup.
Figure 2 Enabling the network interface
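For reference, the relevant lines of the ifcfg-ens34 file after the edits in step 3 look similar to the following sketch (other lines in the file remain unchanged; ens34 is the example port used in this section):
DEVICE=ens34
BOOTPROTO=none
ONBOOT=yes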
4. Execute the ifdown and ifup commands to restart the network adapter port.
[root@node1 /]# ifdown ens34
[root@node1 /]# ifup ens34
5. Execute the ifconfig command to view the network information. The network interface is enabled successfully if it is in UP state.
Network planning
Network topology
Four types of networks are involved: node management network, controller management network, simulation management network, and simulation service network.
· Node management network
The node management network is used to log in to servers for maintenance.
· Controller management network
The controller management network is used for cluster communication between controllers and for managing devices/network elements.
· Simulation management network
The simulation management network is used by the DTN component and the simulation hosts to exchange management information.
· Simulation service network
The simulation service network is used by simulated devices on simulation hosts to exchange service information.
¡ When multiple simulation hosts exist, they communicate with each other through a switch, as shown in Figure 3.
¡ When only one simulation host exists, you do not need to connect the simulation host to the switch, and the simulation service network does not require a network adapter port.
Before deploying the simulation feature, you must first plan the simulation management network and simulation service network.
Figure 3 Cloud data center scenario without remote disaster recovery (deploying only the DTN component with multiple simulation hosts)
IMPORTANT: · If the controller management network and simulation management network use the same management switch, you must also configure VPN instances for isolation on the management switch to prevent IP address conflicts from affecting the services. If the controller management network and simulation management network use different management switches, physically isolate these switches. · Configure routes to provide Layer 3 connectivity between simulation management IPs and simulated device management IPs. · In the multiple simulation hosts scenario, on the port connecting the switch to the service interface of a simulation host, execute the port link-type trunk command to configure the link type of the port as trunk, and execute the port trunk permit vlan vlan-id-list command to assign the port to 150 continuous VLANs. Among these VLAN IDs, the first ID is the VLAN ID specified for simulation host installation, and the end VLAN ID is the start VLAN ID+149. For example, if the start VLAN ID is 11, the permitted VLAN ID range is 11 to 160. When you plan the network, do not use any VLAN ID permitted by the port. · When the device and controller are deployed across Layer 3, the simulation host and the DTN component must be connected through a management switch. |
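For the multiple simulation hosts scenario described above, the trunk configuration on the management switch port that connects to the service interface of a simulation host might look like the following sketch. The port name Ten-GigabitEthernet1/0/28 and start VLAN ID 11 (permitted range 11 to 160) are example values; adjust them to your plan:
[device] interface Ten-GigabitEthernet1/0/28
[device-Ten-GigabitEthernet1/0/28] port link-type trunk
[device-Ten-GigabitEthernet1/0/28] port trunk permit vlan 11 to 160
[device-Ten-GigabitEthernet1/0/28] quit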
IP address planning
As a best practice, calculate the number of IP addresses for each network according to Table 6.
Table 6 Number of addresses in subnet IP address pools
Component/node name |
Network name (type) |
Max members in cluster |
Default members in cluster |
Calculation method |
Remarks |
SeerEngine-DC |
Controller management network (MAC-VLAN) |
32 |
3 |
1 × number of cluster members + 1 (cluster IP) |
/ |
DTN component |
Simulation management network (MAC-VLAN) |
1 |
1 |
Single node deployment, which needs only one IP |
Used by the DTN component deployed on the controller |
Simulation host node |
Simulation management network |
1 × number of simulation hosts |
1 × number of simulation hosts |
1 × number of simulation hosts |
Used by the DTN component to incorporate hosts |
Simulation service network |
1 × number of simulation hosts |
1 × number of simulation hosts |
1 × number of simulation hosts |
Used by simulated devices to exchange service information, supporting only IPv4. For the address configuration, see "Configuring simulation hosts." |
|
Node management network |
1 × number of simulation hosts |
1 × number of simulation hosts |
1 × number of simulation hosts |
Used to log in to hosts for maintenance. |
Component/node name |
Network name (type) |
IP address |
SeerEngine-DC |
Controller management network (MAC-VLAN) |
Subnet address: 192.168.12.0/24 (the gateway address is 192.168.12.1) |
Network address pool: 192.168.12.101/24 to 192.168.12.132/24 (gateway address: 192.168.12.1) |
||
DTN component |
Simulation management network (MAC-VLAN) |
Subnet address: 192.168.12.0/24 (the gateway address is 192.168.12.1) |
Network address pool: 192.168.12.133/24 to 192.168.12.164/24 (gateway address: 192.168.12.1) |
||
Simulation host node |
Simulation management network |
Network address pool: 192.168.12.165/24 to 192.168.12.175/24 (gateway address: 192.168.12.1) |
Simulation service network |
Network address pool: 192.168.11.134/24 to 192.168.11.144/24 (gateway address: 192.168.11.1) |
|
Node management network |
Network address pool: 192.168.10.110/24 to 192.168.10.120/24 (gateway address: 192.168.10.1) |
|
NOTE: The simulation management network, simulation service network, and node management network must be on different network segments. |
Converged deployment of the DTN component
Deploy Unified Platform before deploying the DTN component. Deploy the SeerEngine-DC component and DTN component on the Matrix convergence deployment page. This document describes how to deploy the DTN component after the SeerEngine-DC component is deployed successfully.
Installing Unified Platform
For information about installing Unified Platform, see H3C Unified Platform Deployment Guide.
The optional application packages are manually deployed on the Matrix page. You can deploy the optional packages as needed before or after deploying the controller. To avoid deployment failures, make sure the optional application packages and required application packages are consistent in software version.
Logging in to Matrix
|
NOTE: Deploying the DTN component on the Matrix convergence deployment page is supported in Unified Platform E0722 and later versions. |
1. In the address bar of a browser, enter the login address of Matrix:
a. If an IPv4 address is used, access https://ip_address:8443/matrix/ui, for example, https://172.16.101.200:8443/matrix/ui. The following configuration in this document uses IPv4.
b. If an IPv6 address is used, access https://[ip_address]:8443/matrix/ui, for example, https://[2000::100:611]:8443/matrix/ui.
ip_address represents the IP address of the node, and 8443 is the default port number.
2. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.
Figure 5 Convergence Deployment
Upload installation packages
1. Obtain the DTN installation package. The package name is as shown in Table 8, where version represents the version number. Select an installation package based on the server architecture.
Table 8 Installation package name
Component name |
Component installation package name |
Remarks |
DTN |
· x86: SeerEngine_DC_DTN-version.zip · ARM: SeerEngine_DC_DTN-version-ARM64.zip |
Used for the simulation feature. |
|
NOTE: The DTN version must be the same as the SeerEngine-DC version. |
2. Click Packages Management to access the installation package management page.
Figure 6 Installation package management
3. Click Upload. In the dialog box that opens, click Select Files, and select component installation packages. Then, click Upload to upload the selected component installation packages to the system.
Figure 7 Uploading or registering the installation package
4. After the installation package is uploaded, click Return on the installation package management page to return to the convergence deployment page.
Selecting applications
1. On the convergence deployment page, click Install to access the application selection page.
2. Select the DTN scenario. If the SeerEngine-DC component is not installed, install the controller first or select both the controller and the DTN component.
Figure 8 Selecting the DTN component
3. Click Next.
Selecting installation packages
1. On the installation package selection page, the latest version of the installation package is displayed by default. You can select an installation package version from the list.
Figure 9 Selecting installation packages
2. After selecting the installation package, click Next to configure resources.
Configure resources
The DTN component does not require resource configuration. Click Next to configure parameters.
Configure parameters
On the parameter configuration page, you can switch tabs to configure related parameters for SeerEngine-DC and DTN respectively. This document mainly introduces the DTN parameter configuration steps. For SeerEngine-DC parameter configuration steps, see H3C SeerEngine-DC Installation Guide.
Configure networks
1. Access the DTN > Network Configuration page, click Create Network, and configure the network name in the window that opens. The network type is MACVLAN by default.
2. In the subnets area, click Create. In the window that opens, based on the network planning in "Network planning", configure a separate MACVLAN network for the DTN component, which must contain at least one IP address in the subnet pool.
3. In the hosts area, click Create to associate a host and uplink port for the DTN component.
IMPORTANT: You can select only one host. Make sure the host has sufficient CPU and memory resources to ensure the stable operation of the DTN component. |
Figure 10 Associating host and uplink port
4. Click OK to complete the DTN network creation.
5. Click Next to access the node binding page.
Bind nodes
1. On the DTN tab, click Select Node, and then select node, network, and subnet.
2. Click OK to complete the binding operation.
3. Click Next to access the node information verification page.
Verifying the node information
1. On the node information confirmation page, you can view the planned network information for the DTN component.
2. To edit the settings, click Previous to return to the node binding page.
3. After verifying the node information, click Deploy to start the deployment. The deployment progress will be displayed on the page.
Independent deployment of a simulation host
Installing the operating system
|
NOTE: · The simulation host relies on the system virtualization capabilities. Make sure you select the virtualization capability when selecting the software. · As a best practice to enhance system stability, use the recommended device types and file system types for partitioned devices. · Installing the operating system on a server that already has an operating system installed replaces the existing operating system. To avoid data loss, back up data before you install the operating system. |
NingOS operating system
Installing the operating system
The NingOS-V3-1.0.2403-x86_64-dvd.iso image is the installation image of the NingOS operating system. This section describes the procedure for installing the NingOS operating system on a server without an operating system installed.
1. After the ISO image is loaded, access the page for selecting a language.
2. Select a language (English (United States) in this example), and then click Continue.
Figure 11 Selecting a language
3. In the localization area, click Date and Time to set the system date and time. Select Asia as the region, Shanghai as the city, and then click Finish to return to the installation information summary page.
Figure 12 Setting the date and time for the system
4. Click the Keyboard link in the LOCALIZATION area. On the page that opens, set the keyboard layout to English (US).
Figure 13 Selecting a keyboard layout
5. For the Software Selection option in the SOFTWARE area, Virtualization Host is selected for the base environment by default, as shown in the following figure.
Figure 14 Virtualization host selected for base environment by default
6. Click the Installation Destination link in the SYSTEM area to access the INSTALLATION DESTINATION page. Select the destination disk in the Local Standard Disks area, and select the Custom option in the Storage Configuration area. Then, click Done to access the MANUAL PARTITIONING page.
Figure 15 Installation destination page
7. In the New mount points will use the following partitioning scheme list, select the standard partition scheme, and then click Click here to create them automatically.
Figure 16 Selecting a partitioning scheme
8. On the partition list page displayed after automatic partitioning, the /boot/efi partition exists only when the server installs the system in UEFI mode (if this partition does not exist, you do not need to add it manually).
Table 9 Created disk partitions
Mount point |
Capacity |
Applicable mode |
File system |
Remarks |
/home |
1024 MiB |
BIOS mode, UEFI mode |
ext4 |
Not less than 1024 MiB. |
/boot |
1024 MiB |
BIOS mode, UEFI mode |
ext4 |
Not less than 1024 MiB. |
swap |
1024 MiB |
BIOS mode, UEFI mode |
swap |
Not less than 1024 MiB. |
/ |
500 GiB |
BIOS mode, UEFI mode |
ext4 |
Configure as much capacity as possible. |
/boot/efi |
200 MiB |
UEFI mode |
EFI System Partition |
Not less than 200 MiB. |
9. Click Done. If the following message is displayed, create a BIOS Boot partition of 1 MiB. If no prompt message is displayed, you can skip this step.
Figure 17 Prompt for creating the BIOS Boot partition
10. The SUMMARY OF CHANGES window will open, as shown in the following figure. Click Accept Changes to return to the INSTALLATION SUMMARY page.
Figure 18 Summary of changes
11. Select the administrator account settings. Use either the root or admin user as the administrator account.
¡ To use the root user as the administrator account:
Click the Root Account link in the USER SETTINGS area to configure the root user as the administrator account.
Select the Enable Root Account option and set the root user password. Then, click Done to return to the INSTALLATION SUMMARY page.
Figure 19 Setting the root account
If you use the root user as the administrator account, the user has privileges of all operations, and the admin user will not be created.
Figure 20 Using the root user as the administrator account
¡ To use the admin user as the administrator account:
- When using the admin user as the administrator account, you must also set the root password. Set the root password first and then create the admin user; otherwise, SSH access for the root user will be disabled.
- In the USER SETTINGS area, click the Create User link. Select the Add administrative privileges to this user account (wheel group membership) option to make the account an administrator.
- As shown in the following figure, set the admin user password. Then, click Done to return to the INSTALLATION SUMMARY page.
Figure 21 Creating the admin user
Figure 22 The admin user is created
12. Click the Network & Host Name link in the SYSTEM area to access the NETWORK & HOST NAME page.
13. As shown in the following figure, enter the host name in the Host Name field and then click Apply.
Figure 23 Network & host name page
14. On the network and host name configuration page, you can configure the network card. NIC bonding allows you to bind multiple NICs to form a logical NIC for NIC redundancy, bandwidth expansion, and load balancing.
If you do not configure NIC bonding, click Configure and configure the NIC in the window that opens. On the General tab, select the Connect automatically with priority option, and keep the All users may connect to this network option selected, as shown in the following figure.
|
NOTE: Configure network ports as planned. · The network port IP for the simulation management network is used for communication with the DTN component. · The network port IP for the simulated device service network is used for service communication between simulated devices. Specify this IP address in the installation script in "Configuring simulation hosts." You do not need to specify this IP address in this section. · The network port IP for the node management network is used for routine maintenance of servers. |
Figure 24 General tab
15. The host supports dual protocol stacks.
¡ To configure an IPv4 address, click the IPv4 Settings tab, select Manual from the Method list, click Add in the addresses area, and configure the simulation management IPv4 address for the simulation host. After configuration is complete, click Save.
Figure 25 Configuring the simulation management network (IPv4 address) for the simulation host
¡ To configure only an IPv6 address, click the IPv4 Settings tab and select Disabled from the Method list. Then, click the IPv6 Settings tab, select Manual from the Method list, click Add in the addresses area, and configure the simulation management IPv6 address for the simulation host. After configuration is complete, click Save.
¡ In a dual-stack environment, configure both IPv4 and IPv6 addresses.
|
NOTE: · Before you configure an IPv6 address in the IPv6 single-stack environment, you must disable the IPv4 address that has been configured. · After you complete operating system installation, use the nmcli connection reload and nmcli connection up commands to restart the NICs. |
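A minimal sketch of the NIC restart mentioned in the note above, assuming the connection name is ens1f0 (replace it with the name of the NIC you configured):
[root@host01 ~]# nmcli connection reload
[root@host01 ~]# nmcli connection up ens1f0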
16. After you configure the network, manually enable the specified NICs. Click Done to return to the INSTALLATION SUMMARY page.
Figure 26 Enabling network card
17. Repeat steps 14 through 16 to configure node management IP addresses for other simulation hosts (network address pool: 192.168.10.110 to 192.168.10.120, taking 192.168.10.110 as an example).
18. Click Begin Installation to start installing the operating system.
19. After installation is completed, the server will automatically restart. The screen after restart is as shown in the following figure.
Figure 27 Installation completed
Configuring simulation hosts
IMPORTANT: · Executing the uninstallation script will restart the network service and tear down the SSH connection. To avoid service interruption, perform the operations through the remote console of the server/VM. · Configure the following settings on each simulation host. · If you select a non-root user as the login user or the root user is disabled, add sudo before every command. |
Single simulation host scenario
When only one simulation host is deployed, the simulation service network does not require a network port because communication between simulated devices occurs within the host. The deployment method is as follows:
1. Obtain the SeerEngine_DC_DTN_HOST installation package, upload it to the server, and decompress it. The SeerEngine_DC_DTN_HOST installation package name is SeerEngine_DC_DTN_HOST-version.zip (version is the software version number). This example uses software version E7101.
[root@host01 root]# unzip SeerEngine_DC_DTN_HOST-E7101.zip
2. Execute the chmod command to assign permissions to the user.
[root@host01 root]# chmod +x -R SeerEngine_DC_DTN_HOST-E7101
3. Enter the SeerEngine_DC_DTN_HOST-version/ directory decompressed from the SeerEngine_DC_DTN_HOST installation package and use the ./install.sh management_nic command to install the package. This command uses 3.0.0.3/16 as the service address for simulated devices. If this address conflicts with the network plan, you can execute the ./install.sh management_nic service_cidr command to perform the installation with a service address specified.
Parameters:
¡ management_nic: Name of the simulation management network adapter port.
¡ service_cidr: Inter-simulated device communication address.
Use the default address 3.0.0.3/16 as the service address to install the simulation host:
[root@host01 SeerEngine_DC_DTN_HOST-E7101]# ./install.sh ens1f0
Installing ...cd
check network service ok.
check libvirtd service ok.
check management bridge ok.
check sendip ok.
check vlan interface ok.
Complete!
Specify a specific address (taking 192.168.11.134/24 as an example) to install the simulation host:
[root@host01 SeerEngine_DC_DTN_HOST-E7101]# ./install.sh ens1f0 192.168.11.134/24
Installing ...cd
check network service ok.
check libvirtd service ok.
check management bridge ok.
check sendip ok.
check vlan interface ok.
Complete!
|
NOTE: · service_cidr represents the inter-simulated device communication address. The connection between the devices is implemented through a UDP tunnel by using this IP. · The system's default restart timeout for network service is five minutes. After the simulation host is deployed, the system will automatically change the restart timeout for network service to 15 minutes. · After completing the configuration of the simulation host, if you edit the host name, make sure the new host name also exists in the /etc/hosts domain name resolution file. If the host name does not exist in the file, add it manually. |
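As an illustration of the host name requirement in the note above, the following sketch assumes the host name was changed to host01 and the host's management IP is 192.168.12.134 (both are example values; use your own). If the grep command returns no output, append the mapping as shown:
[root@host01 ~]# grep host01 /etc/hosts
[root@host01 ~]# echo "192.168.12.134 host01" >> /etc/hosts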
Multiple simulation hosts scenario
To deploy multiple simulation hosts, the simulation service network requires a network port to ensure successful communication between simulated devices. The deployment method is as follows.
1. Obtain the SeerEngine_DC_DTN_HOST installation package, upload it to the server, and decompress it. The SeerEngine_DC_DTN_HOST installation package name is SeerEngine_DC_DTN_HOST-version.zip (version is the software version number). This example uses software version E7101.
[root@host01 root]# unzip SeerEngine_DC_DTN_HOST-E7101.zip
2. Execute the chmod command to assign permissions to the user.
[root@host01 root]# chmod +x -R SeerEngine_DC_DTN_HOST-E7101
3. Enter the SeerEngine_DC_DTN_HOST-version/ directory decompressed from the SeerEngine_DC_DTN_HOST installation package and use the ./install.sh management_nic service_nic vlan_start service_cidr command to perform the installation.
Parameters:
¡ management_nic: Name of the simulation management network adapter port.
¡ service_nic: Name of the simulation service network adapter port.
¡ vlan_start: Start VLAN ID.
¡ service_cidr: Inter-simulated device communication address.
[root@host01 SeerEngine_DC_DTN_HOST-E7101]# ./install.sh ens1f0 ens1f1 11 192.168.11.134/24
Installing ...cd
check network service ok.
check libvirtd service ok.
check management bridge ok.
check sendip ok.
check vlan interface ok.
Complete!
|
NOTE: · The VLAN range is [vlan_start, vlan_start+149], and all hosts must be consistent. Based on the VLAN, 150 VLAN subinterfaces will be created for the simulation network ports to facilitate inter-host communication. · If the script execution prompts "NIC {service_NIC_name} does not support 150 VLAN subinterfaces. Please select another service NIC for simulation," it indicates that the currently selected network port does not support configuring 150 VLAN subinterfaces. You need to select another service network port for the simulation. · service_cidr represents the communication address for inter-simulated devices. Device connections are established through a UDP tunnel, which will use this IP. Make sure the CIDR for multiple simulation hosts is in the same subnet. · The system's default network service restart timeout is 5 minutes. After the deployment of the simulation hosts, the system will automatically change the network service restart timeout to 15 minutes. · After completing the configuration of the simulation hosts, if you have modified the host name, make sure the new hostname is also present in the /etc/hosts domain name resolution file. If the host name does not exist in the file, add it manually. |
Deployment across Layer 3 networks
Network description
In this chapter, the controller management network, node management network, simulation management network, and simulated device service network share one management switch, on which the Layer 3 management networks for simulation are deployed.
|
NOTE: This section uses the deployment of simulation hosts on physical servers as an example. The deployment method for converged simulation hosts is the same as that for standalone simulation hosts. See the actual network configuration for specific details. |
Figure 28 Management network diagram
Table 10 IP planning for the simulation management network
Component/node name |
IP address plan |
Interfaces |
DTN component |
IP address: 192.168.15.133/24 (gateway address: 192.168.15.1) |
Ten-GigabitEthernet1/0/25, VLAN 40 |
Simulation host 1 |
IP address: 192.168.12.134/24 (gateway address: 192.168.12.1, NIC: ens1f0) |
Ten-GigabitEthernet1/0/26, VLAN 40 |
Simulation host 2 |
IP address: 192.168.12.135/24 (gateway address: 192.168.12.1, NIC: ens1f0) |
Ten-GigabitEthernet1/0/27, VLAN 40 |
Simulated device 1 |
IP address: 192.168.20.136/24 (gateway address: 192.168.20.1) |
|
Simulated device 2 |
IP address: 192.168.20.137/24 (gateway address: 192.168.20.1) |
|
Simulated device 3 |
IP address: 192.168.21.134/24 (gateway address: 192.168.21.1) |
|
Simulated device 4 |
IP address: 192.168.21.135/24 (gateway address: 192.168.21.1) |
|
IPv4 Management Network Address Pool |
IP address: 2.0.0.0/22 (gateway address: 2.0.0.1) |
|
|
NOTE: In the Layer 3 management network, use the management network address pool (for IPv6 management networks, the IPv6 management network address pool is required). The configuration of the management network address pool can be performed by following these steps: Log in to the controller, navigate to the Automation > Data Center Network > Simulation > Build Simulation Network page, click Simulation Network Pre-configuration, select the parameter settings tab, and configure the corresponding management network address pool in the address pool information section. |
Table 11 Simulated device service network address planning
Component/node name |
IP address plan |
Interfaces |
Simulation host 1 |
IP address: 192.168.11.134/24 (gateway address: 192.168.11.1, NIC: ens1f1) |
Ten-GigabitEthernet1/0/28, VLAN 30 |
Simulation host 2 |
IP address: 192.168.11.135/24 (gateway address: 192.168.11.1, NIC: ens1f1) |
Ten-GigabitEthernet1/0/29, VLAN 30 |
Table 12 IP planning for the node management network
Component/node name |
IP address plan |
Interfaces |
Controller |
IP address: 192.168.10.110/24 (gateway address: 192.168.10.1) |
Ten-GigabitEthernet1/0/21, VLAN 10 |
DTN component |
IP address: 192.168.10.111/24 (gateway address: 192.168.10.1) |
Ten-GigabitEthernet1/0/22, VLAN 10 |
Simulation host 1 |
IP address: 192.168.10.112/24 (gateway address: 192.168.10.1) |
Ten-GigabitEthernet1/0/23, VLAN 10 |
Simulation host 2 |
IP address: 192.168.10.113/24 (gateway address: 192.168.10.1) |
Ten-GigabitEthernet1/0/24, VLAN 10 |
Configuration example
In the simulation environment, the interfaces that connect the management switch to the same type of network of the DTN component and different simulation hosts must belong to the same VLAN. More specifically, the interfaces that connect to the simulation management network belong to VLAN 40, the interfaces that connect to the simulated device service network belong to VLAN 30, and the interfaces that connect to the node management network belong to VLAN 10.
Perform the following tasks on the management switch:
1. Create VLANs 40, 30, and 10 for the simulation management network, simulated device service network, and node management network, respectively.
[device] vlan 40
[device-vlan40] quit
[device] vlan 30
[device-vlan30] quit
[device] vlan 10
[device-vlan10] quit
2. Assign to VLAN 40 the interface connecting the management switch to the simulation management network of the DTN component, Ten-GigabitEthernet 1/0/25 in this example. Assign to VLAN 10 the interface connecting the management switch to the node management network of the DTN component, Ten-GigabitEthernet 1/0/22 in this example.
[device] interface Ten-GigabitEthernet1/0/25
[device-Ten-GigabitEthernet1/0/25] port link-mode bridge
[device-Ten-GigabitEthernet1/0/25] port access vlan 40
[device-Ten-GigabitEthernet1/0/25] quit
[device] interface Ten-GigabitEthernet1/0/22
[device-Ten-GigabitEthernet1/0/22] port link-mode bridge
[device-Ten-GigabitEthernet1/0/22] port access vlan 10
[device-Ten-GigabitEthernet1/0/22] quit
3. Assign the interface (Ten-GigabitEthernet 1/0/26 in this example) connecting the management switch to the simulation management network of simulation host 1 to VLAN 40. Assign the interface (Ten-GigabitEthernet 1/0/28 in this example) connecting the management switch to the simulation service network of simulation host 1 to VLAN 30. Assign the interface (Ten-GigabitEthernet 1/0/23 in this example) connecting the management switch to the node management network of simulation host 1 to VLAN 10.
[device] interface Ten-GigabitEthernet1/0/26
[device-Ten-GigabitEthernet1/0/26] port link-mode bridge
[device-Ten-GigabitEthernet1/0/26] port access vlan 40
[device-Ten-GigabitEthernet1/0/26] quit
[device] interface Ten-GigabitEthernet1/0/28
[device-Ten-GigabitEthernet1/0/28] port link-mode bridge
[device-Ten-GigabitEthernet1/0/28] port access vlan 30
[device-Ten-GigabitEthernet1/0/28] quit
[device] interface Ten-GigabitEthernet1/0/23
[device-Ten-GigabitEthernet1/0/23] port link-mode bridge
[device-Ten-GigabitEthernet1/0/23] port access vlan 10
[device-Ten-GigabitEthernet1/0/23] quit
4. Assign the interface (Ten-GigabitEthernet 1/0/27 in this example) connecting the management switch to the simulation management network of simulation host 2 to VLAN 40. Assign the interface (Ten-GigabitEthernet 1/0/29 in this example) connecting the management switch to the simulation service network of simulation host 2 to VLAN 30. Assign the interface (Ten-GigabitEthernet 1/0/24 in this example) connecting the management switch to the node management network of simulation host 2 to VLAN 10.
[device] interface Ten-GigabitEthernet1/0/27
[device-Ten-GigabitEthernet1/0/27] port link-mode bridge
[device-Ten-GigabitEthernet1/0/27] port access vlan 40
[device-Ten-GigabitEthernet1/0/27] quit
[device] interface Ten-GigabitEthernet1/0/29
[device-Ten-GigabitEthernet1/0/29] port link-mode bridge
[device-Ten-GigabitEthernet1/0/29] port access vlan 30
[device-Ten-GigabitEthernet1/0/29] quit
[device] interface Ten-GigabitEthernet1/0/24
[device-Ten-GigabitEthernet1/0/24] port link-mode bridge
[device-Ten-GigabitEthernet1/0/24] port access vlan 10
[device-Ten-GigabitEthernet1/0/24] quit
5. Create a VPN instance.
[device] ip vpn-instance simulation
[device-vpn-instance-simulation] quit
6. Create a VLAN interface, and bind it to the VPN instance. Assign all gateway IP addresses to the VLAN interface.
[device] interface Vlan-interface40
[device-Vlan-interface40] ip binding vpn-instance simulation
[device-Vlan-interface40] ip address 192.168.12.1 255.255.255.0
[device-Vlan-interface40] ip address 192.168.15.1 255.255.255.0 sub
[device-Vlan-interface40] ip address 192.168.20.1 255.255.255.0 sub
[device-Vlan-interface40] ip address 192.168.21.1 255.255.255.0 sub
[device-Vlan-interface40] ip address 2.0.0.1 255.255.255.0 sub
[device-Vlan-interface40] quit
IMPORTANT: · When production physical devices use dynamic routing protocols (including but not limited to OSPF, IS-IS, BGP) to advertise management IP routes, the VLAN interface (VLAN 40) must be configured with the same routing protocol. · When the management port of a production physical device is of the loopback type and the subnet mask length of the management IPv4 address is 32, configure the gateway IP on the management switch according to Class A (8-bit mask), Class B (16-bit mask), or Class C (24-bit mask) addressing. · When production physical devices use OSPF to advertise management IP routes, the VLAN interface (VLAN 40) must be configured with the ospf peer sub-address enable command. |
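As a sketch of the routing requirement in the note above, the following assumes the production network advertises management IP routes through OSPF process 1 in area 0; the process ID, area, and any authentication settings must match the production network, and this is only an illustrative example:
[device] ospf 1 vpn-instance simulation
[device-ospf-1] area 0.0.0.0
[device-ospf-1-area-0.0.0.0] quit
[device-ospf-1] quit
[device] interface Vlan-interface40
[device-Vlan-interface40] ospf 1 area 0.0.0.0
[device-Vlan-interface40] ospf peer sub-address enable
[device-Vlan-interface40] quit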
7. When the simulation network uses the license server of the controller (in this example, the license server is deployed with the controller), configure the following static route on the management switch. In this example, 192.168.10.110 is the IP address of the server where the license server resides, and 192.168.15.133 is the IP address of the DTN component.
[device] ip route-static vpn-instance simulation 192.168.10.110 32 192.168.15.133
When the management networks of the simulation hosts and the DTN component are deployed across Layer 3, the following configuration must be performed on simulation host 1 and simulation host 2.
8. Add the static route to the DTN component management network.
[root@host01 ~]# route add -host 192.168.15.133 gw 192.168.12.1
9. Make the static route to the DTN component management network persistent.
[root@host01 ~]# cd /etc/sysconfig/network-scripts/
[root@host01 network-scripts]# vi route-mge_bridge
Enter 192.168.15.133/32 via 192.168.12.1 in the file, then save and exit.
[root@host01 network-scripts]# cat route-mge_bridge
192.168.15.133/32 via 192.168.12.1
Configuring basic simulation services
|
NOTE: · Make sure SeerEngine-DC and DTN have been deployed. For the deployment process, see H3C SeerEngine-DC Installation Guide (Unified Platform) and H3C SeerEngine-DC Simulation Installation Guide. · In the current software version, system administrators and tenant administrators can configure simulation services. |
Configuration workflow
Figure 29 Configuration flowchart
Procedure
Preconfigure the simulation network
Adding simulation hosts
1. Log in to the controller.
2. Access the Automation > Data Center Network > Simulation > Build Simulation Network page.
3. Click Preconfigure Simulation Network. The Manage Simulation Hosts page opens.
4. Click Add. In the dialog box that opens, configure the host name, IP address, username, and password.
Figure 30 Adding simulation hosts
5. Click Apply.
|
NOTE: · A simulation host can only be managed by one controller. · The controller supports both root and non-root users to manage simulation hosts. When incorporating simulation hosts as a non-root user, you must add the non-root user privilege before incorporating the hosts as follows: Execute the sudo ./addPermission.sh username command in the SeerEngine_DC_DTN_HOST-version/tool/ directory decompressed from the SeerEngine_DC_DTN_HOST package. · If you edit the configuration of a simulation host, add the simulation host on the page again. |
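A minimal sketch of the non-root privilege command described in the note above, assuming package version E7101 (as used earlier in this document) and an example non-root username admin (replace with your own username):
[root@host01 ~]# cd SeerEngine_DC_DTN_HOST-E7101/tool
[root@host01 tool]# sudo ./addPermission.sh admin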
Uploading simulation images
1. Access the Automation > Data Center Network > Simulation > Build Simulation Network page, click Simulation Network Preconfiguration, and then click the Simulation Image Management tab.
2. Click Upload Image. In the dialog box that opens, select the type of the image to be uploaded and image of the corresponding type, and then click Upload.
Figure 31 Uploading simulation images
Configuring parameters-license server deployment
The license server provides licensing services for simulated devices. The following deployment modes are supported:
· (Recommended) Use a license server that has been deployed using the controller (the IP protocol type of the license server must match the IP protocol type of the DTN component MACVLAN network).
· Deploy a license server for each simulation host. If there are multiple simulation hosts, upload the License Server installation package to any one of the servers.
|
NOTE: To install the license server separately, see H3C License Server Installation Guide. |
Configuring parameters
1. Navigate to the Automation > Data Center Networks > Simulation > Build Simulation Network page. Click Preconfigure. On the page that opens, click the Parameters tab.
2. On this page, you can view and modify basic information, device information, UDP tunnel information, and address pool information, and configure license server parameters.
Figure 32 Configure parameters
|
NOTE: As a best practice, select the flavor named 1_cpu_4096MB_memory_2048MB_storage. The flavor named 1_cpu_2048MB_memory_2048MB_storage is applicable in the scenario where the number of each type of logical resources (vRouters, vNetworks, or vSubnets) is not greater than 1000. |
3. Click OK.
Adding reserved ports in the configuration file
To prevent the configured UDP ports from being occupied by other services, configure this port range as reserved ports on the simulation host as follows:
1. Access the backend of the simulation host, open the /etc/sysctl.conf configuration file by using the vi command, and add the following configuration:
[root@node1 ~]# vi /etc/sysctl.conf
...
net.ipv4.ip_local_reserved_ports=10000-15000
2. If the UDP port range has been modified on the page, update the reserved port range in the sysctl.conf configuration file to match the currently configured UDP port range on the page, and save the changes.
3. Execute the /sbin/sysctl -p command to make the changes take effect.
4. Execute the cat /proc/sys/net/ipv4/ip_local_reserved_ports command to view the reserved ports. If the returned range matches the modified range, the modification is complete.
Building a simulation network
Online building
1. Log in to the controller.
2. Access the Automation>Data Center Network>Simulation>Build Simulation Network page.
3. Click Build Simulation Network to access the build simulation network process page. Select online data as the data source, and then click Next.
Figure 33 Selecting a data source
4. Select fabrics as needed, and then click Start Building to start building the simulation network. You can select multiple fabrics.
Figure 34 Selecting fabrics
5. After a simulation network is built, the network is displayed as Built on this page.
Figure 35 Built a simulation network successfully
6. The successfully built simulation network allows you to view simulation device information:
¡ The simulation status is Active.
¡ The device model is displayed correctly on the production network and the simulation network.
The VMs in the simulation network model are created on the added simulation hosts. If multiple hosts are available, the controller selects the host with optimal resources for creating VMs.
Offline building
To use the offline data to build a simulation network, first back up and restore the environment, and obtain the link information and device configuration files before building a simulation network.
1. Back up the SeerEngine-DC environment
a. Log in to the controller that is operating normally. Navigate to the System > Emergency Management > Backup & Restore page.
b. Click Start Backup. In the dialog box that opens, select SeerEngine-DC. Click Backup to start backup.
Figure 36 Backing up the controller
c. After the backup is completed, click Download in the Actions column for the backup file to download it.
2. Obtain the device configuration file
a. Log in to the controller that is operating normally. Navigate to the Automation > Configuration Deployment > Device Maintenance > Physical Devices page.
b. Select all devices, and click Manual Backup.
Figure 37 Manually backing up all device information
c. Click the icon in the Actions column for a device. The configuration file management page opens. Click Download to download the configuration file of the specified device to your local host.
Figure 38 Configuration file management page
d. Compress all downloaded configuration files into one .zip package. The .zip package name is not limited. The .zip package is the device configuration file.
3. Obtain the link information file through either of the following methods:
Method 1:
a. In the address bar of the browser, enter http://ip_address:port/sdn/ctl/rest/topologydata/all_link_info and then press Enter. Link information of all fabrics in the environment will be displayed. ip_address represents the IP address of the controller. port represents the port number.
b. Copy the obtained link information to a .txt file, and save the file. The file name is not limited. The file is the link information file.
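If you prefer the command line for Method 1, the following sketch retrieves the link information with curl and saves it directly to a .txt file. ip_address and port are the controller address and port described above, and authentication parameters might be required depending on your deployment:
[root@host01 ~]# curl -s http://ip_address:port/sdn/ctl/rest/topologydata/all_link_info > all_link_info.txt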
Method 2:
a. Log in to the controller.
b. Access the Automation>Data Center Network>Simulation>Build Simulation Network page.
Figure 39 Building a simulation network
c. Click Build Simulation Network to access the build simulation network process page, and select offline data as the data source.
d. On the link information tab, click Import. In the dialog box that opens, click Download Link Template to download the link template.
Figure 40 Downloading link template
e. Edit the information in the template, and then upload the edited link information file.
4. Restore the SeerEngine-DC environment
a. Log in to the environment where you want to build a simulation network based on offline data.
b. Navigate to the System > Emergency Management > Backup & Restore page. Use the backup file to restore the environment.
Figure 41 Restore the environment
5. Build a simulation network
a. Log in to the controller.
b. Access the Automation>Data Center Network>Simulation>Build Simulation Network page.
Figure 42 Building a simulation network
c. Click Build Simulation Network to access the build simulation network process page, and select offline data as the data source.
d. On the Device Info page, click Import. In the dialog box that opens, import and upload the device configuration file.
Figure 43 Importing and uploading the device configuration file
e. On the Down Link Info page, click Import. In the dialog box that opens, import and upload the link information file. Skip this step if the link information has already been imported.
f. Click Next.
g. Select fabrics as needed, and click Start Building to start building the simulation network. You can select multiple fabrics.
h. After a simulation network is built, the network is displayed as Built on this page.
Figure 44 Built a simulation network successfully
i. After the simulation network is built successfully, you can view the simulated device information:
- The simulation status is Active.
- The device model is displayed correctly on the production network and the simulation network.
The VMs in the simulation network model are created on the added simulation hosts. If multiple hosts are available, the controller selects the host with optimal resources for creating VMs.
Tenant service simulation
Enabling tenant design mode and simulation baselining
1. On the top navigation bar, click Automation.
2. From the navigation pane, select Data Center Networks > Simulation > Tenant Service Simulation. You are placed on the Enable Design Mode page by default.
3. Enable design mode for the specified tenant.
When the design mode is enabled, the tenant icon changes to the enabled state; when the design mode is disabled, the tenant icon changes to the disabled state.
|
NOTE: You can enable the design mode and then perform tenant service simulation only when the simulation network is built normally. |
Figure 45 Enable the tenant design mode
4. Click Next to access the Simulation Baselining page.
5. On the simulation baseline page, click Execute to start the simulation baseline process.
|
NOTE: As a best practice to provide baseline values for network-wide impact analysis results, perform simulation baselining after you enable the design mode for a tenant and deploy configuration. |
Figure 46 Simulation baselining
Logical network resource orchestration
1. After you enable the design mode and execute simulation baselining, click Next. You are placed on the Service Changes > Logical Networks page by default.
2. Drag a resource icon in the Resources area to the canvas area. Then, a node of this resource is generated in the canvas area, and the configuration panel for the resource node opens on the right. In the canvas area, you can adjust node locations, bind/unbind resource, and zoom in/out the topology.
Figure 47 Logical networks
Application network resource orchestration
1. After you enable the design mode and execute simulation baselining, click Next. You are placed on the Logical Networks page by default.
2. Click the Application Network tab to enter the application network page, where you can configure the application network service resources.
Figure 48 Application network page
Public network resource orchestration
After you enable the design mode and execute simulation baselining, click Next.
You are placed on the Logical Networks page by default. Click the Public Network tab to enter the service resource management page and configure the service resources.
Figure 49 Public network page
Simulation & evaluation
After the resource orchestration is completed, click Next to access the simulation evaluation page. On this page, you can sequentially perform operations such as simulation environment checks, capacity simulation, connectivity simulation, and network-wide impact analysis.
Prepare for evaluation
This feature quantifies the evaluation results of various dimensions of the simulation environment through a simulation report scoring matrix, helping users understand the assessment outcomes more intuitively.
Figure 50 Prepare for evaluation
· Simulation network check—Evaluate the health of the simulation networks based on factors such as CPU and memory usage of simulated devices.
Figure 51 Simulation network check
· Data consistency check—Evaluate the service differences between production and simulation networks.
Figure 52 Data consistency check
· Service change analysis—Analyze from the perspective of whether the service changes are reasonable.
Figure 53 Service change analysis
· Network-wide impact analysis—Evaluate the existence of baseline values for network-wide impact analysis.
Simulate capacity
This feature calculates the device resource consumption and configuration changes caused by this service change and presents them as differences in multiple views.
· Resource capacity
The resource capacity evaluation function evaluates the resource consumption resulting from this service change. By analyzing the total capacity, consumed capacity, and capacity to be consumed of physical device resources on the network, this feature determines whether this service change will exhaust device resources.
· Configuration changes
The configuration changes view shows the differences in NETCONF and CLI configurations before and after the service change.
IMPORTANT: Configuration deployment is not allowed when the capacity assessment is incomplete or expired. |
Figure 54 Simulate capacity
Simulate connectivity
On this page, you can manually select ports for detection according to service requirements. The connectivity simulation feature simulates TCP, UDP, and ICMP protocol packets to detect connectivity between ports.
Figure 55 Connectivity detection
Network-wide impact analysis
From the perspective of the overall service, network-wide impact analysis can quickly assess the impact of service changes on the connectivity of networks, and identify the links with state changes. This feature compares the initial state results before this simulation with the network-wide impact analysis results of this simulation, and outputs the comparison results. Then, you can quickly view the link state changes of the entire network.
In the current software version, network-wide impact analysis supports multi-tenant, multi-port filters (vRouters, vNetworks, and subnets), and multiple protocols (ICMP, TCP, and UDP).
Figure 56 Network-wide impact analysis
Configuration deployment
You can click Deploy Configuration to deploy the service configuration to real devices when the simulation evaluation result is as expected. Additionally, you can view details on the deployment details page.
Figure 57 Viewing deployment details
Register the software
After DTN is installed, you can use all functions during a 180-day trial period. To continue using the software after the trial period expires, you must obtain a license.
Register the software as follows:
1. Obtain the device information file: Log in to the license server, and obtain the device information file of the license server.
2. Obtain the activated license: Upload the license file to the license server and set up a connection between DTN and the license server. Then, the license server will authorize the license for DTN.
|
NOTE: After you apply for a license file, the license file might become invalid if the server from which the device information file was obtained has network adapter changes (such as disabling the network adapter, enabling a new network adapter, replacing the network adapter, or network adapter damage) or hardware changes such as CPU replacement. |
Installing the license on the license server
For more information about requesting and installing the license, see H3C Software Product Remote Licensing Guide.
Obtaining the DTN component license
After you install the license for the product on the license server, connect to the license server from the license management page to obtain the license. To do that, perform the following tasks:
1. Log in to Unified Platform. On the top navigation bar, click System. From the navigation pane, select License Management > License Information.
2. Configure the license server parameters on the page. The following table describes each parameter.
Table 13 License server parameters

Parameter | Description
IP address | Specify the IP address configured on the license server for communication with the SeerEngine-DC cluster nodes.
Port number | Specify the service port number of the license server. The default value is 5555.
Username | Specify the username configured on the license server.
Password | Specify the password of the user configured on the license server.
3. After the connection to the license server is established, the DTN component automatically obtains licensing information. If the component fails to obtain the license, first verify that the license server is reachable, as shown in the example below.
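The following is a minimal reachability sketch you can run from a controller node, assuming the ping and nc (netcat) utilities are available. The IP address 192.168.10.50 is a placeholder for your license server address, and 5555 is the default service port.
# Basic ICMP reachability check against the license server (placeholder address)
ping -c 4 192.168.10.50
# Test TCP reachability to the license server service port (5555 by default)
nc -zv 192.168.10.50 5555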
Obtaining the simulated device license
After installing the license for the product on the license server, connect to the license server on the license management page of the DTN component to obtain the license.
1. Log in to Unified Platform.
2. Access the Automation > Data Center Network > Simulation > Build Simulation Network page.
3. Configure the license server parameters on the page. The following table describes each parameter.
Table 14 License server parameters

Parameter | Description
IP address | Specify the IP address configured on the license server for communication with the SeerEngine-DC cluster nodes.
Port number | Specify the service port number of the license server. The default value is 5555.
Username | Specify the username configured on the license server.
Password | Specify the password of the user configured on the license server.
4. The simulated device automatically obtains licensing information after connecting to the license server.
Perform backup & restoration
The DTN component is an independent microservice of the controller. On Unified Platform, backing up or restoring the controller will also back up or restore the DTN component. For more information, see H3C Unified Platform Deployment Guide.
Upgrading and uninstalling software
This chapter describes the upgrade and uninstallation procedures of the DTN component. For information about the upgrade and uninstallation procedures of Unified Platform, see H3C Unified Platform Deployment Guide.
Upgrading DTN
Upgrading the DTN component
Restrictions and guidelines
· On Matrix, you can upgrade a component with its configuration retained. Upgrading a component might cause service interruption, so perform this operation with caution.
· After upgrading the DTN component, check the simulation software version. If updates are available, upgrade the simulation software accordingly.
· Upgrade the DC component first and then the DTN component. The upgraded DTN version must be consistent with the DC version.
· If the webpage for building simulation networks cannot display information correctly after a DTN component upgrade, clear the cache in your Web browser and log in again.
· When you upgrade the DTN component from version E6102 or earlier to E6103 or later, you must reinstall the operating system on the simulation host and reconfigure the host. For the host installation and configuration procedure, see "Independent deployment of a simulation host." After the upgrade, delete the original host from the simulation network and add it again.
· After you upgrade the DTN component from version E6202 or earlier to E6203 or later, uninstall the simulation host and reconfigure it. For the host uninstallation and configuration procedure, see "Uninstalling a simulation host." After the upgrade, delete the original host from the simulation network and add it again.
· After upgrading the DTN component from version E6302 or earlier to E6302 or later, all simulation networks in the Fabric must be deleted and then rebuilt.
· The DTN component does not support direct upgrade from versions earlier than E6501 to E6501 or later. The old version must be uninstalled and the DTN component reinstalled.
Procedure
1. Log in to Matrix.
2. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.
3. Click the icon to the left of the data center scenario to expand the information.
4. Click the icon in the Actions column for the DTN component to access the upgrade page.
5. Click Upload. In the window that opens, select the target installation package.
Figure 58 Uploading the target installation package
6. Select the uploaded installation package, and then click Upgrade to upgrade the DTN component.
Figure 59 Upgrading the DTN component
7. If the component upgrade fails, click Roll Back to roll back the components to the versions before the upgrade.
Upgrading the DTN component through a hot patch
Restrictions and guidelines
On Matrix, you can perform a hot patch upgrade for a component with its configuration retained. Perform this operation with caution, because a hot patch upgrade might interrupt controller services.
Procedure
1. Log in to Matrix. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.
2. Click the icon to the left of the data center scenario to expand the information.
3. Click the icon in the Actions column for the DTN component to access the hot patch management page.
4. Click Upload, and then select the target hot patch installation package.
Figure 60 Uploading the hot patch installation package
5. Select the DTN hot patch installation package, and then click Upgrade.
6. If the hot patch upgrade fails, you can roll back the component to the version before the upgrade or terminate the upgrade.
Uninstalling DTN
Uninstalling the DTN component
The DTN component supports separate uninstallation. Uninstalling the DTN component will not uninstall SeerEngine-DC.
To uninstall the DTN component:
1. Log in to Matrix. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.
2. Click the icon to the left of the data center scenario to expand the information.
3. Select the checkbox on the left of the DTN component, and click Uninstall to uninstall the component.
Figure 61 Uninstalling the DTN component
Uninstalling the DTN hot patch
1. Log in to Matrix. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.
2. Click the icon to the left of the data center scenario to expand the information.
3. Click the icon in the Actions column for the DTN component to access the hot patch management page.
Figure 62 Accessing the hot patch management page
4. Click Uninstall, select the baseline version to which the component will be rolled back, and verify that the baseline installation package exists.
5. Click OK.
Upgrading a simulation host
1. Obtain the SeerEngine_DC_DTN_HOST installation package, upload it to the server, and then decompress it. The SeerEngine_DC_DTN_HOST installation package name is SeerEngine_DC_DTN_HOST-version.zip, where version represents the software version number. In this example, the version number is E7101.
[root@host01 root]# unzip SeerEngine_DC_DTN_HOST-E7101.zip
2. Execute the chmod command to assign execute permission to the installation files.
[root@host01 root]# chmod +x -R SeerEngine_DC_DTN_HOST-E7101
3. Access the SeerEngine_DC_DTN_HOST-version/ directory decompressed from the SeerEngine_DC_DTN_HOST installation package, and then execute the ./upgrade.sh command.
[root@host01 SeerEngine_DC_DTN_HOST-E7101]# ./upgrade.sh
check network service ok.
check libvirtd service ok.
check management bridge ok.
check sendip ok.
check vlan interface ok.
Complete!
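After the upgrade script completes, you can optionally confirm on the simulation host that the services and interfaces the script checked are still healthy. This is a minimal sketch using standard Linux tools; bridge and interface names vary by deployment.
# Confirm that the libvirtd service is active after the upgrade
systemctl is-active libvirtd
# List Linux bridges to confirm the management bridge is still present (names vary by deployment)
ip -brief link show type bridge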
NOTE: After upgrading the DTN component from version E6202 or earlier to version E6203 or later, uninstall the simulation host and reconfigure it. After the upgrade, you must delete the original host from the simulation network and add it again.
Uninstalling a simulation host
IMPORTANT:
· Executing the uninstallation script will restart the network service and tear down the SSH connection. To avoid service interruption, perform the operations through the remote console of the server/VM.
· To uninstall a simulation host of E6202 or an earlier version, execute the ./uninstall.sh management_nic service_nic command in the specified directory.
Enter the SeerEngine_DC_DTN_HOST-version/ directory, and then execute the ./uninstall.sh command.
[root@host01 SeerEngine_DC_DTN_HOST-E7101]# ./uninstall.sh
Uninstalling ...
Bridge rollback succeeded.
Restarting network,please wait.
Complete!
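Because the uninstallation script restarts the network service and tears down SSH sessions, you might want to confirm from another host that the simulation host is reachable again after the script completes. This is a minimal sketch; 192.168.12.20 is a placeholder for the simulation host's management address.
# Run from another host after the uninstallation completes (placeholder address)
ping -c 4 192.168.12.20
# Confirm that SSH access to the simulation host has been restored
ssh root@192.168.12.20 'echo ok'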
Network changes
After the controller is deployed, if IP address conflicts occur on the network or the overall network plan must change (for example, data center relocation or subnet mask changes), you can edit the component's network settings. This chapter describes the network changes for the DTN component.
IMPORTANT:
· In a remote disaster recovery scenario, before making network changes to the DTN component, you must first remove the disaster recovery system.
· Editing component network settings will cause service interruption. Please be cautious.
Editing network settings
1. Log in to Matrix. On the top navigation bar, click DEPLOY. From the navigation pane, select Convergence Deployment.
2. Click the icon to the left of the data center scenario.
3. Click the icon in the Actions column for the DTN component.
4. Select the target network.
5. Click Next to access the network binding page. Select the target network and subnet.
6. Click Next to verify the configuration.
7. Verify the configuration, and then click OK to deploy the configuration. The page will display the network change progress.
8. If the network change fails, you can roll back the network to its previous state, or exit the network change process.
Tasks after network changes
After network changes, some IP address-related configuration must be edited manually.
Network check
After the DTN network changes, make sure the DTN component is reachable from the simulation hosts.
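The following is a minimal reachability sketch you can run from a simulation host; 192.168.12.100 is a placeholder for the DTN component's address on the simulation management network.
# Run on the simulation host; replace the placeholder with the actual DTN component address
ping -c 4 192.168.12.100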