H3C SeerEngine-DC Controller Simulation Network Deployment Guide-E62xx-5W200


Introduction

In the DC scenario, SeerEngine-DC services are complex and difficult to operate. Even after complicated operations, you might not achieve the expected results, wasting considerable human and material resources. Therefore, rehearse a service before deploying it to the real network. A rehearsal exposes risks in advance and minimizes the risks to the production environment. The simulation function is introduced for this purpose. It simulates a service and estimates resource consumption before you deploy the service, helping you determine whether the current service orchestration can achieve the expected effect, whether it will affect existing services, and how many device resources will be used.

The simulation function provides the following features:

·     Simulation network—The simulation network model is built through vSwitches as a 1:1 replica of the real network. The simulation system is built on top of the simulation network model and requires highly automated management.

·     Tenant service simulation—This function mainly orchestrates and configures the logical network and application network. It simulates a service and estimates resource consumption before you deploy the service, helping you determine whether the current service orchestration can achieve the expected effect and whether it will affect existing services. This function includes capacity simulation, connectivity simulation, and network-wide impact analysis. You can deploy the service configuration to real devices when the simulation evaluation result is as expected.

·     Simulation records—Displays the simulation records of users and provides the advanced search function.

This document describes how to deploy the DTN hosts and build a simulation network on the controller.

 


Environment setup workflow

Table 1 shows the workflow to set up a simulation environment.

Table 1 Environment deployment workflow

Step | Tasks
Deploy the Unified Platform | See H3C Unified Platform Deployment Guide.
Deploy the SeerEngine-DC and DTN components | See H3C SeerEngine-DC Installation Guide (Unified Platform).
Plan the network | Plan network topology. Plan the IP address assignment scheme.
Deploy DTN hosts | Install the operating system. Virtualize network ports.
Deploy the simulation service on the controller | Preconfigure the simulation network. Build a simulation network. Simulate the tenant service.

 

 


Plan the network

Plan network topology

A simulation network involves four types of networks: the node management network, controller management network, simulation management network, and simulated device service network.

·     Node management network—Network over which you can log in to servers to perform routine maintenance.

·     Controller management network—Network for cluster communication between controllers and device management.

·     Simulation management network—Network over which the digital twin network (DTN) microservice component and DTN hosts exchange management information.

·     Simulated device service network—Network over which the DTN hosts exchange service data.

Before you deploy the simulation system, plan the simulation management network and simulated device service network.

Figure 1 Typical simulation network topology design for the Cloud DC scenario in non-remote disaster recovery mode

 

CAUTION:

·     If the controller management network and the simulation management network use the same management switch, configure VPN instances on the management switch to isolate the simulation network from the production environment and prevent IP address conflicts from affecting services. If the two networks use different management switches, physically isolate these switches.

·     Configure routes to provide Layer 3 connectivity between simulation management IPs and simulated device management IPs.
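Once a DTN host is online, you can optionally verify Layer 3 reachability from that host with standard Linux tools. The following is only a verification sketch; 192.0.2.10 is a placeholder for the management IP of a simulated device and must be replaced with a real address from your plan.

# Optional verification sketch run on a DTN host
# (192.0.2.10 is a placeholder for a simulated device management IP)
ping -c 3 192.0.2.10
# Display the route the host would use to reach that address
ip route get 192.0.2.10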

 

Plan the IP address assignment scheme

As a best practice, use Table 2 to calculate the minimum number of IP addresses on subnets in each network for deployment of a SeerEngine-DC controller cluster and DTN Manager.

Table 2 Number of addresses in subnet IP address pools

Component/Node name | Network name (type) | Max number of cluster members | Default number of cluster members | Calculation method | Remarks
SeerEngine-DC | Controller management network (MACVLAN) | 32 | 3 | 1 x cluster member count + 1 (cluster IP) | N/A
DTN component | Simulation management network (MACVLAN) | 1 | 1 | Single-node deployment, which requires only one IP | Used by the simulation microservice deployed on the controller node
DTN host | Simulation management network | 2 × the number of DTN hosts | 2 × the number of DTN hosts | 2 × the number of DTN hosts | Used by DTN hosts
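For example, the following sketch applies the calculation methods in Table 2 to a hypothetical deployment with three controller cluster members and five DTN hosts (both counts are examples only):

# Hypothetical example: 3 controller cluster members and 5 DTN hosts
CLUSTER_MEMBERS=3
DTN_HOSTS=5
# Controller management network: one IP per cluster member plus one cluster IP
echo $(( CLUSTER_MEMBERS + 1 ))     # 4 addresses
# Simulation management network for the DTN component: single-node deployment
echo 1                              # 1 address
# Simulation management network for the DTN hosts: two IPs per host
echo $(( 2 * DTN_HOSTS ))           # 10 addresses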

 

This document uses the IP address plan in Table 3 as an example.

Table 3 IP address plan example

Component/node name | Network name (type) | IP address
SeerEngine-DC | Controller management network (MACVLAN) | Subnet: 192.168.12.0/24 (gateway address: 192.168.12.1). Network address pool: 192.168.12.101 to 192.168.12.132.
DTN component | Simulation management network (MACVLAN) | Subnet: 192.168.12.0/24 (gateway address: 192.168.12.1). Network address pool: 192.168.12.133 to 192.168.12.133.
DTN host | Simulation management network | Network address pool: 192.168.12.134 to 192.168.12.144.
DTN host | Node management network | Network address pool: 192.168.10.110 to 192.168.10.120.

 

 


Deploy DTN hosts

Server requirements

Hardware requirements

For the hardware requirements for the DTN hosts and DTN components, see H3C SeerEngine-DC Installation Guide (Unified Platform).

Software requirements

The simulation hosts must run an operating system that meets the requirements in Table 4.

Table 4 Operating systems and versions supported by the host

OS name | Version number | Kernel version
H3Linux | V1.3.1 | 5.10

 

Install the operating system

CAUTION:

·     Before you install H3Linux on a server, back up the server data. The installation replaces the original OS (if any) on the server and removes all existing data.

·     Make sure the server supports installation of CentOS 7.6 or a later operating system.

 

The H3Linux_K510_version.iso image (where version is the version number) is the H3Linux operating system installation package. The following procedure uses a server without an OS installed as an example to describe how to install the H3Linux_K510_version.iso image.

1.     Obtain the required H3Linux_K510_version.iso image in ISO format.

2.     Access the remote console of the server, and then mount the ISO image as a virtual optical drive.

3.     Configure the server to boot from the virtual optical drive, and then restart the server.

After the ISO image is loaded, the INSTALLATION SUMMARY page opens.

Figure 2 INSTALLATION SUMMARY page

 

4.     In the LOCALIZATION area, perform the following steps:

¡     Click DATE & TIME to modify the date and time settings.

¡     Click KEYBOARD to modify keyboard settings as needed.

¡     Click LANGUAGE SUPPORT to select your preferred language.

 

IMPORTANT:

Make sure you select the same time zone across the hosts. In this document, [Asia/Shanghai] is selected as an example.

 

Figure 3 INSTALLATION SUMMARY page

 

5.     Click SOFTWARE SELECTION in the SOFTWARE area to enter the page for selecting software. Select the Server with GUI base environment and the File and Storage Server, Virtualization Client, Virtualization Hypervisor, and Virtualization Tools add-ons. Then, click Done to return to the INSTALLATION SUMMARY page.

Figure 4 SOFTWARE SELECTION page (1)

 

Figure 5 SOFTWARE SELECTION page (2)

 

6.     In the SYSTEM area, click INSTALLATION DESTINATION.

Figure 6 INSTALLATION DESTINATION dialog box

 

7.     In the dialog box that opens, perform the following operations:

a.     Select a local disk from the Local Standard Disks area.

b.     In the Other Storage Options area, select I will configure partitioning.

c.     Click Done.

8.     In the MANUAL PARTITIONING dialog box, click Click here to create them automatically to automatically generate recommended partitions.

Figure 7 MANUAL PARTITIONING dialog box

 

The list of automatically created partitions opens. Figure 8 shows the list of automatically created partitions when the disk size is 600 GiB.

 

IMPORTANT:

The /boot/efi partition is available only if UEFI mode is enabled for OS installation.

 

Figure 8 Automatically created partition list

 

9.     Set the device type and file system of a partition. As a best practice, set the device type to Standard Partition to improve system stability. Table 5 shows the device type and file system of each partition used in this document.

Table 5 Partition settings

Partition name | Device type | File system
/boot | Standard Partition | xfs
/boot/efi (UEFI mode) | Standard Partition | EFI System Partition
/ | Standard Partition | xfs
/swap | Standard Partition | swap

 

10.     Edit the device type and file system of a partition as shown in Figure 9. Take the /boot partition for example. Select a partition on the left, and select Standard Partition from the Device Type list and xfs from the File System list. Then, click Update Settings.

Figure 9 Configuring partitions

 

11.     After you finish the partitioning task, click Done in the upper left corner. In the dialog box that opens, select Accept Changes.

Figure 10 Accepting changes

 

12.     In the INSTALLATION SUMMARY window that opens, click NETWORK & HOSTNAME in the SYSTEM area to configure the host name and network settings.

13.     In the Host name field, enter the host name (for example, host01) for this server, and then click Apply.

Figure 11 Setting the host name

 

14.     Configure the network settings:

 

IMPORTANT:

Configure network ports as planned. The server requires a minimum of three network ports. The "Virtualize network ports" task will virtualize two of the network ports connected to the simulation network into network bridges mge_bridge and up_bridge. You must assign an IP address to the mge_bridge bridge for communication with the DTN component. You must assign the management IP address of the DTN host to the third network port for server maintenance.

 

a.     Select a network port and then click Configure.

b.     In the dialog box that opens, configure basic network port settings on the General tab:

-     Select the Automatically connect to this network when it is available option.

-     Verify that the All users may connect to this network option is selected. By default, this option is selected.

Figure 12 General settings for a network port

 

15.     Configure IP address settings:

a.     Click the IPv4 Settings or IPv6 Settings tab.

b.     From the Method list, select Manual.

c.     Click Add, assign a simulation management IP address to the DTN host, and then click Save.

d.     Click Done in the upper left corner of the dialog box.

 

IMPORTANT:

The DTN service supports IPv4 and IPv6. However, the current software version supports only a single stack (IPv4 or IPv6).

 

Figure 13 Configuring IPv4 address settings for a network port

 

16.     Repeat Step 14 and Step 15 to configure the management IP addresses for the other DTN hosts. The IP addresses must be in the network address pool of 192.168.10.110 to 192.168.10.120, for example, 192.168.10.110.

17.     Click Begin Installation to install the OS.

18.     During the installation, configure the root password as prompted:

 

IMPORTANT:

You must configure a root password before you can continue with the installation.

 

a.     In the USER SETTINGS area, click ROOT PASSWORD.

b.     In the dialog box that opens, set the root password for the system, and then click Done in the upper left corner.

Figure 14 Configuration window for H3Linux OS installation

 

Figure 15 Setting the root password

 

Then, the system automatically reboots to finish OS installation.

Virtualize network ports

IMPORTANT

IMPORTANT:

On the switch port connecting to the service interface of a simulation host, execute the port link-type trunk command to configure the link type of the port as trunk, and execute the port trunk permit vlan vlan-id-list command to assign the port to 150 contiguous VLANs. The start VLAN ID is the VLAN ID specified for network port virtualization, and the end VLAN ID is the start VLAN ID plus 149. For example, if the start VLAN ID is 11, the permitted VLAN ID range is 11 to 160. When you plan the network, do not use any VLAN ID permitted by the port.

 

About network port virtualization

Each host node must have two network ports: one connected to the simulation management network and one connected to the simulation service network.

Virtualize two network ports of the host to generate the bridge named mge_bridge and 150 bridges named brvlan-vlanId. When virtualizing a network port, you must specify the start VLAN ID at the CLI, and the system generates bridges for 150 consecutive VLAN IDs. For example, if the start VLAN ID is 11, the system generates bridges brvlan-11 through brvlan-160.
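As an illustration of this naming rule, the following sketch (assuming start VLAN ID 11, as in the example above) prints the expected bridge names and confirms that 150 are generated:

# Assumed start VLAN ID for illustration
START_VLAN=11
END_VLAN=$(( START_VLAN + 149 ))            # 160
# Expected bridge names: brvlan-11 through brvlan-160
for vlan in $(seq $START_VLAN $END_VLAN); do
    echo "brvlan-$vlan"
done | wc -l                                 # 150 bridges in total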

Prerequisites: Configure IP address settings for the network port connected to the mge_bridge bridge

IMPORTANT:

The IP addresses of simulated devices are the same as those of their twin devices on the production network. To avoid IP address conflicts, make sure the IP address of a DTN host on the simulation management network is different from the IP address of any simulated device.

 

Before you create bridges for the network ports connected to the simulation network, you must assign an IP address to the network port mapped to the mge_bridge bridge. The IP address is used for communication with the DTN microservice component and will be used when you add hosts to the simulation network.

Skip this section if you have assigned an IP address to the network port mapped to the mge_bridge bridge during OS installation, as described in "Install the operating system."

To configure IP address settings for the network port connected to the mge_bridge bridge:

1.     Open the /etc/sysconfig/network-scripts/ directory.

[root@host01 ~]# cd /etc/sysconfig/network-scripts/

2.     Use the vi editor to open the configuration file of the network port, press i to enter edit mode, configure the IP settings, and then save the file and exit. This step uses network port ens1f0 as an example.

[root@host01 network-scripts]# vi ifcfg-ens1f0

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=ens1f0

UUID=2e5b13dc-bd05-4d65-93c7-c0d9228e1b72

DEVICE=ens1f0

ONBOOT=yes

IPADDR=192.168.12.134

PREFIX=24

GATEWAY=192.168.12.1

IPV6_PRIVACY=no

3.     Restart the network service.

[root@host01 network-scripts]# service network restart
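Optionally, verify that the new address settings took effect after the restart. The following commands are only a verification sketch using the example port and addresses in this document:

# Verify the IP address configured on ens1f0
ip addr show ens1f0 | grep "inet "
# Verify reachability of the gateway used in this example
ping -c 3 192.168.12.1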

Install dependency packages

IMPORTANT:

In the current system, only the root user can install dependency packages.

 

1.     Obtain the latest dependency packages, upload them to the server, and then decompress them. The dependency package is named in the SeerEngine_DC_DTN_HOST-version.zip format, where version represents the software version number. This example uses E6103.

[root@host01 root]# unzip SeerEngine_DC_DTN_HOST-E6103.zip

2.     Execute the chmod command to assign permissions to the user.

[root@host01 root]# chmod +x -R SeerEngine_DC_DTN_HOST-E6103

3.     Enter the SeerEngine_DC_DTN_HOST-version/ directory of the decompressed dependency package, and execute the ./install.sh command to install the dependency package.

[root@host01 SeerEngine_DC_DTN_HOST-E6103]# ./install.sh

Redirecting to /bin/systemctl restart libvirtd.service

Libvirt configuration succeeded

Install succeeded.

4.     Execute the virsh -c qemu+tcp://127.0.0.1:16509/system command to verify that the dependency packages are successfully installed. If the following information is output, the installation has succeeded. Execute the exit command to exit.

[root@host01]# virsh -c qemu+tcp://127.0.0.1:16509/system

Type:  'help' for help with commands

       'quit' to quit

virsh #

Create Linux network bridges

CAUTION:

Execution of the Linux bridge script causes the network service to restart. If you establish an SSH connection through the network port attached to the simulation management network or the simulation service network, the SSH connection will be interrupted when the network service restarts. To avoid this situation, establish the SSH connection through the management port of the server.

 

1.     Access the SeerEngine_DC_DTN_HOST-version/bridge directory after decompression, and then execute the ./bridge-init.sh param1 param2 param3 command to configure the Linux network bridges. The param1 argument represents the name of the network port mapped to the mge_bridge bridge. The param2 argument represents the name of the network port mapped to the brvlan-vlanId bridge. The param3 argument represents the start VLAN ID of the brvlan-vlanId bridge.

[root@host01 ~]# cd /root/SeerEngine_DC_DTN_HOST-E6103/bridge

[root@host01 bridge]# ./bridge-init.sh ens1f0 ens1f1 11

Bridge initialization succeeded.

Restarting network,please wait.

2.     Verify that bridges mge_bridge and up_bridge have been successfully created for the network ports.

[root@host01 bridge]# brctl show

bridge name bridge id       STP enabled interfaces

brvlan-100 8000.c4346bb8d139   no      ens1f1.100

brvlan-101 8000.c4346bb8d139   no      ens1f1.101

brvlan-99 8000.c4346bb8d139   no      ens1f1.99

mge_bridge 8000.c4346bb8d138   no      ens1f0

virbr0     8000.000000000000   yes

3.     Verify that the generated network-scripts configuration file contains the correct configuration. Take network port ens1f0 and bridge mge_bridge for example.

[root@host01 bridge]# cat /etc/sysconfig/network-scripts/ifcfg-mge_bridge

DEVICE=mge_bridge

TYPE=Bridge

BOOTPROTO=static

ONBOOT=yes

IPADDR=192.168.12.134

PREFIX=24

[root@host01 bridge]# cat /etc/sysconfig/network-scripts/ifcfg-ens1f0

DEVICE=ens1f0

HWADDR=c4:34:6b:b8:d1:38

BOOTPROTO=none

ONBOOT=yes

BRIDGE=mge_bridge

[root@host01 bridge]# ifconfig mge_bridge

mge_bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 192.168.12.134  netmask 255.255.255.0  broadcast 192.168.12.255

        inet6 2002:6100:2f4:b:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x0<global>

        inet6 fec0::5:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x40<site>

        inet6 fec0::b:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x40<site>

        inet6 fe80::c634:6bff:feb8:d138  prefixlen 64  scopeid 0x20<link>

        inet6 2002:aca8:284d:5:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x0<global>

        inet6 2002:6200:101:b:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x0<global>

        ether c4:34:6b:b8:d1:38  txqueuelen 0  (Ethernet)

        RX packets 29465349  bytes 7849790528 (7.3 GiB)

        RX errors 0  dropped 19149249  overruns 0  frame 0

        TX packets 4415  bytes 400662 (391.2 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

4.     Verify that the IP addresses of the network ports have been updated. After bridging, the IP address of a physical network port is moved to its bridge, so the port itself no longer has an IPv4 address. Take network port ens1f0 for example.

[root@host01 ~]# ifconfig ens1f0

ens1f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet6 fe80::c634:6bff:feb8:d138  prefixlen 64  scopeid 0x20<link>

        ether c4:34:6b:b8:d1:38  txqueuelen 1000  (Ethernet)

        RX packets 31576735  bytes 8896279718 (8.2 GiB)

        RX errors 0  dropped 7960  overruns 0  frame 0

        TX packets 4461  bytes 464952 (454.0 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

        device interrupt 16 

Table 6 Parameters

Parameter

Description

DEVICE

Interface name, which must be the same as the name obtained through the ifconfig command.

TYPE

Interface type. This parameter exists only in the bridge configuration file and must be Bridge.

BOOTPROTO

Options are none, dhcp, and static.

·     none—No protocol is used to obtain IP addresses when the network service is enabled.

·     dhcp—DHCP is used to obtain IP addresses.

·     static—IP addresses are manually configured.

Set this parameter to none in a physical interface configuration file and to static in a bridge configuration file.

ONBOOT

Options are yes and no.

If this field is set to yes, the device is activated when the system starts.

If this field is set to no, the device is not activated when the system starts. This parameter is set to yes in this example.

IPADDR

IP address. The IP address of a physical interface is moved to its bridge, so this parameter does not exist in the physical interface configuration file. In the bridge configuration file, this parameter is the IP address of the original physical interface, which is the same as the IP address obtained by using the ifconfig command.

NETMASK

Subnet mask of an IP address. For more information, see the IPADDR parameter.

HWADDR

Interface MAC address. This parameter exists only in physical interface configuration files and must be the same as the value for the ether field in the ifconfig command output.

BRIDGE

Name of the bridge bound to the physical interface. This parameter exists only in the physical interface configuration files.

 

Configure the MTU of a Linux bridge network port

CAUTION:

The setMtu.sh script in the bridge directory can only set MTU for a physical network port. If the specified device is not a physical network port, the system displays "xxx: Device not found."

 

By default, the MTU of a physical network port is 1500 bytes. In some network scenarios, you must increase the MTU to avoid packet drops. For example, you must increase the interface MTU if the network carries VXLAN traffic. VXLAN encapsulation adds an extra 8-byte VXLAN header, 8-byte UDP header, and 20-byte IP header to the original Layer 2 frame, so you must set the interface MTU to a value higher than the default 1500 bytes to avoid packet drops.
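As a rough sizing sketch based on the header sizes listed above, the minimum MTU that carries a full 1500-byte inner frame is 1536 bytes; the example later in this document uses 1600 bytes, which leaves extra headroom:

# Headers added by VXLAN encapsulation, as listed above
VXLAN_HDR=8
UDP_HDR=8
IP_HDR=20
# Minimum MTU needed to carry a full 1500-byte inner frame
echo $(( 1500 + VXLAN_HDR + UDP_HDR + IP_HDR ))   # 1536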

To set the MTU of a network port and its network bridge:

1.     Execute the ./setMtu.sh phyNic mtuSize command.

The phyNic argument represents the physical network port name, and the mtuSize argument represents the MTU value to be set. This example sets the MTU of network port ens1f0 to 1600 bytes.

[root@host01 bridge]# ./setMtu.sh ens1f0 1600

ens1f0 mtu set to 1600 complete.

2.     Verify that the MTU has been successfully set.

[root@host01 bridge]# ifconfig ens1f0| grep mtu

ens1f0: flags=4355<UP,BROADCAST,PROMISC,MULTICAST>  mtu 1600

[root@host01 bridge]# ifconfig mge_bridge| grep mtu

mge_bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1600

[root@host01 bridge]# cat /etc/sysconfig/network-scripts/ifcfg-ens1f0 | grep -i mtu

MTU=1600

Delete Linux bridges

To delete the Linux bridge configuration, execute the ./bridge-rollback.sh param1 param2 command. The param1 argument represents the name of the network port mapped to the mge_bridge bridge. The param2 argument represents the name of the network port mapped to the up_bridge bridge.

[root@host01 bridge]# ./bridge-rollback.sh ens1f0 ens1f1

Bridge fallback succeeded.

Restarting network,please wait.

 

 


Deploy the simulation service on the controller

CAUTION

CAUTION:

·     Make sure SeerEngine-DC and DTN have been deployed. For the deployment procedure, see H3C SeerEngine-DC Installation Guide (Unified Platform).

·     In the current software version, the system administrator and tenant administrator can perform tenant service simulation.

 

Preconfigure the simulation network

Preconfiguring a simulation network includes adding simulation hosts and uploading simulation images.

·     Adding simulation hosts—A host refers to a physical server installed with the H3Linux system and configured with related settings. The simulated devices in the simulation network model are created on the host. If multiple hosts are available, the controller selects a host with optimal resources for creating simulated devices.

·     Uploading simulation images—Physical devices on the real network can build the corresponding simulated devices based on the uploaded simulation images.

Add simulation hosts

1.     Log in to the controller. Navigate to the Automation > Simulation > Build Simulation Network page. Click Preconfigure. The Manage Simulation Hosts page opens.

2.     Click Add. In the dialog box that opens, configure the host name, IP address, username, and password.

Figure 16 Adding simulation hosts

 

3.     Click Apply.

 

 

NOTE:

·     A host can be incorporated by only one DC controller cluster.

·     The controller allows you to incorporate DTN hosts as a root user or non-root user. To incorporate DTN hosts as a non-root user, first add the non-root user permission by executing the ./addPermission.sh username command in the SeerEngine_DC_DTN_HOST-version/ directory of the decompressed dependency package, as shown in the following example.
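A minimal sketch, assuming a hypothetical non-root account named dtnadmin and the example package version used in this document:

# Run in the decompressed dependency package directory (E6103 is the example version)
cd /root/SeerEngine_DC_DTN_HOST-E6103
# dtnadmin is a hypothetical non-root username; replace it with your own account
./addPermission.sh dtnadmin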

 

Upload simulation images

1.     Log in to the controller. Navigate to the Automation > Simulation > Build Simulation Network page. After clicking Preconfigure, click the Manage Simulation Images tab. The page for uploading simulation images opens.

2.     Click Upload Image. In the dialog box that opens, select the type of the image to upload and an image of that type, and then click Upload.

Figure 17 Uploading simulation images

 

Build a simulation network

After preconfiguring the simulation network and controller, you can build a simulation network.

 

CAUTION:

·     To rebuild a simulation network, you must reinstall licenses for all simulated devices.

·     If the webpage for building simulation networks cannot display information correctly after a DTN component upgrade, clear the cache in your Web browser and log in again.

 

Follow these steps to build a simulation network:

1.     Log in to the controller. Navigate to the Automation > Simulation > Build Simulation Network page.

Figure 18 Building a simulation network

 

2.     Click Build Simulation Network. In the dialog box that opens, select fabric f1. In the current software version, you can build a simulation network for multiple fabrics. In this example, build a simulation network for a single fabric.

Figure 19 Selecting a fabric

 

3.     Click OK to start building a simulation network. During the process of building a simulation network, you can view the building workflow and result.

Figure 20 Simulation network built successfully

 

4.     After the simulation network is built successfully, you can view the simulated device information:

¡     The simulated device running state is Active.

¡     The device model is displayed correctly on the real network and the simulation network.

The VMs in the simulation network model are created on the added hosts. If multiple hosts are available, the controller selects the host with optimal resources for creating the VMs.

Figure 21 Viewing simulated devices

 

Simulate the tenant service

After the simulation network is successfully built, you can perform tenant service simulation. Tenant service simulation involves the following steps:

1.     Enable the design mode for the specified tenant

To perform tenant simulation service orchestration and simulation service verification, make sure the design mode is enabled for the specified tenant.

The services orchestrated in design mode are deployed only to simulated devices rather than real devices. To deploy the orchestrated services to real devices, click Deploy Configuration.

After you disable the design mode for a tenant, service data that has not been deployed or failed to be deployed in the tenant service simulation will be cleared.

2.     Configure tenant service simulation

This feature allows you to orchestrate and configure logical network and application network resources, including vRouters, vNetworks, subnets, EPGs, and application policies. After the configuration is completed, evaluate the simulation.

3.     Evaluate the simulation and view the simulation result

The simulation evaluation function allows you to evaluate the configured resources. After simulation evaluation is completed, you can view the simulation evaluation results, including the capacity and configuration changes, connectivity simulation results, and network-wide impact analysis results.

4.     Deploy configuration and view deployment details

You can deploy the service configuration to real devices when the simulation evaluation result is as expected.

Enable the design mode for the tenant

1.     Navigate to the Automation > Simulation > Tenant Service Simulation page. Click the design mode icon for a tenant to enable or disable design mode for the tenant. After design mode is enabled for a tenant, the tenant icon changes to indicate that the tenant is editable. After design mode is disabled for a tenant, the tenant icon changes to indicate that the tenant is not editable.

2.     Click the icon for the tenant to enter the Tenant Service Simulation (Tenant Name) > Logical Networks page. On this page, you can perform tenant service simulation.

 

 

NOTE:

You can enable the design mode and then perform tenant service simulation only when the simulation network is built normally.

 

Figure 22 Enabling the design mode for the tenant

 

Configure tenant service simulation

1.     On the logical network page, you can perform the following operations:

¡     Drag a resource icon in the Resources area to the canvas area. Then, a node of this resource is generated in the canvas area, and the configuration panel for the resource node opens on the right.

¡     In the canvas area, you can adjust node locations, bind/unbind resource, and zoom in/out the topology.

Figure 23 Logical networks

 

2.     On the application network page, configure EPGs and application policies in a graphical way.

Figure 24 Application networks > EPGs

 

Figure 25 Application networks > Application policies

 

Evaluate the simulation and view the simulation result

Simulate and evaluate services

After resource configuration, click Simulate & Evaluate. In the dialog box that opens, select Network-Wide Impact Analysis, and click Start. In the left area of the Simulate & Evaluate dialog box, the progress in percentage is displayed. In the right area, the corresponding resource changes are displayed.

Figure 26 Simulating and evaluating services (1)

 

Figure 27 Simulating and evaluating services (2)

 

View the simulation result

After simulation evaluation is completed, click Simulation Results to enter the simulation result page. On this page, you can view the following simulation results:

·     Capacity & Configuration Changes—This page displays resource usages and the configuration changes before and after simulation in a list or block diagrams.

Figure 28 Capacity and configuration changes

 

·     Connectivity Simulation—Perform this task to detect connectivity between source addresses and destination addresses. When specifying the source/destination addresses, you can input IP addresses or click Select and configure filter conditions in the dialog box that opens. Then, all the specified IP addresses are displayed on the source or destination IP address list. After completing the configuration, click Test to detect connectivity.

Figure 29 Connectivity detection

 

·     Network-Wide Impact Analysis—From this tab, you can view details of network information and perform a detection again. A single tenant supports performing network-wide impact analysis for up to 254 ports.

Figure 30 Network-wide impact analysis

 

Deploy configuration and view deployment details

You can click Deploy Configuration to deploy the service configuration to real devices when the simulation evaluation result is as expected. Additionally, you can view details on the deployment details page.

Figure 31 Viewing deployment details

 

Delete a simulation network

1.     Log in to the controller. Navigate to the Automation > Simulation > Build Simulation Network page.

2.     Click Delete. In the dialog box that opens, the fabric for which a simulation network has been built is selected by default.

Figure 32 Deleting a simulation network

 

3.     Click OK to start deleting the simulation network. When all operation results are displayed as Succeeded and the progress is 100%, the simulation network is deleted completely.

 


Upgrade the DTN hosts and dependency packages

Upgrade operations include full upgrade and dependency package upgrade. To upgrade the system image, perform a full upgrade, which upgrades both the image and the dependency packages and requires redeploying the simulation service on the controller. In all other cases, upgrade only the dependency packages.

Full upgrade

In full upgrade, you need to install the image and dependency packages of the new version, and redeploy the simulation service on the controller. For how to install the operating system, see “Install the operating system.” For how to install the dependency packages, see “Install dependency packages.”

 

 

NOTE:

If you upgrade from an ISO image file (with the DTN Manager application embedded) of an old version to the ISO image file (without the DTN Manager embedded) of the new version, after the DTN component is upgraded on the Unified Platform, you must delete old hosts on the simulation network and then incorporate them again.

 

Upgrading the dependency packages

1.     Obtain the latest dependency packages, upload them to the server, and then decompress them. The dependency package is named in the SeerEngine_DC_DTN_HOST-version.zip format, where version represents the software version number. This example uses E6103.

[root@host01 root]# unzip SeerEngine_DC_DTN_HOST-E6103.zip

2.     Execute the chmod command to assign permissions to the user.

[root@host01 root]# chmod +x -R SeerEngine_DC_DTN_HOST-E6103

3.     Enter the SeerEngine_DC_DTN_HOST-version/ directory of the decompressed dependency package, and execute the ./upgrade.sh command to upgrade the dependency packages.

[root@host01 SeerEngine_DC_DTN_HOST-E6103]# ./upgrade.sh

Redirecting to /bin/systemctl restart libvirtd.service

Libvirt configuration succeeded

Upgrade succeeded

 
