H3C SeerEngine-DC Simulation Network Environment Deployment Guide-5W701


Introduction

The DC simulation system verifies a service through simulation and estimates resource consumption before you deploy the service. It helps users determine whether the current service orchestration can achieve the expected effect, whether it will affect existing services, and how many device resources will be used. The simulation system is built on the simulation network model, which requires highly automated management. This feature provides the DTN Manager management, host management, and network model management functions.

 


Environment deployment workflow

Table 1 shows the workflow of deploying the simulation network environment.

Table 1 Environment deployment workflow

Step | Tasks | Remarks
Deploy the unified platform | Configure cluster parameters; Create a cluster; Deploy the unified platform | Required. See H3C Unified Platform Deployment Guide.
Deploy the SeerEngine-DC component | Upload the installation package; Select a component; Configure the simulation network; Bind it to networks; Confirm parameters | Required. NOTE: The network configuration for the simulation function is different from that for the DC component. This document describes only the network configuration requirements of the simulation network. For the network configuration of the DC component, see H3C SeerEngine-DC Installation Guide (Unified Platform).
Deploy hosts | Install the H3Linux operating system; Configure NIC virtualization; Remotely access the libvirtd process; Create the host script file directory | Required.
Deploy the DTN Manager | Install DTN Manager; Preconfigure the simulation device image | Required.

 

 


Configure the simulation network

For the operations except network configuration, see H3C SeerEngine-DC Installation Guide (Unified Platform).

Plan the network

Plan the networking

The simulation network includes the simulation management network and the simulation service network. The simulation management network is a MACVLAN network. Each network functions as follows:

·     Simulation management network—Network used for exchanging management information among the Simulation-app component, DTN Manager, and hosts.

·     Simulation service network—Network used for exchanging service information among the hosts carrying simulation devices. You need to configure this network only when you deploy multiple hosts.

Before deploying the simulation feature, you must first plan the simulation management network and simulation service network.

The network plan is as shown in Figure 1.

Figure 1 Cloud DC scenario (only simulation network)

 

 

NOTE:

·     You must configure VLANs and VPN instances to isolate the controller management network, simulation management network, and simulation service network. Additionally, you must configure routes to enable the simulation management IPs, simulation host IPs, and simulation device IPs to reach each other at Layer 3.

·     If the controller management network, simulation management network, and simulation service network are physically isolated by using different management network switches, you must configure routes to enable the simulation management IPs, simulation host IPs, and simulation device IPs to reach each other at Layer 3.

·     The simulation networks of different DC controller clusters must be isolated.

 

Plan IP addresses

As a best practice, calculate the number of IP addresses on subnets in a MACVLAN network as shown in Table 2.

Table 2 Number of addresses in subnet IP address pools

Component name | Max members in cluster | Default members in cluster | Calculation method
SeerEngine-DC | 32 | 3 | 1 × cluster member count + 1 (cluster IP)
Simulation-app | 1 | 1 | Single-node deployment, which needs only one IP

 

In this example, each component uses the default cluster member count. Calculate the number of IP addresses needed as follows:

Number of IP addresses on the MACVLAN network: (1 × 3 + 1) + 1 = 5. That is, when the SeerEngine-DC cluster has three members, the MACVLAN network requires at least five IP addresses.
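
The same arithmetic can be verified with a trivial shell check (member counts taken from Table 2):

[root@localhost ~]# DC_MEMBERS=3
[root@localhost ~]# echo $(( 1 * DC_MEMBERS + 1 + 1 ))    # SeerEngine-DC members + cluster IP + Simulation-app
5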

This document uses the IP address planning in Table 3 as an example.

Table 3 IP planning

Type | IP address
MACVLAN subnet (SeerEngine-DC component) | 10.0.234.0/24 (gateway address: 10.0.234.254)
MACVLAN network address pool (SeerEngine-DC component) | 10.0.234.6 to 10.0.234.38
MACVLAN subnet (Simulation-app component) | 10.0.234.0/24 (gateway address: 10.0.234.254)
MACVLAN network address pool (Simulation-app component) | 10.0.234.39 to 10.0.234.39

 

 

NOTE:

To avoid conflicts, make sure the host node management network IP addresses are different from the simulation device IP addresses (which are the same as the device IP addresses on the production network).

 

 


Deploy hosts

Server requirements

Hardware requirements

Table 4 shows the recommended hardware requirements for host servers.

Table 4 Hardware requirements

CPU architecture | CPU | Memory | Available disk space | NIC speed | Remarks
x86-64 (Intel64/AMD64) with VT-x/VT-d | 24 cores or more, 2.0 GHz or higher | 256 GB or more | 2 TB or more for the partition that contains the root directory | 1 to 10 Gbps; two or more NICs | Recommended configuration

 

 

NOTE:

If the server CPU supports hyper-threading, as a best practice, enable hyper-threading.

 

Software requirements

The host supports the H3Linux operating system, as shown in Table 5.

Table 5 Operating systems and versions supported by the host

Operating system name | Version number | Kernel version
H3Linux | V1.3.0 | 5.10

 

Install the H3Linux operating system

IMPORTANT:

Installing the H3Linux operating system on a server that already has an operating system installed replaces the existing operating system. To avoid data loss, back up data before you install the H3Linux operating system.

 

This section describes the procedure for installing the H3Linux operating system on a server without an operating system installed.

Install the H3Linux operating system

To install the H3Linux operating system, first obtain the H3Linux ISO image of the required version.

1.     Use the remote console on the server to load the ISO image through the virtual optical drive.

2.     Configure the server to boot from the virtual optical drive and then restart the server.

3.     Select the installation language and then click Continue.

Figure 2 Selecting the installation language

 

4.     In the LOCALIZATION area, click DATE & TIME to modify the date and time settings. Click KEYBOARD to modify keyboard settings as needed.

Figure 3 INSTALLATION SUMMARY page

 

5.     In the SOFTWARE area, click SOFTWARE SELECTION. Select the Server with GUI base environment and the File and Storage Server, Java Platform, Virtualization Client, Virtualization Hypervisor, and Virtualization Tools add-ons. Then, click Done to return to the INSTALLATION SUMMARY page.

Figure 4 SOFTWARE SELECTION page

 

Figure 5 SOFTWARE SELECTION page

 

6.     In the SYSTEM area, click INSTALLATION DESTINATION. Select a local disk from the Local Standard Disks area and then select I will configure partitioning. Then, click Done.

Figure 6 INSTALLATION DESTINATION page

 

7.     On the MANUAL PARTITIONING page, click Click here to create them automatically to automatically generate recommended partitions.

Figure 7 MANUAL PARTITIONING page

 

8.     The partition list created by automatic partitioning contains the /boot/efi partition only when the operating system is installed on the server in UEFI mode.

Figure 8 Automatically created partition list

 

9.     Modify the device type and file system for each partition. As a best practice, do not set the device type to LVM; this improves system stability and reduces the risk of damaging VM image files upon a sudden server power-down. Table 6 shows the device type and file system of each partition after the modifications.

Table 6 Partition settings

Partition name | Device type | File system
/boot | Standard partition | xfs
/boot/efi (UEFI mode) | Standard partition | EFI System Partition
/ | Standard partition | xfs
/swap | Standard partition | swap

 

10.     Edit the device type and file system of a partition as shown in Figure 9. Take the /boot partition as an example: select the partition on the left, select Standard Partition from the Device Type list and xfs from the File System list as shown in Table 6, and then click Update Settings.

Figure 9 Configuring partitions

 

11.     After the modification, click Done in the upper left corner. In the dialog box that opens, select Accept Changes as shown in Figure 10 to return to the INSTALLATION SUMMARY page. Continue to install the operating system.

Figure 10 Accepting changes

 

12.     In the H3Linux operating system, network parameters configured on the GUI are managed by the NetworkManager service. However, NetworkManager must be disabled when you create bridges later, so network parameters configured on the GUI might cause a conflict at that point. For this reason, do not configure network parameters during the H3Linux installation process.

Figure 11 Forbidding network parameter configuration on the GUI
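
If NetworkManager is running anyway when you create bridges later, it can be turned off with standard systemd commands (shown for reference only; the text above only requires that the service be disabled at that point):

[root@localhost ~]# systemctl stop NetworkManager.service
[root@localhost ~]# systemctl disable NetworkManager.service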

 

13.     After the configuration above, click Begin Installation to start the installation. During the installation process, you will be prompted to configure USER SETTINGS. Set the root password for the system. You can also click USER CREATION. On the page that opens for creating a non-root user, enter a username and password, and click Done to return to the installation page.

Figure 12 Setting the root password

 

Figure 13 Creating a non-root user

 

14.     After the installation is complete, the system automatically reboots to finish the installation of the operating system.

Figure 14 Installation completed

 

Configure network parameters for the H3Linux system

In the /etc/sysconfig/network-scripts/ directory, each NIC corresponds to a configuration file. For example, NIC eno1 corresponds to configuration file ifcfg-eno1. To modify the network settings of a NIC, edit its configuration file.

To modify the network settings of a NIC:

1.     At the CLI, switch the working directory to the network-scripts directory.

[root@localhost ~]# cd /etc/sysconfig/network-scripts/

2.     Open the configuration file of a NIC by using the Vim editor, and edit the parameters annotated with comments below. If a parameter does not exist in the configuration file, manually add it to the end of the file.

[root@localhost network-scripts]# vim ifcfg-eno1

HWADDR=EC:B1:D7:80:50:54

TYPE=Ethernet

BOOTPROTO=static # The default value is dhcp. Modify it to static, which means that the NIC IP address is manually configured.

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_PEERDNS=yes

IPV6_PEERROUTES=yes

IPV6_FAILURE_FATAL=no

NAME=eno1

UUID=cbb80618-065f-4272-9fde-39ff9b06e474

ONBOOT=yes  # The default value is no. Modify it to yes, which means that the device is activated when the system is started.

IPADDR=192.168.16.33 # NIC IP address

GATEWAY=192.168.1.1  # Gateway IP address

NETMASK=255.255.0.0  # Subnet mask for the NIC IP address

3.     Save the configuration, and restart the network service to make the change take effect.

[root@localhost network-scripts]# systemctl restart network.service

4.     Check the configuration of the NIC and verify that the configuration is successfully modified.

[root@localhost network-scripts]# ifconfig eno1

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 192.168.16.33  netmask 255.255.0.0  broadcast 192.168.255.255

        inet6 2002:6f01:102:5:eeb1:d7ff:fe80:5054  prefixlen 64  scopeid 0x0<global>

Disable the SELinux service and firewall

1.     Edit the file /etc/selinux/config, and manually disable the SELinux service.

[root@localhost ~]# vim /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

#     targeted - Targeted processes are protected,

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted

[root@localhost ~]# /usr/sbin/setenforce 0

2.     To avoid access exceptions later, disable the firewall.

[root@localhost ~]# systemctl stop firewalld.service
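
For reference, the same SELinux and firewall changes can be applied non-interactively (a sketch using standard commands and the paths above):

[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~]# /usr/sbin/setenforce 0    # Takes effect immediately. SELinux is fully disabled after a reboot.
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service    # Keep the firewall off across reboots.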

Preconfigure the operating system

Unzip the SeerEngine_DC_DTN_Manager-version.zip package (where version indicates the version number). Upload the check-env.sh script in the check_env directory to the server. Execute the script to preconfigure the operating system. The key operations in the script include:

·     Create the VM image mount directory /opt/h3c/NFV-DAM/mountDir.

·     Create the VM file storage directory /opt/h3c/NFV-DAM/vm.

·     Identify whether the firewall has been disabled. If the firewall has not been disabled, disable the firewall again.

Execute the following commands to run the script file:

[root@localhost ~]# chmod +x check-env.sh

[root@localhost ~]# ./check-env.sh
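
The following sketch approximates what check-env.sh does, based on the key operations listed above (illustrative only; the authoritative version is the script shipped in the package):

# Create the VM image mount directory and the VM file storage directory.
mkdir -p /opt/h3c/NFV-DAM/mountDir
mkdir -p /opt/h3c/NFV-DAM/vm
# Disable the firewall if it is still running.
if systemctl is-active --quiet firewalld.service; then
    systemctl stop firewalld.service
fi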

Configure NIC virtualization

Use the Linux bridge technology to virtualize NICs

Configure NIC IPs

For the simulation network, you must virtualize two NICs of the host to generate two bridges, mge_bridge and up_bridge. Before virtualizing bridges, you must first configure the IP address of the NIC bound to bridge mge_bridge. This IP address is used for communication with the simulation microservice component and will be needed when you add hosts on the simulation network configuration page. If DTN Manager is installed on the host, you must also enter this IP address when incorporating DTN Manager on the simulation network configuration page. Therefore, make sure the host and the simulation microservice can reach each other. If the IP address of the NIC bound to mge_bridge has already been configured in "Configure network parameters for the H3Linux system," skip this section.

To configure NIC IPs:

1.     Use NIC enp61s0f0 as an example. Enter the NIC configuration directory.

[root@localhost ~]# cd /etc/sysconfig/network-scripts/

2.     Open the configuration file of the NIC with the vi editor, press i to enter edit mode, configure the IP information, and then save the configuration and exit.

[root@localhost network-scripts]# vi ifcfg-enp61s0f0

DEVICE=enp61s0f0

HWADDR=34:6b:5b:e9:48:be

TYPE=Ethernet

BOOTPROTO=static

ONBOOT=yes

IPADDR=192.158.16.207

NETMASK=255.255.0.0

3.     Restart the network service.

[root@localhost network-scripts]# service network restart
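
After the restart, you can verify that the host can reach the simulation microservice component with a simple ping (the address below is the Simulation-app example address from Table 3; replace it with your own):

[root@localhost network-scripts]# ping -c 3 10.0.234.39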

Configure NIC virtualization

CAUTION:

When the Linux bridge script is executed, the network service will be restarted and the SSH connections will be disconnected. To avoid connection interruption, as a best practice, perform these tasks through the management interface.

 

1.     Unzip the SeerEngine_DC_DTN_Manager-version.zip package (where version indicates the version number), and upload the scripts in the bridge directory to the server.

2.     Execute the ./bridge-init.sh param1 param2 command to configure the Linux bridge, where param1 is the name of the NIC corresponding to the management interface bridge to be created and param2 is the name of the NIC corresponding to the service interface bridge to be created.

[root@localhost ~]# chmod +x bridge-init.sh

[root@localhost ~]# ./bridge-init.sh enp61s0f0 enp61s0f1

Network default destroyed

 

Network default unmarked as autostarted

 

network config enp61s0f0 to bridge mge_bridge complete.

network config enp61s0f1 to bridge up_bridge complete.

3.     After executing the script, perform the following tasks to verify that the script is successfully executed.

In the command output, if each bridge except the default bridge virbr0 corresponds to a physical NIC, the bridges are created successfully.

[root@localhost ~]# brctl show

bridge name bridge id         STP enabled  interfaces

mge_bridge 8000.c4346bb8d138 no       enp61s0f0

up_bridge   8000.c4346bb8d139 no       enp61s0f1

virbr0      8000.000000000000 yes

4.     Identify whether the generated network-scripts configuration files are correct. Take enp61s0f0 and mge_bridge as an example.

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-mge_bridge

DEVICE=mge_bridge

TYPE=Bridge

BOOTPROTO=static

ONBOOT=yes

IPADDR=192.168.1.196

NETMASK=255.255.0.0

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp61s0f0

DEVICE=enp61s0f0

HWADDR=c4:34:6b:b8:d1:38

BOOTPROTO=none

ONBOOT=yes

BRIDGE=mge_bridge

[root@localhost ~]# ifconfig mge_bridge

mge_bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2000

        inet 192.168.1.196  netmask 255.255.0.0  broadcast 192.168.255.255

        inet6 2002:6100:2f4:b:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x0<global>

        inet6 fec0::5:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x40<site>

        inet6 fec0::b:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x40<site>

        inet6 fe80::c634:6bff:feb8:d138  prefixlen 64  scopeid 0x20<link>

        inet6 2002:aca8:284d:5:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x0<global>

        inet6 2002:6200:101:b:c634:6bff:feb8:d138  prefixlen 64  scopeid 0x0<global>

        ether c4:34:6b:b8:d1:38  txqueuelen 0  (Ethernet)

        RX packets 29465349  bytes 7849790528 (7.3 GiB)

        RX errors 0  dropped 19149249  overruns 0  frame 0

        TX packets 4415  bytes 400662 (391.2 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost ~]# ifconfig enp61s0f0

enp61s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2000

        inet6 fe80::c634:6bff:feb8:d138  prefixlen 64  scopeid 0x20<link>

        ether c4:34:6b:b8:d1:38  txqueuelen 1000  (Ethernet)

        RX packets 31576735  bytes 8896279718 (8.2 GiB)

        RX errors 0  dropped 7960  overruns 0  frame 0

        TX packets 4461  bytes 464952 (454.0 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

        device interrupt 16 

Parameters

·     DEVICE: Interface name, which must be the same as the name obtained through the ifconfig command.

·     TYPE: Interface type. This parameter exists only in the bridge configuration file and must be Bridge.

·     BOOTPROTO: Options include none, dhcp, and static. none indicates that no protocol is used to obtain IP addresses when the network service is enabled. dhcp indicates DHCP is used to obtain IP addresses. static indicates IP addresses are manually configured. This parameter must be none in a physical interface configuration file and static in a bridge configuration file.

·     ONBOOT: Options include yes and no. yes indicates the device is activated when the system is started, and no indicates the device is not activated when the system is started. This parameter is yes in this example.

·     IPADDR: IP address. The IP address of a physical interface is moved to its bridge, so this parameter does not exist in the physical interface configuration file. In the bridge configuration file, this parameter is the IP address of the original physical interface, which is the same as the IP address obtained by using the ifconfig command.

·     NETMASK: Subnet mask of an IP address. For more information, see the IPADDR parameter.

·     HWADDR: Interface MAC address. This parameter exists only in physical interface configuration files and must be the same as the value for the ether field in the ifconfig command output.

·     BRIDGE: Name of the bridge bound to the physical interface. This parameter exists only in the physical interface configuration files.
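
For reference, the core bridge operations that bridge-init.sh automates look roughly as follows (an illustrative sketch using the example NIC and addresses above; the shipped script also rewrites the ifcfg files described by these parameters):

# Create the management bridge and attach the physical NIC to it.
brctl addbr mge_bridge
brctl addif mge_bridge enp61s0f0
# Move the IP address from the physical NIC to the bridge.
ip addr flush dev enp61s0f0
ip addr add 192.168.1.196/16 dev mge_bridge
ip link set mge_bridge up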

 

 

NOTE:

When you use a Linux bridge, the virtual interfaces on the bridge cannot be isolated through VLANs.

 

Configure the MTU for a Linux bridge NIC

CAUTION:

The setMtu.sh script in the bridge directory can only set the MTU for a physical NIC. If the specified device is not a physical NIC, the system prompts “xxx: Device not found.”

 

In actual applications, you might need to set the MTU for a physical interface. For example, when VXLAN is used on the network, an 8-byte VXLAN header, an 8-byte UDP header, and a 20-byte IP header (plus a 14-byte outer Ethernet header) are added to the original Layer 2 frame, about 50 bytes of overhead in total. In this case, the default MTU (1500 bytes) cannot meet the requirements. To set the MTU, use the setMtu.sh script and perform the following tasks:

1.     Execute the ./setMtu.sh phyNic mtuSize command to set the MTU for a physical NIC and the corresponding bridge and VNet. In this command, phyNic indicates the physical NIC name, and mtuSize indicates the MTU to be set.

[root@localhost ~]# chmod +x setMtu.sh

[root@localhost ~]# ./setMtu.sh eno2 1600

eno2 mtu set to 1600 complete.

2.     Identify whether the MTU is successfully set. Take eno2 and br1 as an example.

[root@localhost bridge]# ifconfig eno2 | grep mtu

eno2: flags=4355<UP,BROADCAST,PROMISC,MULTICAST>  mtu 1600

[root@localhost bridge]# ifconfig br1 | grep mtu

br1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1600

[root@localhost bridge]# cat /etc/sysconfig/network-scripts/ifcfg-eno2 | grep -i mtu

MTU=1600

Delete Linux bridge configuration

To cancel the Linux bridge configuration (for example, to use the OVS bridge technology to virtualize NICs instead), execute the ./bridge-rollback.sh param1 param2 command, where param1 indicates the name of the NIC corresponding to the created management interface bridge and param2 indicates the name of the NIC corresponding to the created service interface bridge.

[root@localhost ~]# chmod +x bridge-rollback.sh

[root@localhost ~]# ./bridge-rollback.sh enp61s0f0 enp61s0f1

network unconfig bridge mge_bridge to enp61s0f0 complete.

network unconfig bridge up_bridge to enp61s0f1 complete.

Network default started

 

Network default marked as autostarted

Remotely access the libvirtd process

Edit configuration

1.     Use the Vim editor to edit the /etc/sysconfig/libvirtd file. Delete the number signs (#) in front of the following two lines to make the configuration take effect and enable the corresponding TCP ports.

[root@localhost ~]# vim /etc/sysconfig/libvirtd

# Override the default config file

# NOTE: This setting is no longer honoured if using

# systemd. Set '--config /etc/libvirt/libvirtd.conf'

# in LIBVIRTD_ARGS instead.

LIBVIRTD_CONFIG=/etc/libvirt/libvirtd.conf

 

# Listen for TCP/IP connections

# NB. must setup TLS/SSL keys prior to using this

LIBVIRTD_ARGS="--listen"

2.     Use the Vim editor to edit the /etc/libvirt/libvirtd.conf file, search for the following lines, and delete the number signs (#) in front of them to make the configuration take effect.

[root@localhost ~]# vim /etc/libvirt/libvirtd.conf

listen_tls = 0

listen_tcp = 1

tcp_port = "16509"

listen_addr = "0.0.0.0"

auth_tcp = "none"

3.     Restart the libvirtd service to make the configuration take effect.

[root@localhost ~]# service libvirtd restart

4.     If the configuration above does not take effect, try the following command.

[root@localhost ~]# libvirtd --daemon --listen --config /etc/libvirt/libvirtd.conf

Identify whether the configuration takes effect

1.     Identify whether the libvirtd process is started.

If information of this process is displayed, the process is successfully started.

[root@localhost ~]# ps aux | grep libvirtd

root 16563 1.5 0.1 925880 7056 ? Sl 16:01 0:28 libvirtd -d -l --config /etc/libvirt/libvirtd.conf

2.     Identify whether TCP port 16509 of the libvirtd process is listening.

[root@localhost ~]# netstat -apn | grep 16509

tcp        0      0 0.0.0.0:16509           0.0.0.0:*               LISTEN      4438/libvirtd

3.     On the server where DTN Manager is installed, execute the following command to connect to the host, and identify whether the host can be normally connected.

The IP address specified in the command is the IP address of the current host. If the following information is displayed, the host has been remotely accessed successfully through TCP, and DTN Manager can normally manage the host.

[root@VNFM ~]# virsh -c qemu+tcp://172.16.105.14:16509/system

Welcome to virsh, the virtualization interactive terminal.

   

Type: 'help' for help with commands

'quit' to quit
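
As an optional further check, you can list the VMs on the host over the same connection (standard virsh usage; the URI is the example above):

[root@VNFM ~]# virsh -c qemu+tcp://172.16.105.14:16509/system list --all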

Create the host script file directory

1.     Create the host script file directory.

[root@localhost ~]# mkdir -p /opt/sdn/script

2.     Unzip the SeerEngine_DC_DTN_Manager-version.zip package (where version indicates the version number), and upload all script files in the script directory to the directory created above on the server.

These scripts will be used to obtain host information and create VMs.

Incorporate hosts on the controller

After the steps above, you can incorporate the host on the Automation > Simulation > Build Simulation Network page of the controller. A host can be incorporated by only one DC controller cluster.

 

 


Deploy DTN Manager

As a best practice, deploy DTN Manager on a host server. If there are multiple hosts, deploy DTN Manager on one of them.

Install DTN Manager

Before deploying DTN Manager, make sure the server can access the Internet so that the dependent software packages of DTN Manager (for example, OpenJDK 8 JRE and PostgreSQL) can be installed. Perform the following tasks to install the dependent software packages and then DTN Manager itself.

 

 

NOTE:

Install DTN Manager and its dependent software packages as the root user. Otherwise, installation might fail due to insufficient permissions.

 

Prerequisites

At the CLI of the H3Linux system, execute the following commands to disable the firewall.

[root@localhost ~]# systemctl stop firewalld.service

[root@localhost ~]# systemctl disable firewalld.service

Install the dependent software packages of DTN Manager

·     To install the OpenJDK software package:

[root@localhost ~]# yum install -y java-1.8.0-openjdk-headless-1.8.0.65

[root@localhost ~]# yum install -y java-1.8.0-openjdk-1.8.0.65

[root@localhost ~]# yum install -y java-1.8.0-openjdk-javadoc-1.8.0.65

[root@localhost ~]# yum install -y java-1.8.0-openjdk-devel-1.8.0.65

·     To install the PostgreSQL software package and configure necessary settings:

[root@localhost ~]# yum install -y postgresql-9.2.7

[root@localhost ~]# yum install -y postgresql-server-9.2.7

[root@localhost ~]# yum install -y postgresql-jdbc-9.2.1002

[root@localhost ~]# yum install -y postgresql-odbc-09.03.0100

[root@localhost ~]# yum install -y postgresql-contrib-9.2.7

[root@localhost ~]# service postgresql initdb

[root@localhost ~]# systemctl start postgresql.service

[root@localhost ~]# systemctl enable postgresql.service

·     To install the Redhat-lsb-core software package:

[root@localhost ~]# yum install -y redhat-lsb-core-4.1

·     To install the Unzip software package:

[root@localhost ~]# yum install -y unzip-6.0

Install the software package of DTN Manager.

IMPORTANT:

Before installing DTN Manager, make sure the psql tool is not running. Otherwise, errors might occur during DTN Manager installation. Execute the pstree | grep psql command to identify whether the psql process is running. If it is running, exit the psql prompt as the user that started it to stop the process.
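
For example (\q is the standard command for quitting the psql prompt):

[root@localhost ~]# pstree | grep psql

If a psql process is displayed, enter \q at the psql prompt as the user that started it to exit.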

 

1.     Unzip the SeerEngine_DC_DTN_Manager-version.zip package, and upload the DTN Manager software package (an .rpm file) to the server.

The software package name is SeerEngine_DC_DTN_Manager-version.rpm, where version is the version number.

2.     Enter the path where the DTN Manager software package is saved (for example, /root), and install the software package.

[root@localhost ~]# rpm -ivh SeerEngine_DC_DTN_Manager-E3701.rpm

Preparing...                          ################################# [100%]

Setup has detected a compatible jre-headless - 1.8.0_65

Remove hsperfdata

Creating system user 'sdnadmin'...

...done.

Configuring PostgreSQL database...

...done.

Updating / installing...

   1:vnf-manager-3701-1.el7.centos    ################################# [100%]

3.     During the installation process, the system will prompt you whether to enter the team token. Select Y, press Enter, and then enter the team token (for example, AuroraVnfmToken37).

The team token is the authorization certificate for access among the DTN Manager services. Make sure members in the same cluster use the same team token.

Do you want to input a TeamToken? [Y/N]:y

Please enter a TeamToken:AuroraVnfmToken37

Certificate was added to keystore

Creating userinfo table...

...done. 

4.     After you enter the team token, the system prompts you to enter the username and password. Enter the username and password (for example, h3cvnfm and skyline123) to complete the installation. After DTN Manager is successfully installed, it will run automatically.

 

IMPORTANT:

When creating an operating system user, do not use any of the usernames reserved by the system and business software. Otherwise, the operating system and the business software might operate abnormally. The reserved usernames (case-sensitive) include root, bin, daemon, adm, ip, sync, shutdown, halt, mail, operator, games, ftp, nobody, systemd-bus-proxy, systemd-network, dbus, polkitd, libstoragemgmt, abrt, rpc, postfix, tss, quagga, sshd, postgres, ntp, chrony, tcpdump, sdn, sdnadmin.

 

Please enter a username:h3cvnfm

Please enter a password:

Please enter the password again:

Identify whether DTN Manager is installed successfully

1.     Identify whether DTN Manager is installed successfully.

If yes, its version information is displayed.

[root@localhost ~]# rpm -qa | grep vnf

vnf-manager-3701-1.el7.centos.x86_64

2.     Identify whether the sdna, sdnc, and handshake services of DTN Manager are enabled.

If each service is in active (running) state, DTN Manager is started successfully.

[root@localhost ~]# systemctl status sdna.service

sdna.service - sdna systemd conf

   Loaded: loaded (/etc/systemd/system/sdna.service; enabled)

   Active: active (running) since Tue 2015-12-29 19:40:49 CST; 3h 13min ago

  Process: 2506 ExecStart=/etc/systemd/system/startSdna (code=exited, status=0/SUCCESS)

 Main PID: 2508 (java)

[root@localhost ~]# systemctl status sdnc.service

sdnc.service - sdnc systemd conf

   Loaded: loaded (/etc/systemd/system/sdnc.service; enabled)

   Active: active (running) since Tue 2015-12-29 19:40:49 CST; 3h 22min ago

  Process: 2559 ExecStart=/etc/systemd/system/startSdnc (code=exited, status=0/SUCCESS)

 Main PID: 2585 (java)

[root@localhost ~]# systemctl status handshake.service

handshake.service - handshake systemd conf

   Loaded: loaded (/etc/systemd/system/handshake.service; enabled)

   Active: active (running) since Tue 2015-12-29 19:40:49 CST; 3h 25min ago

  Process: 2474 ExecStart=/etc/systemd/system/startHandshake (code=exited, status=0/SUCCESS)

 Main PID: 2475 (hsServer.sh)

Preconfigure the simulation device image

After DTN Manager is installed, you must upload the qco image (released together with the software image) used for creating simulation devices to the /opt/sdn/VNFMV2/NFV-DSM/images directory on the server. If the directory does not exist, execute the following command to create it.

[root@localhost ~]# mkdir -p /opt/sdn/VNFMV2/NFV-DSM/images

As a best practice, do not modify the image name. If you must modify it, make sure the new name meets the following requirements:

·     The image name is a case-sensitive string of up to 128 characters.

·     Only letters, digits, underscores (_), dots (.), and hyphens (-) are supported.

·     The image name extension must be qco. Example: v6850-56hf-CMW710-r6607-X64.qco.

Additionally, you cannot modify the characters before the second hyphen (-). For example, you cannot modify v6850 in v6850-56hf. Otherwise, the image does not take effect.
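
As an illustration only, a renamed image file can be pre-checked against the character and extension rules above with a small shell test (the name below is the example from this section; the test does not cover the second-hyphen rule):

name=v6850-56hf-CMW710-r6607-X64.qco
if [[ ${#name} -le 128 && "$name" =~ ^[A-Za-z0-9._-]+\.qco$ ]]; then echo "image name format OK"; else echo "invalid image name"; fi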

Uninstall DTN Manager

You need to uninstall DTN Manager in either of the following cases:

·     Uninstall and reinstall—To uninstall DTN Manager and then install DTN Manager of a new version, you only need to uninstall DTN Manager, and do not need to uninstall its dependent software packages.

·     Uninstall completely—To completely uninstall DTN Manager, first uninstall DTN Manager and then uninstall its dependent software packages. When uninstalling the dependent software packages of DTN Manager, make sure the system can access the Internet.

 

 

NOTE:

·     If DTN Manager and the host are deployed on the same server, uninstalling DTN Manager without reserving the configuration will delete the preconfigured script files in “Create the host script file directory” and preconfigured image files in “Preconfigure the simulation device image.” To use the host function later, you must upload the script files and image files to the specified directories again.

·     Before uninstalling DTN Manager, make sure the psql tool is not running. Otherwise, errors might occur when you reinstall DTN Manager. Execute the pstree | grep psql command to identify whether the psql process is running. If it is running, exit the psql prompt as the user that started it to stop the process.

·     If you delete the configuration information when uninstalling DTN Manager, you must upload the license files again when you reinstall DTN Manager. Therefore, you must first back up the license files before uninstallation.

 

Uninstall DTN Manager

When you uninstall DTN Manager, you can select one of the following options:

·     Do not reserve configuration—When you uninstall DTN Manager, its configuration file is also deleted.

·     Reserve configuration—When you uninstall DTN Manager, its configuration file is not deleted. When you install DTN Manager of a new version, the original configuration file is automatically read and restored.

To uninstall DTN Manager:

1.     Execute the following command to uninstall DTN Manager.

[root@localhost ~]# rpm -e vnf-manager

Uninstalling vnf-manager (Version: 3701 )...

Stopping zookeeper ... no zookeeper to stop

Stopping cassandra ... no cassandra to stop

Remove hsperfdata

2.     During the uninstallation process, the system prompts you whether to purge the software package. Select Y to uninstall it without reserving the configuration, or select N to uninstall it and reserve the configuration. More specifically:

¡     Uninstall it without reserving the configuration

Do you want to purge the package? [Y/N]:Y

Removing team ip...

...done.

¡     Uninstall it and reserve its configuration

Do you want to purge the package? [Y/N]:N

Removing team ip...

...done.

Uninstall the dependent software packages of DTN Manager

1.     Uninstall the OpenJDK 8 JRE software package of DTN Manager.

[root@localhost ~]# yum -y remove java-1.8.0-openjdk java-1.8.0-openjdk-headless java-1.8.0-openjdk-javadoc

2.     Uninstall the other dependent software packages of DTN Manager.

[root@localhost ~]# yum -y remove postgresql redhat-lsb-core

3.     Update the source list.

[root@localhost ~]# yum update

Upgrade DTN Manager

To upgrade DTN Manager, first uninstall it and reserve its configuration, and then install the new version.

Uninstall DTN Manager and reserve its configuration

In this section, when you uninstall DTN Manager, its configuration file is not deleted. When you install DTN Manager of a new version, the original configuration file is automatically read and restored.

To uninstall DTN Manager and reserve its configuration:

1.     Execute the following command to uninstall DTN Manager.

[root@localhost ~]# rpm -e vnf-manager

Uninstalling vnf-manager (Version: 3701 )...

Stopping zookeeper ... no zookeeper to stop

Stopping cassandra ... no cassandra to stop

Remove hsperfdata

2.     During the uninstallation process, the system prompts you whether to purge the software package. Select N to uninstall it and reserve the configuration.

Do you want to purge the package? [Y/N]:N

Removing team ip...

...done.

Install DTN Manager of the new version

1.     Enter the path where the DTN Manager software package (an .rpm file) of the new version is saved (for example, /root), and install the software package.

The software package name is in the format of SeerEngine_DC_DTN_Manager-version.rpm, where version indicates the software version number.

[root@localhost ~]# rpm -ivh SeerEngine_DC_DTN_Manager-E3701.rpm

Preparing...                          ################################# [100%]

Setup has detected a compatible jre-headless - 1.8.0_65

Remove hsperfdata

Creating system user 'sdnadmin'...

...done.

Configuring PostgreSQL database...

...done.

Updating / installing...

   1:vnf-manager-3701-1.el7.centos    ################################# [100%]

2.     During the installation process, the system will prompt you whether to enter the team token. Select Y, press Enter, and then enter the team token (for example, AuroraVnfmToken37).

The team token is the authorization certificate for access among the DTN Manager services. Make sure members in the same cluster use the same team token.

Do you want to input a TeamToken? [Y/N]:y

Please enter a TeamToken:AuroraVnfmToken37

Certificate was added to keystore

Creating userinfo table...

...done. 

3.     After you enter the team token, the system prompts you to enter the username and password. Enter the username and password (for example, h3cvnfm and skyline123) to complete the installation. After DTN Manager is successfully installed, it will run automatically.

 

IMPORTANT:

When creating an operating system user, do not use any of the usernames reserved by the system and business software. Otherwise, the operating system and the business software might operate abnormally. The reserved usernames (case-sensitive) include root, bin, daemon, adm, ip, sync, shutdown, halt, mail, operator, games, ftp, nobody, systemd-bus-proxy, systemd-network, dbus, polkitd, libstoragemgmt, abrt, rpc, postfix, tss, quagga, sshd, postgres, ntp, chrony, tcpdump, sdn, sdnadmin.

 

Please enter a username:h3cvnfm

Please enter a password:

Please enter the password again:

Identify whether DTN Manager is installed successfully

1.     Identify whether DTN Manager is installed successfully.

If yes, its version information is displayed.

[root@localhost ~]# rpm -qa | grep vnf

vnf-manager-3701-1.el7.centos.x86_64

2.     Identify whether the sdna, sdnc, and handshake services of DTN Manager are enabled.

If each service is in active (running) state, DTN Manager is started successfully.

[root@localhost ~]# systemctl status sdna.service

sdna.service - sdna systemd conf

   Loaded: loaded (/etc/systemd/system/sdna.service; enabled)

   Active: active (running) since Tue 2015-12-29 19:40:49 CST; 3h 13min ago

  Process: 2506 ExecStart=/etc/systemd/system/startSdna (code=exited, status=0/SUCCESS)

 Main PID: 2508 (java)

[root@localhost ~]# systemctl status sdnc.service

sdnc.service - sdnc systemd conf

   Loaded: loaded (/etc/systemd/system/sdnc.service; enabled)

   Active: active (running) since Tue 2015-12-29 19:40:49 CST; 3h 22min ago

  Process: 2559 ExecStart=/etc/systemd/system/startSdnc (code=exited, status=0/SUCCESS)

 Main PID: 2585 (java)

[root@localhost ~]# systemctl status handshake.service

handshake.service - handshake systemd conf

   Loaded: loaded (/etc/systemd/system/handshake.service; enabled)

   Active: active (running) since Tue 2015-12-29 19:40:49 CST; 3h 25min ago

  Process: 2474 ExecStart=/etc/systemd/system/startHandshake (code=exited, status=0/SUCCESS)

 Main PID: 2475 (hsServer.sh)

Incorporate DTN Manager on the controller

After the steps above, you can incorporate DTN Manager on the Automation > Simulation > Build Simulation Network page of the controller. For more information, see the simulation network online help.

NOTE: One DTN Manager can be incorporated by only one DC controller cluster.