H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Kolla-E61xx-5W108


Contents

Overview

SeerEngine-DC Neutron plug-ins

SeerEngine-DC Neutron security plug-ins

Preparing for installation

Hardware requirements

Software requirements

Deploying OpenStack by using Kolla Ansible

Preprovisioning basic SeerEngine-DC settings

Installing OpenStack plug-ins

Setting up the basic environment

Installing the SeerEngine-DC Neutron plug-ins

Obtaining the SeerEngine-DC Neutron plug-in installation package

Installing the SeerEngine-DC Neutron plug-ins on the OpenStack control node

Parameters and fields

Upgrading the SeerEngine-DC Neutron plug-ins

Installing the SeerEngine-DC Neutron security plug-in on OpenStack

Installing the security plug-in on the controller node

Upgrading the SeerEngine-DC Neutron security plug-in

Upgrading non-converged plug-ins to converged plug-ins

(Optional.) Configuring the metadata service for network nodes

FAQ

The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?

After the plug-ins are installed successfully, what should I do if the controller fails to interconnect with the cloud platform?

Live migration of a VM to a specified destination host failed because of a service exception on the destination host. What should I do?

The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

The trunk function is unavailable after I upgrade a non-converged OpenStack Mitaka plug-in to a converged one and configure h3c_trunk. What should I do?

 


Overview

This document describes how to install OpenStack plug-ins for interoperability with OpenStack cloud platforms. Then SeerEngine-DC can process requests from the OpenStack cloud platforms.

OpenStack plug-ins include SeerEngine-DC Neutron plug-ins, Nova patch, openvswitch-agent patch, and DHCP failover components.

SeerEngine-DC Neutron plug-ins

Neutron is an OpenStack service that manages all virtual networking infrastructure (VNI) in an OpenStack environment. It provides virtual network services to the devices managed by the OpenStack compute service.

SeerEngine-DC Neutron plug-ins are developed for the SeerEngine-DC controller based on the OpenStack framework.

The SeerEngine-DC Neutron plug-ins allow deployment of the network configuration obtained from OpenStack through REST APIs on the SeerEngine-DC controller, including tenants' networks, subnets, routers, and ports.
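For example, once the plug-ins are installed and the cloud platform is connected to the controller, standard OpenStack CLI operations such as the following are synchronized to SeerEngine-DC through the plug-ins (the resource names are examples only, and the OpenStack CLI and admin credentials are assumed to be available):

[root@localhost ~]# openstack network create net1

[root@localhost ~]# openstack subnet create --network net1 --subnet-range 10.0.0.0/24 subnet1

[root@localhost ~]# openstack router create router1

[root@localhost ~]# openstack router add subnet router1 subnet1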

 

CAUTION:

To avoid service interruptions, do not modify the settings issued by the cloud platform on the controller, such as the virtual link layer network, vRouter, and vSubnet settings after the plug-ins connect to the OpenStack cloud platform.

 

SeerEngine-DC Neutron security plug-ins

SeerEngine-DC Neutron security plug-ins are developed for the SeerEngine-DC controller based on the OpenStack framework. SeerEngine-DC Neutron security plug-ins can obtain security configuration from OpenStack through REST APIs and synchronize the configuration to the SeerEngine-DC controllers. They can obtain settings for the tenants' FW, LB, or VPN.


Preparing for installation

Hardware requirements

Table 1 shows the hardware requirements for installing the SeerEngine-DC Neutron plug-ins on a server or virtual machine.

Table 1 Hardware requirements

CPU: Single-core or multi-core

Memory size: 2 GB or more

Disk space: 5 GB or more

 

Software requirements

Table 2 shows the software requirements for installing the SeerEngine-DC Neutron plug-ins.

Table 2 Software requirements

Item

Supported versions

OpenStack deployed by using Kolla-Ansible

·     OpenStack Ocata

·     OpenStack Pike

·     OpenStack Queens

·     OpenStack Rocky

·     OpenStack Stein

·     OpenStack Train

·     OpenStack Ussuri

 

IMPORTANT:

Before you install the OpenStack plug-ins, make sure the following requirements are met:

·     Your system has a reliable Internet connection.

·     OpenStack has been deployed correctly. Verify that the /etc/hosts file on all nodes has the host name-IP address mappings, and the OpenStack Neutron extension services (Neutron-FWaas, Neutron-VPNaas, or Neutron-LBaas) have been deployed. For the deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.

 

 

NOTE:

·     The SeerEngine-DC Neutron security plug-in does not support OpenStack Stein, Train, or Ussuri.

·     For the installation of the converged SeerEngine_DC plug-ins (SeerEngine_DC_PLUGIN-version-py2.7.egg), see H3C SeerEngine-DC OpenStack Converged Plug-Ins Installation Guide.

 

 


Deploying OpenStack by using Kolla Ansible

Before installing the plug-ins, deploy OpenStack by using Kolla Ansible first. For the OpenStack deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.


Preprovisioning basic SeerEngine-DC settings

This procedure preprovisions only basic SeerEngine-DC settings. For the configuration in a specific scenario, see the SeerEngine-DC configuration guide for that scenario.

Table 3 Preprovisioning basic SeerEngine-DC settings

Item

Configuration directory

Fabrics

Automation > Data Center Networks > Fabrics > Fabrics

VDS

Automation > Data Center Networks > Common Network Settings > Virtual Distributed Switch

IP address pool

Automation > Data Center Networks > Resource Pools > IP Address Pools

VNID pools (VLANs, VXLANs, and VLAN-VXLAN mappings)

Automation > Data Center Networks > Resource Pools > VNID Pools > VLANs

Automation > Data Center Networks > Resource Pools > VNID Pools > VXLANs

Automation > Data Center Networks > Resource Pools > VNID Pools > VLAN-VXLAN Mappings

Add access devices and border devices to a fabric

Automation > Data Center Networks > Fabrics > Fabrics

L4-L7 device, physical resource pool, and template

Automation > Data Center Networks > Resource Pools > Devices > Physical Devices

Automation > Data Center Networks > Resource Pools > Devices > L4-L7 Physical Resource Pools

Border gateway

Automation > Data Center Networks > Common Network Settings > Gateways

Domains and hosts

Automation > Data Center Networks > Fabrics > Domains

Automation > Data Center Networks > Fabrics > Domains > Hosts

Interoperability with OpenStack

Automation > Virtual Networking > OpenStack

NOTE:

·     Make sure the cloud platform name (case sensitive) is the same as the value for the cloud_region_name parameter in the ml2_conf.ini file of the Neutron plug-in.

·     Make sure the VNI range is the same as the VXLAN VNI range on the cloud platform, as illustrated in the example below.
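A minimal sketch of the matching settings in the Neutron plug-in's ml2_conf.ini, assuming the cloud platform registered on the controller is named default and its VXLAN VNI range is 1 to 500 (both values are examples):

[SDNCONTROLLER]

cloud_region_name = default

[ml2_type_vxlan]

vni_ranges = 1:500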

 


Installing OpenStack plug-ins

The SeerEngine-DC Neutron plug-ins can be installed on different OpenStack versions. The installation package varies by OpenStack version. However, you can use the same procedure to install the Neutron plug-ins on different OpenStack versions. This document uses OpenStack Ocata as an example.

The SeerEngine-DC Neutron plug-ins are installed on the OpenStack control node.

Setting up the basic environment

Before installing SeerEngine-DC Neutron plug-ins on the OpenStack control node, set up the basic environment on the node.

To set up the basic environment:

1.     Update the software source list, and then download and install the Python tools.

¡     CentOS 8 operating system:

[root@localhost ~]# yum clean all

[root@localhost ~]# yum makecache

[root@localhost ~]# yum install -y python3-pip python3-setuptools

¡     Other CentOS operating systems:

[root@localhost ~]# yum clean all

[root@localhost ~]# yum makecache

[root@localhost ~]# yum install -y python-pip python-setuptools

2.     Install runlike.

¡     CentOS 8 operating system:

[root@localhost ~]# pip3 install runlike

¡     Other CentOS operating systems:

[root@localhost ~]# pip install runlike

3.     Log in to the controller node and edit the /etc/hosts file. Add the following information to the file.

¡     IP and name mappings of all hosts in this OpenStack environment. To obtain this information, access the SeerEngine-DC controller and select Automation > Data Center Networks > Fabrics > Domains > Hosts.

¡     IP and name mappings of all leaf, spine, and border devices in this scenario. To obtain this information, access the SeerEngine-DC controller and select Automation > Data Center Networks > Resource Pools > Devices > Physical Devices.

[root@localhost ~]# vim /etc/hosts

127.0.0.1 localhost

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

99.0.83.75 controller

99.0.83.76 compute1

99.0.83.77 compute2

99.0.83.78 nfs-server

99.0.83.79 compute3

99.0.83.74 compute4
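A quick way to confirm that the new entries resolve correctly (the host names are from the example above; the command should return the addresses configured in /etc/hosts):

[root@localhost ~]# getent hosts controller compute1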

Installing the SeerEngine-DC Neutron plug-ins

Obtaining the SeerEngine-DC Neutron plug-in installation package

The SeerEngine-DC Neutron plug-ins are included in the SeerEngine-DC OpenStack package. Obtain the SeerEngine-DC OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.

Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.

Installing the SeerEngine-DC Neutron plug-ins on the OpenStack control node

1.     Generate the startup script for the neutron-server container.

[root@localhost ~]# runlike neutron_server>docker-neutron-server.sh

2.     Modify the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@localhost ~]# vi /etc/kolla/neutron-server/neutron.conf

b.     Configure the neutron.conf configuration file based on the operating system running in the Kolla environment.

-     If a CentOS operating system runs in the Kolla environment, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS to configure the neutron.conf configuration file.

-     If a Ubuntu operating system runs in the Kolla environment, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu to configure the neutron.conf configuration file.

3.     Modify the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@localhost ~]# vi /etc/kolla/neutron-server/ml2_conf.ini

b.     Press I to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. For information about the parameters, see "ml2_conf.ini."

[ml2]

type_drivers = vxlan,vlan

tenant_network_types = vxlan,vlan

mechanism_drivers = ml2_h3c

extension_drivers = ml2_extension_h3c,qos

[ml2_type_vlan]

network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]

vni_ranges = 1:500

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.

4.     Add plug-ins configuration items to the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@localhost ~]# vi /etc/kolla/neutron-server/ml2_conf.ini

b.     Configure the ml2_conf.ini file based on the operating system running in the Kolla environment.

-     If a CentOS operating system runs in the Kolla environment, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS to configure the [SDNCONTROLLER] configuration group items for the ml2_conf.ini file.

-     If a Ubuntu operating system runs in the Kolla environment, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu to configure the [SDNCONTROLLER] configuration group items for the ml2_conf.ini file.

5.     Copy the plug-ins installation package to the neutron_server container.

[root@localhost ~]# docker cp SeerEngine_DC_PLUGIN-E3608-py2.7.egg neutron_server:/

6.     Access the file folder on the neutron_server container where the plug-ins installation package resides and install the websocket-client and plug-in package.

[root@localhost ~]# docker exec -it -u root neutron_server  bash

CentOS 8:

(neutron-server) [root@localhost ~]# yum install -y python3-websocket-client

Other CentOS versions:

(neutron-server) [root@localhost ~]# yum install -y python-websocket-client

(neutron-server) [root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3608-py2.7.egg

(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install

 

IMPORTANT:

·     Make sure the version of python-websocket-client is 0.56.

·     Before executing the h3c-sdnplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.

 

 

NOTE:

An error might be reported when the h3c-sdnplugin controller install command is executed. Just ignore it.
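Optionally, you can confirm inside the container that the required websocket-client package is present before creating the new image (the rpm query assumes a CentOS-based Kolla image; the IMPORTANT note above requires version 0.56):

CentOS 8:

(neutron-server) [root@localhost ~]# rpm -q python3-websocket-client

Other CentOS versions:

(neutron-server) [root@localhost ~]# rpm -q python-websocket-client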

 

7.     Create neutron-server container images.

[root@localhost ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)

[root@localhost ~]# docker ps | grep $neutron_server_image

16d60524b8b3        kolla/centos-source-neutron-server:rocky              "dumb-init --single-?   16 months ago       Up 2 weeks                              neutron_server

[root@localhost ~]# docker commit 16d60524b8b3 kolla/neutron-server-h3c (use the container ID obtained from the preceding command)

[root@localhost ~]# docker rm -f neutron_server

[root@localhost ~]# docker tag $neutron_server_image kolla/neutron-server-origin

[root@localhost ~]# docker rmi $neutron_server_image

[root@localhost ~]# docker tag kolla/neutron-server-h3c $neutron_server_image

[root@localhost ~]# docker rmi kolla/neutron-server-h3c

8.     Start the neutron-server container.

[root@localhost ~]# source docker-neutron-server.sh

9.     View the startup status of the containers. If their status is Up, they have been started up correctly.

[root@localhost ~]# docker ps --filter "name=neutron_server"

CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES

289e4e132a9b        kolla/centos-source-neutron-server:ocata   "dumb-init --single-?   1 minutes ago        Up 1 minutes                              neutron_server

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

ml2_conf.ini

Parameter

Required value

Description

type_drivers

vxlan,vlan

Driver type.

vxlan must be specified as the first driver type.

tenant_network_types

vxlan,vlan

Type of the networks to which the tenants belong. For intranet, only vxlan is available. For extranet, only vlan is available.

·     In the host overlay scenario and network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type.

·     In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type.

·     In the host overlay, network overlay with hierarchical port binding, and network overlay without hierarchical port binding hybrid scenario, vxlan must be specified as the first network type. In this scenario, you can create a VLAN only from the background CLI, REST API, or Web administration interface.

mechanism_drivers

ml2_h3c

Name of the ml2 driver.

To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch, ml2_h3c.

To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch.

extension_drivers

ml2_extension_h3c,qos

Names of the ml2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify the qos value for this parameter. If port security is not enabled on OpenStack, you do not need to specify the port_security value for this parameter. (OpenStack Ocata 2017.1 does not support the port_security value.)

network_vlan_ranges

N/A

Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999.

vni_ranges

N/A

Value range for the VXLAN ID of the intranet, for example, 1:500.

 

Upgrading the SeerEngine-DC Neutron plug-ins

CAUTION:

·     Services might be interrupted during the SeerEngine-DC Neutron plug-ins upgrade procedure. Make sure you understand the impact of the upgrade before performing it on a live network.

·     The plug-ins settings will not be restored automatically after an upgrade in the Kolla environment. Before an upgrade, back up the settings in the /etc/kolla/neutron-server/neutron.conf and /etc/kolla/neutron-server/ml2_conf.ini configuration files. After the upgrade, modify the parameter settings according to the configuration files to ensure configuration consistency before and after the upgrade.
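For example, you can back up the two configuration files on the control node as follows before starting the upgrade (the backup directory is only an example):

[root@localhost ~]# mkdir -p /root/kolla-config-backup

[root@localhost ~]# cp -p /etc/kolla/neutron-server/neutron.conf /etc/kolla/neutron-server/ml2_conf.ini /root/kolla-config-backup/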

 

Upgrade with the neutron_server container removed

1.     Remove the container installed with the old version of the plug-ins and the container image.

[root@controller ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)

a.     If no docker-neutron-server.sh file exists, execute the following command. If such a file exists, skip this step.

[root@controller ~]# runlike neutron_server>docker-neutron-server.sh

b.     Remove the container installed with the old version of the plug-ins and the container image.

[root@controller ~]# docker rm -f neutron_server

[root@controller ~]# docker rmi  $neutron_server_image

c.     Restore the default container and image in the Kolla environment.

[root@localhost ~]# docker tag kolla/neutron-server-origin $neutron_server_image

[root@localhost ~]# docker rmi kolla/neutron-server-origin

[root@controller ~]# source docker-neutron-server.sh

 

IMPORTANT:

Before restarting the neutron_server container, you must restore the configuration in the neutron.conf and ml2_conf.ini files and remove the plug-in-related configuration.

 

2.     Install the new version of plug-ins. For the installation procedure, see "Installing the SeerEngine-DC Neutron plug-ins".

Upgrade with the neutron_server container retained

To upgrade the plug-ins with the neutron_server container retained, remove the old version of the plug-ins and then install the new version of the plug-ins in the neutron_server container.

1.     Access the neutron_server container and remove the old version of the plug-ins.

[root@localhost ~]# docker exec -it -u root neutron_server bash

(neutron-server) [root@localhost ~]# h3c-sdnplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

(neutron-server) [root@localhost ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3608:

/usr/bin/h3c-agent

/usr/bin/h3c-sdnplugin

……

2.     Install the new version of the plug-ins, and then examine and configure the plug-in items based on the document for the new version of the plug-ins.

[root@localhost ~]# docker cp SeerEngine_DC_PLUGIN-E3608-py2.7.egg neutron_server:/

[root@localhost ~]# docker exec -it -u root neutron_server bash

(neutron-server) [root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3608-py2.7.egg

(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install

 

IMPORTANT:

Before executing the h3c-sdnplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.

 

 

NOTE:

An error might be reported when the h3c-sdnplugin controller install command is executed. Just ignore it.

 

3.     Exit and then restart the neutron_server container.

(neutron-server)[root@controller01 ~]# exit

[root@controller01 ~]# docker restart neutron_server

Installing the SeerEngine-DC Neutron security plug-in on OpenStack

The SeerEngine-DC Neutron security plug-in can be installed on multiple versions of OpenStack. This section uses OpenStack Pike as an example to describe the security plug-in installation.

The SeerEngine-DC Neutron security plug-in is installed on the OpenStack controller node. Before installation, set up the base environment on the node.

Installing the security plug-in on the controller node

Obtaining the installation package

Obtain and copy the security plug-in installation package of the required version to the target installation directory on the server or virtual machine.

Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP.

 

IMPORTANT:

To avoid damaging the installation packages, select binary mode if you are to transfer the package through FTP or TFTP.

 

Installing the security plug-in on the OpenStack controller node

1.     Generate startup scripts for the neutron-server containers.

[root@localhost ~]# runlike neutron_server>docker-neutron-server.sh

2.     Generate startup scripts for the h3c-sec-agent containers.

[root@localhost ~]# cp docker-neutron-server.sh docker-h3c-sec-agent.sh

[root@localhost ~]# sed -i 's/neutron-server/h3c-sec-agent/g' docker-h3c-sec-agent.sh

 

 

NOTE:

If the firewall plug-in is not in agent mode, you do not need to perform the steps related to the h3c-sec-agent container (steps 2, 11, 13, 15, and 17) during installation and upgrade.

 

3.     Edit the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@localhost ~]# sudo vi /etc/kolla/neutron-server/neutron.conf

b.     Press I to switch to insert mode, and then edit the configuration file. For more information about the parameters, see "Parameters and fields."

For OpenStack Pike and Rocky, edit the neutron.conf configuration file as follows:

[DEFAULT]

service_plugins = firewall, h3c_security_core,lbaasv2,vpnaas

 

[service_providers]

service_provider=FIREWALL:H3C:networking_sec_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:networking_sec_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

 

IMPORTANT:

For OpenStack Pike, when the load balancer supports multiple resource pools of the Context type, you must preprovision a resource pool named dmz or core on the controller, and then change the value of the service_provider parameter to LOADBALANCERV2:DMZ:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDMZDriver:default or LOADBALANCERV2:CORE:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDMZDriver:default accordingly.

 

¡     For OpenStack Ocata, edit the configuration file as follows:

[DEFAULT]

service_plugins = firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,vpnaas

 

[service_providers]

service_provider=FIREWALL:H3C:networking_sec_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasPluginDriver:default

service_provider=VPN:H3C:networking_sec_h3c.vpn.h3c_vpnplugin_ko_driver.H3CVpnPluginDriver:default

 

IMPORTANT:

The service_provider parameter value for the VPN service differs between OpenStack Pike/Rocky and OpenStack Ocata. Be clear about the differences.

 

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.

4.     Edit the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@localhost ~]# vi /etc/kolla/neutron-server/ml2_conf.ini

b.     Press I to switch to insert mode and configure the parameters in the configuration file as follows. For more information about the parameters, see "Parameters and fields."

[ml2]

type_drivers = vxlan,vlan

tenant_network_types = vxlan,vlan

mechanism_drivers = ml2_h3c

extension_drivers = ml2_extension_h3c,qos,port_security

[ml2_type_vlan]

network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]

vni_ranges = 1:500

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

5.     Edit the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@localhost ~]# vi /etc/kolla/neutron-server/neutron.conf

b.     Press I to switch to insert mode, and then edit the configuration file. For more information about the parameters, see "Parameters and fields."

[SEC_SDNCONTROLLER]

url = https://127.0.0.1:30000

username = sdn

password = skyline

domain = sdn

timeout = 1800

retry = 10

white_list = False

firewall_type = CGSR

fw_share_by_tenant = False

lb_type = CGSR

resource_mode = CORE_GATEWAY

resource_share_count = 1

auto_create_resource = True

nfv_ha = True

use_neutron_credential = False

firewall_force_audit = False

sec_output_json_log = False

lb_enable_snat = False

vendor_rpc_topic = VENDOR_PLUGIN

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

cgsr_fw_context_limit = 0

enable_iam_auth = False

enable_firewall_metadata = False

lb_member_slow_shutdown = False

enable_multi_gateways = False

enable_multi_segments = False

tenant_gateway_name = None

tenant_gw_selection_strategy = match_first

enable_router_nat_without_firewall = False

directly_external = OFF

directly_external_suffix = DMZ

lb_resource_mode = SP

enable_lb_xff = False

enable_lb_certchain = True

enable_firewall_object_group = False

6.     If you have set the white_list parameter to True, perform the following tasks:

¡     Delete the username, password, and domain parameters for SEC_SDNCONTROLLER in the ml2_sec_conf_h3c.ini configuration file.

¡     Add an authentication-free user to the controller.

-     Enter the IP address of the host where the Neutron server resides.

-     Specify the role as Admin.

7.     If you have set the use_neutron_credential parameter to True, perform the following steps:

a.     Modify the neutron.conf configuration file.

# Use the vi editor to open the neutron.conf configuration file.

# Press I to switch to insert mode, and add the following configuration. For information about the parameters, see "neutron.conf".

[keystone_authtoken]

admin_user = neutron

admin_password = 123456

# Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.

b.     Add an admin user to the controller.

# Configure the username as neutron.

# Specify the role as Admin.

# Enter the password of the neutron user in OpenStack.

8.     Copy the installation package to the neutron_server container.

[root@localhost ~]# docker cp SeerEngine_DC_SEC_PLUGIN-E3603P01-py2.7.egg neutron_server:/

9.     Install the package.

[root@localhost ~]# docker exec -it -u root neutron_server bash

(neutron-server) [root@localhost ~]# easy_install SeerEngine_DC_SEC_PLUGIN-E3603P01-py2.7.egg

(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install

 

IMPORTANT:

Before executing the h3c-sdnplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.

 

IMPORTANT:

The system might display an error message when you execute the h3c-sdnplugin controller install command. You can ignore this message.

 

10.     Generate the images for the neutron-server containers.

[root@localhost ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)

[root@localhost ~]# docker commit neutron_server kolla/neutron-server-h3c

[root@localhost ~]# docker tag $neutron_server_image kolla/neutron-server-origin

11.     Generate the images for the h3c-sec-agent containers.

[root@localhost ~]# h3c_sec_agent_image=$(echo $neutron_server_image |sed 's/neutron-server/h3c-sec-agent/')

[root@localhost ~]# docker tag kolla/neutron-server-h3c $h3c_sec_agent_image

12.     Remove the original neutron-server container and image, and retag the newly generated image with the original image name.

[root@localhost ~]# docker rm -f neutron_server

[root@localhost ~]# docker rmi $neutron_server_image

[root@localhost ~]# docker tag kolla/neutron-server-h3c $neutron_server_image

[root@localhost ~]# docker rmi kolla/neutron-server-h3c

13.     Copy the configuration of neutron-server to the h3c-sec-agent directory, and edit the configuration.

[root@localhost ~]# cp -pR /etc/kolla/neutron-server /etc/kolla/h3c-sec-agent

[root@localhost ~]# sed -i 's/neutron-server/h3c-sec-agent/g' /etc/kolla/h3c-sec-agent/config.json

14.     Start the neutron-server services.

[root@localhost ~]# source docker-neutron-server.sh

15.     Start the h3c-sec-agent services.

[root@localhost ~]# source docker-h3c-sec-agent.sh

16.     Verify the status of the neutron-server services.

[root@localhost ~]# docker ps --filter "name=neutron_server"

CONTAINER ID    IMAGE       COMMAND           CREATED   STATUS   PORTS  NAMES

289e4e132a9b  kolla/centos-source-neutron-server:ocata   "dumb-init --single-?

1 minutes ago  Up 1 minutes    neutron_server

17.     Verify the status of the h3c-sec-agent services.

[root@localhost ~]# docker ps --filter "name=h3c_sec_agent"

CONTAINER ID    IMAGE       COMMAND           CREATED   STATUS   PORTS  NAMES

C334f7ec9857  kolla/centos-source-h3c-sec-agent:ocata   "dumb-init --single-?

1 minutes ago  Up 1 minutes    h3c_sec_agent

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

neutron.conf

 

Parameter

Description

service_plugins

Extension plug-ins loaded to OpenStack.

The security plug-in supports the following firewall services, and you can change the values as follows:

·     For the open-source firewall plug-in in agent mode, keep the firewall value in service_plugins as firewall.

·     If deployment of firewall policies and rules takes a long time, change firewall in the value to fwaas_h3c.

·     For the open-source firewall plug-in not in agent mode, change firewall in the value to firewall_h3c.

To configure firewall services, add h3c_security_core to the value.

In the /etc/kolla/neutron-server/ directory of neutron_server, you can configure the service_provider parameter only once for the same service. Do not configure the service_provider parameter in fwaas_driver.ini after you configure it in neutron.conf. This rule also applies to LBaaS and VPNaaS.

To ensure that h3c-sec-agent can load the driver successfully, change the value of the driver field in the [fwaas] section of the /etc/kolla/neutron-server/fwaas_driver.ini file to networking_sec_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver.

service_provider

Directory where the extension plug-ins are saved.

admin_user

Admin username for Keystone authentication in OpenStack, for example, neutron.

admin_password

Admin password for Keystone authentication in OpenStack, for example, 123456.

 

ml2_conf.ini

 

Parameter

Description

type_drivers

Driver type.

vxlan must be specified as the first driver type.

tenant_network_types

Type of the networks to which the tenants belong. For intranet, only vxlan is available. For extranet, only vlan is available.

·     In the host overlay scenario and network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type.

·     In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type.

·     In the host overlay, network overlay with hierarchical port binding, and network overlay without hierarchical port binding hybrid scenario, vxlan must be specified as the first network type. In this scenario, you can create a VLAN only from the background CLI, REST API, or Web administration interface.

mechanism_drivers

Name of the ml2 driver.

To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch, ml2_h3c.

To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch.

extension_drivers

Names of the ml2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify the qos value for this parameter. If port security is not enabled on OpenStack, you do not need to specify the port_security value for this parameter. (OpenStack Ocata 2017.1 does not support the port_security value.)

network_vlan_ranges

Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999.

vni_ranges

Value range for the VXLAN ID of the intranet, for example, 1:500.

 

neutron.conf ([SEC_SDNCONTROLLER])

 

Parameter

Description

url

URL address for accessing Unified Platform.

username

Username for logging in to Unified Platform, for example, sdn. You do not need to configure a username when the use_neutron_credential parameter is set to True.

password

Password for logging in to Unified Platform, for example, skyline. You do not need to configure a password when the use_neutron_credential parameter is set to True. If the password contains a dollar sign ($), enter a backward slash (\) before the dollar sign.

domain

Name of the domain where the SeerEngine-DC controller resides, for example, sdn.

timeout

The amount of time that the Neutron server waits for a response from the SeerEngine-DC controller in seconds, for example, 1800 seconds.

As a best practice, set the waiting time greater than or equal to 1800 seconds.

retry

Number of connection request attempts, for example, 10.

white_list

Whether to enable or disable the authentication-free user feature on OpenStack.

·     True—Enable.

·     False—Disable.

firewall_type

Type of the firewalls created on the controller:

·     CGSR—Context-based gateway service type firewall, each using an independent context. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE—Context-based gateway service type firewall, all using the same context even if they belong to different tenants. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE_BY_COUNT—Context-based gateway service type firewall, all using the same context when the number of contexts reaches the threshold set by the cgsr_fw_context_limit parameter. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY. Only OpenStack Pike supports this firewall type.

·     NFV_CGSR—VNF-based gateway service type firewall, each using an independent VNF. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

fw_share_by_tenant

Whether to enable exclusive use of a gateway service type firewall context by a single tenant and allow the context to be shared by service resources of the tenant when the firewall type is CGSR_SHARE.

lb_type

Type of the load balancers created on the controller.

·     CGSR—Gateway service type load balancer on a context. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, CGSR type load balancers that belong to one tenant use the same context. CGSR type load balancers that belong to different tenants use different contexts. When the value of the lb_resource_mode parameter is MP, CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same context. CGSR type load balancers that belong to different tenants use different contexts.

·     CGSR_SHARE—Gateway service type load balancer on a context. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, all CGSR_SHARE type load balancers use the same context even if they belong to different tenants. When the value of the lb_resource_mode parameter is MP, CGSR_SHARE type load balancers that belong to different tenants and are bound to the same gateway use the same context.

·     NFV_CGSR—Gateway service type load balancer on a VNF. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, NFV_CGSR type load balancers that belong to one tenant use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs. When the value of the lb_resource_mode parameter is MP, NFV_CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs.

resource_mode

Type of the resource created on the controller. The available values are as follows:

·     CORE_GATEWAY—Gateway service resource.

·     NFV—VNF resource. This parameter has been obsoleted.

resource_share_count

Maximum times that the resource node can be shared by resources.

The value is in the range of 1 to 65535. The default value is 1, indicating that the resources cannot be shared.

auto_create_resource

Whether to enable or disable the automatic resources creation feature.

·     True—Enable.

·     False—Disable.

nfv_ha

Whether the NFV and NFV_SHARE resources support stack.

·     True—Support.

·     False—Do not support.

use_neutron_credential

Whether to use the OpenStack Neutron username and password to communicate with the SeerEngine-DC controller.

·     True—Use.

·     False—Do not use.

firewall_force_audit

Whether to audit firewall policies synchronized to the controller by OpenStack. The default value is True for OpenStack Kilo 2015.1 and False for other OpenStack versions.

·     True—Audits firewall policies synchronized to the controller by OpenStack. The auditing state of the synchronized policies on the controller is True (audited).

·     False—Does not audit firewall policies synchronized to the controller by OpenStack. The synchronized policies on the controller retain their previous auditing state.

sec_output_json_log

Whether to output REST API messages between the SeerEngine-DC Neutron security plugins and SeerEngine-DC controller to the OpenStack operating logs in JSON format.

·     True—Enable.

·     False—Disable.

lb_enable_snat

Whether to enable or disable Source Network Address Translation (SNAT) for load balancers on the SeerEngine-DC controller.

·     True—Enable.

·     False—Disable.

vendor_rpc_topic

RPC topic of the vendor. This parameter is required when the vendor needs to obtain Neutron data from the SeerEngine-DC Neutron plug-ins. The available values are as follows:

·     VENDOR_PLUGIN—Default value, which means that the parameter does not take effect.

·     DP_PLUGIN—RPC topic of DPtech.

The value of this parameter must be negotiated by the vendor and H3C.

enable_https

Whether to enable HTTPS bidirectional authentication. The default value is False.

·     True—Enable.

·     False—Disable.

Only OpenStack Pike supports this parameter.

neutron_plugin_ca_file

Save location for the CA certificate of the controller. As a best practice, save the CA certificate in the /usr/share/neutron directory.

Only OpenStack Pike supports this parameter.

neutron_plugin_cert_file

Save location for the Cert certificate of the controller. As a best practice, save the Cert certificate in the /usr/share/neutron directory.

Only OpenStack Pike supports this parameter.

neutron_plugin_key_file

Save location for the Key certificate of the controller. As a best practice, save the Cert certificate in the /usr/share/neutron directory.

Only OpenStack Pike supports this parameter.

cgsr_fw_context_limit

Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context.

This parameter takes effect only when the value of the firewall_type parameter is CGSR_SHARE_BY_COUNT.

Only OpenStack Pike supports this parameter.

enable_iam_auth

Whether to enable IAM interface authentication.

·     True—Enable.

·     False—Disable.

When connecting to Unified Platform, you can set the value to True to use the IAM interface for authentication.

The default value is False.

Only OpenStack Newton supports this parameter.

This parameter is obsolete.

enable_firewall_metadata

Whether to allow the CloudOS platform to issue firewall-related fields such as the resource pool name to the controller.

This parameter is used only for communication with the CloudOS platform.

Only OpenStack Pike supports this parameter.

lb_member_slow_shutdown

Whether to enable slow shutdown when creating an LB real server.

·     True—Enable.

·     False—Disable.

The default value is False.

enable_multi_gateways

Whether to enable the multi-gateway mode for the tenant.

·     True—Enable the multi-gateway mode for the tenant. In an OpenStack environment without the Segments configuration, this setting enables different vRouters to access the external network over different gateways.

·     False—Not enable the multi-gateway mode for the tenant.

The default value is False.

Only OpenStack Pike, Queens, and Rocky support this parameter.

For this parameter to take effect, add h3c_security_core to the value of the service_plugins parameter.

enable_multi_segments

Whether to enable multiple outbound interfaces, allowing the vRouter to access the external network from multiple outbound interfaces. The default value is False.

To enable multiple outbound interfaces, configure the following settings:

·     Set the value of this parameter to True.

·     Set the value of the network_force_flat parameter to False.

·     Access the /etc/neutron/plugins/ml2/ml2_conf.ini file on the controller node and specify the controller's gateway name for the network_vlan_ranges parameter.

Only OpenStack Pike supports this parameter.

For this parameter to take effect, add h3c_security_core to the value of the service_plugins parameter.

tenant_gateway_name

Name of the gateway to which the tenant is bound. The default value is None.

When the value of the tenant_gw_selection_strategy parameter is match_gateway_name, you must specify the name of an existing gateway on the controller side.

Only the Pike, Queens, and Rocky plug-ins support this parameter.

tenant_gw_selection_strategy

Gateway selection strategy for the tenant.

·     match_first—Select the first gateway.

·     match_gateway_name—Take effect together with the tenant_gateway_name parameter.

Only OpenStack Pike, Queens, and Rocky support this parameter.

enable_router_nat_without_firewall

Whether to enable NAT when no firewall is configured for the tenant.

·     True—Enable NAT when no firewall is configured. This setting automatically creates default firewall resources to implement NAT if the vRouter has been bound to an external network.

·     False—Not enable NAT when no firewall is configured.

The default value is False.

Only OpenStack Pike supports this parameter.

directly_external

Whether traffic to the external network is directly forwarded by the gateway. The default value is OFF.

The available values are as follows:

·     ANY—Traffic to the external network is directly forwarded by the gateway to the external network.

·     OFF—Traffic to the external network is forwarded by the gateway to the firewall and then to the external network.

·     SUFFIX—Traffic that matches the vRouter name suffix is forwarded by the gateway to the firewall and then to the external network.

directly_external_suffix

vRouter name suffix (DMZ for example). This parameter is available only when you set the value of the directly_external parameter to SUFFIX.

When you change the vRouter name, make sure you understand the impact on this parameter.

Only OpenStack Pike, Queens, and Rocky support this parameter.

lb_resource_mode

Resource pool mode of LB service resources.

·     SP—All gateways share the same LB resource pool.

·     MP—Each gateway uses an LB resource pool.

The default value is SP.

enable_lb_xff

Whether to enable XFF transparent transmission for LB listeners.

·     True—Enable.

·     False—Disable.

When the value is True and the listener protocol is HTTP or TERMINATED_HTTPS, a newly created listener is enabled with XFF transparent transmission by default, and the client's IP address is transparently transmitted to the server in the X-Forwarded-For field of the HTTP header.

Only OpenStack Pike supports this parameter.

enable_lb_certchain

Whether to enable the SSL server end to send the complete certificate chain for SSL negotiation.

·     true—Enable.

·     false—Disable.

The default value is true.

enable_firewall_object_group

Whether to enable the firewall object group feature of the plug-in. The default value is False. If you set the value to True, a firewall object group can be created on the cloud platform through the plug-in.

Only OpenStack Rocky supports this parameter.

To use this feature, configure compatibility settings on the cloud platform. For information about how to configure the compatibility settings, contact Technical Support.

 

Upgrading the SeerEngine-DC Neutron security plug-in

To upgrade the SeerEngine-DC Neutron security plug-in, first remove the old version and then install the new version. For more information, see "Installing the security plug-in on the controller node."

 

CAUTION:

Service might be interrupted during the upgrade. Before performing an upgrade, be sure you fully understand its impact on services.

 

IMPORTANT:

The default parameter settings vary depending on the version of SeerEngine-DC Neutron security plug-in. Modify the default parameter settings for SeerEngine-DC Neutron security plug-in to ensure that the plug-ins have the same parameter settings before and after the upgrade.


Upgrading non-converged plug-ins to converged plug-ins

1.     Upgrade the controller to a version that supports converged plug-ins.

2.     Remove non-converged plug-ins:

a.     Access the neutron-server container:

[root@neutron_server ~]# docker exec -itu root neutron_server bash

b.     Remove the plug-ins on the controller node:

-     Versions earlier than E3702

[root@localhost ~]# h3c-vcfplugin controller uninstall

-     E3702 and its later versions

[root@localhost ~]# h3c-sdnplugin controller uninstall

c.     Remove the software packages from all nodes:

CentOS 8:

[root@localhost ~]# pip3 uninstall seerengine-dc-plugin

Other CentOS operating systems:

[root@localhost ~]# pip uninstall seerengine-dc-plugin

 

IMPORTANT:

Commands for removing plug-ins vary depending on the software version.

 

3.     Install converged plug-ins:

a.     Install converged plug-ins and security plug-ins as shown in "Installing OpenStack plug-ins."

b.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@localhost ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

c.     Press I to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.

[SDNCONTROLLER]

sdnc_rpc_url = ws://127.0.0.1:30000

sdnc_rpc_ping_interval = 60

websocket_fragment_size = 102400

cloud_region_name = default

ml2_conf.ini

 

sdnc_rpc_url

Set the value to the IP address and WebSocket port number of Unified Platform when metadata is enabled or DHCP failover is supported.

Configure this parameter based on the URL of Unified Platform. For example, if the URL of Unified Platform is http://127.0.0.1:30000, set this parameter to ws://127.0.0.1:30000.

cloud_region_name

If one cloud platform is connected to the controller, you can modify this parameter when the cloud platform is connected to the controller for the first time after the upgrade and no tenant resources have been newly created on the controller. Make sure the value of this parameter is the same as the name configured on the controller, and configure the cloud platform as the default platform.

If multiple cloud platforms are connected to the controller, the rules for the single cloud platform interoperability scenario apply to the first cloud platform. For each of the other cloud platforms, you must set this parameter to the value used by that cloud platform and make sure it is the same as the name configured on the controller.

This parameter cannot be modified after the cloud platforms are connected to the controller. You must specify different vni_ranges values under [ml2_type_vxlan] for different cloud platforms.
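For example, a second cloud platform connected to the same controller might use a distinct name and a non-overlapping VNI range in its ml2_conf.ini (the values below are illustrative assumptions):

[SDNCONTROLLER]

cloud_region_name = region2

[ml2_type_vxlan]

vni_ranges = 501:1000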

 

4.     Configure parameters on the controller:

Some parameters in the ml2_conf_h3c.ini configuration file for non-converged plug-ins have been moved to the Web interface on the controller. After installing the converged plug-ins, you must set these parameters on the controller to the values used before the upgrade.

a.     Save the ml2_conf_h3c.ini.bak or ml2_conf_h3c.ini.h3c_bak file in the /etc/neutron/plugins/ml2 directory of the controller node.

b.     Log in to the controller, click Automation on the top navigation bar, and then select Virtual Networking > OpenStack from the left navigation pane. Click Add OpenStack-Based Cloud Platform, and then click the Parameter Settings tab to edit the parameters based on the information in the ml2_conf_h3c.ini.bak or ml2_conf_h3c.ini.h3c_bak file.

Table 4 Mapping between parameters on the controller and in the configuration file

Parameters in the ml2_conf_h3c.ini file before upgrade

Parameters on the controller after upgrade

cloud_region_name

Name

hybrid_vnic

VLAN to VXLAN Conversion

enable_metadata: True

enable_dhcp_hierarchical_port_binding: True

Network Node Access Policy: VLAN

enable_metadata: True

enable_dhcp_hierarchical_port_binding: False

Network Node Access Policy: VXLAN

enable_metadata: False

enable_dhcp_hierarchical_port_binding: False

Network Node Access Policy: No Access

ip_mac_binding

IP-MAC Anti-Spoofing

directly_external: OFF

Firewall: On for All

directly_external: ANY

Firewall: Off for All

directly_external: SUFFIX

directly_external_suffix: name, where name represents the vRouter name suffix.

Firewall: Off for vRouters Matching Suffix

tenant_gw_selection_strategy: match_gateway_name

tenant_gateway_name: name, where name represents the name of the border gateway.

External Connectivity Settings: Single-Segment

Tenant Border Gateway Policy: Match Border Gateway Name

enable_multi_gateways: True

External Connectivity Settings: Single-Segment

Tenant Border Gateway Policy: Match External Network Name of vRouter

enable_multi_segments: True

External Connectivity Settings: Multi-Segment

Tenant Border Gateway Policy: Match Physical Network Name of vRouter External Network

network_force_flat

Forcibly Convert External Network to Flat Network

enable_network_l3vni: False

Automatic Allocation of L3VNIs for External Networks: Off

dhcp_lease_time

DHCP Lease Duration

generate_vrf_based_on_router_name: False

VRF Name Generation Method on vRouter: Auto

generate_vrf_based_on_router_name: True

VRF Name Generation Method on vRouter: Use vRouter Name

vds_name

Default VDS name.

empty_rule_action

Empty Rule Action of Security Policy

enable_network_l3vni

Automatic Allocation of L3VNIs for External Networks

deploy_network_resource_gateway

Preconfigure Border Gateway for External Network

 

IMPORTANT:

Make sure a VXLAN pool exists on the controller after the upgrade. If no VXLAN pools exist or the VXLAN pool resources are insufficient, add a new VXLAN pool and make sure the VXLAN pool range does not contain the segment IDs of the existing vRouters.

 

5.     Restart the neutron-server service.

[root@localhost ~]# docker restart neutron_server
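After the restart, you can verify that the container came back up, using the same check shown earlier in this guide:

[root@localhost ~]# docker ps --filter "name=neutron_server"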


(Optional.) Configuring the metadata service for network nodes

OpenStack supports obtaining metadata from network nodes for VMs through DHCP or L3 gateway. H3C supports only the DHCP method. To configure the metadata service for network nodes:

1.     Download the OpenStack installation guide from the OpenStack official website and follow the installation guide to configure the metadata service for the network nodes.

2.     Configure the network nodes to provide metadata service through DHCP.

a.     Use the vi editor to open configuration file dhcp_agent.ini.

[root@network ~]# vi /etc/kolla/neutron-dhcp-agent/dhcp_agent.ini

b.     Press I to switch to insert mode, and modify configuration file dhcp_agent.ini as follows:

[DEFAULT]

force_metadata = True

Set the value to True for the force_metadata parameter to force the network nodes to provide metadata service through DHCP.

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the dhcp_agent.ini configuration file.

3.     Restart the dhcp-agent container.

[root@network ~]# docker restart neutron_dhcp_agent
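You can then verify that the DHCP agent container is running again (the container name follows the Kolla naming used above):

[root@network ~]# docker ps --filter "name=neutron_dhcp_agent"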


FAQ

The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?

Configure HTTP proxy by performing the following steps:

1.     Make sure the server or the virtual machine can access the HTTP proxy server.

2.     At the CLI of the CentOS system, use the vi editor to open the yum.conf configuration file. If the yum.conf configuration file does not exist, this step creates the file.

[root@localhost ~]# vi /etc/yum.conf

3.     Press I to switch to insert mode, and provide HTTP proxy information as follows:

¡     If the server does not require authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport

¡     If the server requires authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport
proxy_username=username
proxy_password=password

Table 5 describes the arguments in HTTP proxy information.

Table 5 Arguments in HTTP proxy information

Field

Description

username

Username for logging in to the proxy server, for example, sdn.

password

Password for logging in to the proxy server, for example, 123456.

yourproxyaddress

IP address of the proxy server, for example, 172.25.1.1.

proxyport

Port number of the proxy server, for example, 8080.

 

For example:

proxy = http://172.25.1.1:8080

proxy_username = sdn

proxy_password = 123456

4.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the yum.conf file.
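To verify that yum can now reach the repositories through the proxy, rebuild the cache with the commands used earlier in this guide:

[root@localhost ~]# yum clean all

[root@localhost ~]# yum makecache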

After the plug-ins are installed successfully, what should I do if the controller fails to interconnect with the cloud platform?

Follow these steps to resolve the interconnection failure with the cloud platform:

1.     Make sure you have strictly followed the procedure in this document to install and configure the plug-ins.

2.     Contact the cloud platform vendor to determine whether a configuration issue exists on the cloud platform side.

3.     If the issue persists, contact after-sales engineers.

Live migration of a VM to a specified destination host failed because of a service exception on the destination host. What should I do?

To resolve the issue:

1.     View the VM state. If the live migration operation has been rolled back, the VM is in normal state and services are not affected. You can perform live migration again after the destination host recovers.

2.     Compare resource information to identify whether residual configuration exists on the destination host. If residual configuration exists, determine whether services will be affected.

¡     If services will not be affected, retain the residual configuration.

¡     If services will be affected, contact the technical support to delete the residual configuration.

The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

Use the following procedure to resolve the issue. An enp61s0f3 Ethernet network adapter is used as an example.

1.     View detailed information about the Ethernet network adapter and record the value for the bus-info field.

sdn@ubuntu:~$ ethtool -i enp61s0f3

driver: i40e

version: 2.8.20-k

firmware-version: 3.33 0x80000f0c 1.1767.0

expansion-rom-version:

bus-info: 0000:3d:00.3

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

2.     Use one of the following solutions:

¡     Solution 1. If this solution fails, use solution 2.

# Execute the following command:

sdn@ubuntu:~$ sudo ethtool --set-priv-flags enp61s0f3  disable-fw-lldp on

# Identify whether the value for the disable-fw-lldp field is on.

sdn@ubuntu:~$ ethtool --show-priv-flags enp61s0f3  | grep lldp

disable-fw-lldp       : on

If the value is on, the network adapter then can receive LLDP messages. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.

# Open the self-defined startup program file.

sdn@ubuntu:~$ sudo vi /etc/rc.local

# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

ethtool --set-priv-flags enp61s0f3  disable-fw-lldp on

Make sure this command line is configured before the exit 0 line.

¡     Solution 2.

# Execute the echo "lldp stop" > /sys/kernel/debug/i40e/bus-info/command command. Enter the recorded bus info value for the network adapter, and add a backslash (\) before each ":".

sdn@ubuntu:~$ sudo -i

sdn@ubuntu:~$ echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

The network adapter can receive LLDP messages after this command is executed. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.

# Open the self-defined startup program file.

sdn@ubuntu:~$ sudo vi /etc/rc.local

# Press I to switch to insert mode, and add this command to the file. Then Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

Make sure this command line is configured before the exit 0 line.
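Optionally, you can confirm that LLDP frames are now reaching the adapter by capturing one LLDP packet (EtherType 0x88cc). This assumes tcpdump is installed, and the command might wait for up to an LLDP advertisement interval (typically about 30 seconds) before a frame arrives:

sdn@ubuntu:~$ sudo tcpdump -i enp61s0f3 -c 1 ether proto 0x88cc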

The trunk function is unavailable after I upgrade a non-converged OpenStack Mitaka plug-in to a converged one and configure h3c_trunk. What should I do?

To resolve the issue:

1.     Access the database.

[root@controller ~]# mysql -u<username> -p

Enter password:

MariaDB [(none)]> USE neutron;

2.     Disable foreign key constraints.

MariaDB [neutron]> SET FOREIGN_KEY_CHECKS=0;

Query OK, 0 rows affected (0.00 sec)

3.     Move the data in h3c_trunk to h3c_trunks.

MariaDB [neutron]> INSERT INTO h3c_trunks (SELECT * FROM h3c_trunk);

Query OK, 1 row affected (0.01 sec)

Records: 1 Duplicates: 0 Warnings: 0

4.     Verify that the data in h3c_trunk has been added to h3c_trunks.

MariaDB [neutron]> SELECT * FROM h3c_trunks;

5.     Delete the foreign key in h3c_subports.

MariaDB [neutron]> ALTER TABLE h3c_subports DROP FOREIGN KEY h3c_subports_ibfk_1;

6.     Add a new foreign key in h3c_subports.

MariaDB [neutron]> ALTER TABLE `h3c_subports` ADD CONSTRAINT `h3c_subports_ibfk_1`  FOREIGN KEY  (`trunk_id`) REFERENCES `h3c_trunks`(`id`) ON DELETE CASCADE;

Query OK, 0 rows affected (0.04 sec)

Records: 0 Duplicates: 0 Warnings: 0

7.     Enable foreign key constraints.

MariaDB [neutron]> SET FOREIGN_KEY_CHECKS=1;

Query OK, 0 rows affected (0.00 sec)
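Optionally, confirm that the new foreign key on h3c_subports now references h3c_trunks:

MariaDB [neutron]> SHOW CREATE TABLE h3c_subports;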
