Contents
Preprovisioning basic controller settings
Deploying OpenStack plug-ins by using Kolla Ansible
Setting up the basic environment
Installing and upgrading the controller Neutron plug-ins
Installing the controller Neutron plug-ins
Upgrading the controller Neutron plug-ins
Installing and upgrading the controller security plug-in
Installing the controller security plug-in
Upgrading the controller security plug-in
Deploying OpenStack plug-ins for Kubernetes
Installing and upgrading the controller Neutron plug-ins
Installing the controller Neutron plug-ins
Upgrading the controller Neutron plug-in
Installing and upgrading the controller security plug-in
Installing the controller security plug-in
Upgrading the controller Neutron security plug-in
Upgrading non-converged plug-ins to converged plug-ins
(Optional.) Configuring the metadata service for network nodes
Comparing and synchronizing resource information between the controller and cloud platform
The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?
VM instances fail to be created in a normal environment. What should I do?
In what scenarios do I need to install the Nova patch?
In what scenarios do I need to install the openvswitch-agent patch?
Overview
This document describes how to install OpenStack plug-ins for interoperability with OpenStack cloud platforms. With the plug-ins installed, the SeerEngine-DC controller (referred to as the controller below) can process requests from the OpenStack cloud platforms.
OpenStack plug-ins include controller Neutron plug-ins, security plug-ins, Nova patch, openvswitch-agent patch, and DHCP failover components.
Controller Neutron plug-ins
Neutron is an OpenStack service that manages all virtual networking infrastructure (VNI) in an OpenStack environment. It provides virtual network services to the devices managed by OpenStack compute services.
The controller Neutron plug-ins are developed for the controller based on the OpenStack framework.
The controller Neutron plug-ins allow deployment of the network configuration obtained from OpenStack through REST APIs on the controller, including tenants' networks, subnets, routers, and ports.
CAUTION: To avoid service interruptions, do not modify the settings issued by the cloud platform on the controller, such as the virtual link layer network, vRouter, and vSubnet settings, after the plug-ins connect to the OpenStack cloud platform.
Controller security plug-ins
The controller security plug-ins are Neutron and Octavia plug-ins developed for the controller based on the OpenStack framework. They obtain security configuration, including the tenants' firewall (FW), load balancing (LB), and VPN settings, from OpenStack through REST APIs and synchronize the configuration to the controller.
Preparing for installation
Hardware requirements
The following table shows the hardware requirements for installing the controller OpenStack plug-ins on a server or virtual machine.
| CPU | Memory size | Disk space |
|---|---|---|
| Single-core and multicore CPUs | 2 GB or more | 5 GB or more |
Software requirements
The following table shows the software requirements for installing the controller OpenStack plug-ins.
| Item | Supported versions |
|---|---|
| OpenStack deployed by using Kolla-Ansible | OpenStack Ocata, Pike, Queens, Rocky, Stein, Train, Ussuri, Victoria, Wallaby, Xena, and Yoga |
IMPORTANT: Before you install the OpenStack plug-ins, make sure the following requirements are met:
· Your system has a reliable Internet connection.
· OpenStack has been deployed correctly. Verify that the /etc/hosts file on all nodes contains the host name-IP address mappings, and that the OpenStack Neutron extension services (Neutron-FWaaS, Neutron-VPNaaS, or Neutron-LBaaS) have been deployed. For the deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.
NOTE:
· The controller security plug-in cannot be installed on OpenStack Victoria, Wallaby, Xena, or Yoga.
· For installation of the converged version of the SeerEngine_DC plug-ins (SeerEngine_DC_PLUGIN-version-py2.7.egg), see H3C SeerEngine-DC OpenStack Converged Plug-Ins Installation Guide.
Deploying OpenStack
Before installing the plug-ins, deploy OpenStack by using Kolla Ansible first. For the OpenStack deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.
Preprovisioning basic controller settings
This procedure preprovisions only basic controller settings. For the configuration in a specific scenario, see the controller configuration guide for that scenario.
Table 3 Preprovisioning basic controller settings
| Item | Configuration directory |
|---|---|
| Fabrics | Automation > Data Center Networks > Fabrics > Fabrics |
| VDS | Automation > Data Center Networks > Common Network Settings > Virtual Distributed Switch |
| IP address pool | Automation > Data Center Networks > Resource Pools > IP Address Pools |
| VNID pools (VLANs, VXLANs, and VLAN-VXLAN mappings) | Automation > Data Center Networks > Resource Pools > VNID Pools > VLANs<br>Automation > Data Center Networks > Resource Pools > VNID Pools > VXLANs<br>Automation > Data Center Networks > Resource Pools > VNID Pools > VLAN-VXLAN Mappings |
| Add access devices and border devices to a fabric | Automation > Data Center Networks > Fabrics > Fabrics |
| L4-L7 device, physical resource pool, and template | Automation > Data Center Networks > Resource Pools > Devices > Physical Devices<br>Automation > Data Center Networks > Resource Pools > Devices > L4-L7 Physical Resource Pools |
| Border gateway | Automation > Data Center Networks > Common Network Settings > Gateways |
| Domains and hosts | Automation > Data Center Networks > Fabrics > Domains<br>Automation > Data Center Networks > Fabrics > Domains > Hosts |
| Interoperability with OpenStack | Automation > Virtual Networking > OpenStack<br>NOTE: You must specify the cloud platform name. The name is case sensitive and must be the same as the value of the cloud_region_name parameter in the ml2_conf.ini file of the Neutron plug-in. Make sure the VNI range is the same as the VXLAN VNI range on the cloud platform. |
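The case-sensitive match between the controller-side cloud platform name and cloud_region_name can be checked from the shell. The following is an illustrative sketch, not product tooling; the helper name and file path are assumptions:

```shell
# Sketch: extract cloud_region_name from an ml2_conf.ini-style file so it can
# be compared, case-sensitively, with the cloud platform name on the controller.
get_cloud_region_name() {  # $1 = path to ml2_conf.ini
  awk -F' *= *' '$1 == "cloud_region_name" {print $2; exit}' "$1"
}
```

Compare the output against the name configured under Automation > Virtual Networking > OpenStack, for example: `get_cloud_region_name /etc/kolla/neutron-server/ml2_conf.ini`.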
Deploying OpenStack plug-ins by using Kolla Ansible
The controller plug-ins can be installed on different OpenStack versions. The installation package varies by OpenStack version. However, you can use the same procedure to install the controller plug-ins on different OpenStack versions. This document uses OpenStack Ocata as an example.
The controller plug-ins are installed on the OpenStack control node.
Setting up the basic environment
Before installing controller Neutron plug-ins on the OpenStack control node, set up the basic environment on the node.
To set up the basic environment:
1. Update the software source list, and then download and install the Python tools.
¡ CentOS 8 operating system:
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache
[root@localhost ~]# yum install -y python3-pip python3-setuptools
¡ Other CentOS operating systems:
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache
[root@localhost ~]# yum install -y python-pip python-setuptools
2. Install runlike.
¡ CentOS 8 operating system:
[root@localhost ~]# pip3 install runlike
¡ Other CentOS operating systems:
[root@localhost ~]# pip install runlike
3. Log in to the controller node and edit the /etc/hosts file. Add the following information to the file.
¡ IP and name mappings of all hosts in this OpenStack environment. To obtain this information, access the controller and select Automation > Data Center Networks > Fabrics > Domains > Hosts.
¡ IP and name mappings of all leaf, spine, and border devices in this scenario. To obtain this information, access the controller and select Automation > Data Center Networks > Resource Pools > Devices > Physical Devices.
[root@localhost ~]# vim /etc/hosts
127.0.0.1 localhost
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
99.0.83.75 controller
99.0.83.76 compute1
99.0.83.77 compute2
99.0.83.78 nfs-server
99.0.83.79 compute3
99.0.83.74 compute4
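A quick way to confirm that all required mappings made it into the file is a sketch like the following (the check_hosts helper and the host list are illustrative, not part of the product tooling):

```shell
# Sketch: verify that each required host name appears in a hosts file.
# Prints any missing names and returns nonzero if at least one is absent.
check_hosts() {  # $1 = hosts file, remaining args = required names
  local file="$1" missing=0 name
  shift
  for name in "$@"; do
    if ! grep -qw "$name" "$file"; then
      echo "missing: $name"
      missing=1
    fi
  done
  return "$missing"
}
```

For example: `check_hosts /etc/hosts controller compute1 compute2 compute3 compute4 nfs-server`.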
Installing and upgrading the controller Neutron plug-ins
Installing the controller Neutron plug-ins
Obtaining the controller Neutron plug-in installation package
The controller Neutron plug-ins are included in the controller OpenStack package. Obtain the controller OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.
Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.
Installing the controller Neutron plug-ins on the OpenStack control node
1. Generate the startup script for the neutron-server container.
[root@localhost ~]# runlike neutron_server>docker-neutron-server.sh
2. Modify the neutron.conf configuration file.
a. Use the vi editor to open the neutron.conf configuration file.
[root@localhost ~]# vi /etc/kolla/neutron-server/neutron.conf
b. Configure the neutron.conf configuration file based on the operating system running in the Kolla environment.
- If a CentOS operating system runs in the Kolla environment, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS and Kylin to configure the neutron.conf configuration file.
- If a Ubuntu operating system runs in the Kolla environment, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu to configure the neutron.conf configuration file.
NOTE: If you need to configure api_extensions_path, see the plug-in installation guide for the operating system of the Kolla environment after installing the Neutron plug-in.
3. Modify the ml2_conf.ini configuration file.
a. Use the vi editor to open the ml2_conf.ini configuration file.
[root@localhost ~]# vi /etc/kolla/neutron-server/ml2_conf.ini
b. Press I to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. For information about the parameters, see "ml2_conf.ini."
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers = ml2_h3c
extension_drivers = ml2_extension_h3c,qos,port_security
[ml2_type_vlan]
network_vlan_ranges = physicnet1:1000:2999
[ml2_type_vxlan]
vni_ranges = 1:500
c. Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.
4. Add plug-ins configuration items to the ml2_conf.ini configuration file.
a. Use the vi editor to open the ml2_conf.ini configuration file.
[root@localhost ~]# vi /etc/kolla/neutron-server/ml2_conf.ini
b. Configure the ml2_conf.ini file based on the operating system running in the Kolla environment.
- If a CentOS operating system runs, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS and Kylin to configure the [SDNCONTROLLER] configuration block for the ml2_conf.ini file.
- If a Ubuntu operating system runs, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu to configure the [SDNCONTROLLER] configuration block for the ml2_conf.ini file.
5. Copy the plug-ins installation package to the neutron_server container. Select a package to install based on your Python version and operating system.
¡ .egg file (in Python2.7 environment)
[root@localhost ~]# docker cp SeerEngine_DC_PLUGIN-E3608-py2.7.egg neutron_server:/
¡ .whl file (in Python3 environment)
[root@localhost ~]# docker cp SeerEngine_DC_PLUGIN-E6401-py3-none-any.whl neutron_server:/
6. Access the file folder on the neutron_server container where the plug-ins installation package resides and install the plug-in package and websocket-client.
[root@localhost ~]# docker exec -it -u root neutron_server bash
¡ To install the plug-in package (select a package to install based on your Python version and operating system):
- .egg file (in Python2.7 environment):
(neutron-server) [root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3608-py2.7.egg
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install
- .whl file (in Python3 environment)
(neutron-server) [root@localhost ~]# pip3 install SeerEngine_DC_PLUGIN-E6401-py3-none-any.whl
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install
¡ To install websocket-client:
- CentOS 8:
(neutron-server) [root@localhost ~]# yum install -y python3-websocket-client
- Other CentOS versions:
(neutron-server) [root@localhost ~]# yum install -y python-websocket-client
After installing websocket-client, verify that it is installed in the same directory as neutron. websocket-client is usable only when the two are in the same path. If they are not, see the instructions in "I find that python3-websocket-client is not in the same path as neutron when I am to install the Neutron plug-in on open-source OpenStack Wallaby, Xena, and Yoga with Kolla. What should I do?"
IMPORTANT:
· For a Python version earlier than 3.8, make sure the python-websocket-client version is 0.56. As a best practice, use python-websocket-client version 0.58 in a cloud environment running Python 3.8 or later, and use the offline .whl package in offline environments.
· Before executing the h3c-sdnplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.
NOTE: An error might be reported when the h3c-sdnplugin controller install command is executed. You can safely ignore it.
7. Create neutron-server container images.
[root@localhost ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)
[root@localhost ~]# docker ps | grep $neutron_server_image
16d60524b8b3 kolla/centos-source-neutron-server:rocky "dumb-init --single-? 16 months ago Up 2 weeks neutron_server
[root@localhost ~]# docker commit 16d60524b8b3 kolla/neutron-server-h3c (use the UUID obtained in the preceding command)
kolla/neutron-server-h3c
[root@localhost ~]# docker rm -f neutron_server
[root@localhost ~]# docker tag $neutron_server_image kolla/neutron-server-origin
[root@localhost ~]# docker rmi $neutron_server_image
[root@localhost ~]# docker tag kolla/neutron-server-h3c $neutron_server_image
[root@localhost ~]# docker rmi kolla/neutron-server-h3c
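For reference, the commit-and-retag sequence above can be collected into a single function. This is a sketch under the container names used in this guide, not part of the plug-in package; the DOCKER variable is an added assumption so the sequence can be pointed at a wrapper for a dry run:

```shell
# Sketch: swap the neutron-server image for one containing the plug-ins,
# keeping the original image under the kolla/neutron-server-origin tag.
DOCKER="${DOCKER:-docker}"

swap_neutron_image() {
  local image id
  image=$($DOCKER ps --format '{{.Image}}' --filter name=neutron_server)
  id=$($DOCKER ps --format '{{.ID}}' --filter name=neutron_server)
  $DOCKER commit "$id" kolla/neutron-server-h3c     # freeze container with plug-ins
  $DOCKER rm -f neutron_server                      # remove the running container
  $DOCKER tag "$image" kolla/neutron-server-origin  # preserve the original image
  $DOCKER rmi "$image"
  $DOCKER tag kolla/neutron-server-h3c "$image"     # new image takes the old name
  $DOCKER rmi kolla/neutron-server-h3c
}
```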
8. Start the neutron-server container.
[root@localhost ~]# source docker-neutron-server.sh
9. View the startup status of the containers. If their status is Up, they have been started up correctly.
[root@localhost ~]# docker ps --filter "name=neutron_server"
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
289e4e132a9b kolla/centos-source-neutron-server:ocata "dumb-init --single-? 1 minutes ago Up 1 minutes neutron_server
Parameters and fields
This section describes parameters in configuration files and fields included in parameters.
ml2_conf.ini
| Parameter | Required value | Description |
|---|---|---|
| type_drivers | vxlan,vlan | Driver type. vxlan must be specified as the first driver type. |
| tenant_network_types | vxlan,vlan | Type of the networks to which the tenants belong. For the intranet, only vxlan is available. For the extranet, only vlan is available.<br>· In the host overlay scenario and the network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type.<br>· In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type.<br>· In the hybrid scenario (host overlay, network overlay with hierarchical port binding, and network overlay without hierarchical port binding), vxlan must be specified as the first network type. In this scenario, you can create a VLAN only from the background CLI, the REST API, or the Web administration interface. |
| mechanism_drivers | ml2_h3c | Name of the ml2 driver. To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch,ml2_h3c,openvswitch. To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch. |
| extension_drivers | ml2_extension_h3c,qos | Names of the ml2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify the qos value. If port security is not enabled on OpenStack, you do not need to specify the port_security value (OpenStack Ocata 2017.1 does not support the port_security value). |
| network_vlan_ranges | N/A | Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999. |
| vni_ranges | N/A | Value range for the VXLAN ID of the intranet, for example, 1:500. |
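Before editing ml2_conf.ini, the two range formats described above can be validated with a small sketch like this (the helper names are illustrative, not product tooling):

```shell
# Sketch: validate the range formats expected by ml2_conf.ini.
valid_vlan_range() {  # physnet:min:max, e.g. physicnet1:1000:2999
  printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9_]+:[0-9]+:[0-9]+$'
}
valid_vni_range() {   # min:max, e.g. 1:500
  printf '%s\n' "$1" | grep -Eq '^[0-9]+:[0-9]+$'
}
```

For example, `valid_vlan_range physicnet1:1000:2999` succeeds, while `valid_vlan_range 1000-2999` fails.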
Upgrading the controller Neutron plug-ins
CAUTION:
· Services might be interrupted during the controller Neutron plug-ins upgrade procedure. Make sure you understand the impact of the upgrade before performing it on a live network.
· The plug-in settings will not be restored automatically after an upgrade in the Kolla environment. Before an upgrade, back up the settings in the /etc/kolla/neutron-server/neutron.conf and /etc/kolla/neutron-server/ml2_conf.ini configuration files. After the upgrade, modify the parameter settings according to the backup files to ensure configuration consistency before and after the upgrade.
Upgrade with the neutron_server container removed
1. Remove the container installed with the old version of the plug-ins and the container image.
[root@controller ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)
a. If no docker-neutron-server.sh script file exists, execute the following command. If such a file exists, skip this step.
[root@controller ~]# runlike neutron_server>docker-neutron-server.sh
b. Remove the container installed with the old version of the plug-ins and the container image.
[root@controller ~]# docker rm -f neutron_server
[root@controller ~]# docker rmi $neutron_server_image
c. Restore the default container and image in the Kolla environment.
[root@localhost ~]# docker tag kolla/neutron-server-origin $neutron_server_image
[root@localhost ~]# docker rmi kolla/neutron-server-origin
[root@controller ~]# source docker-neutron-server.sh
IMPORTANT: Before restarting the neutron_server container, you must restore the configurations in the neutron.conf and ml2_conf.ini files and remove the plug-ins-related configuration.
2. Install the new version of plug-ins. For the installation procedure, see "Installing the controller Neutron plug-ins."
Upgrade with the neutron_server container retained
To upgrade the plug-ins with the neutron_server container retained, you must remove the old version of the plug-ins and then install the new version of the plug-ins on the neutron_server container.
1. Access the neutron_server container and remove the old version of the plug-ins.
[root@localhost ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller uninstall
Remove service
Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.
Restore config files
Uninstallation complete.
(neutron-server) [root@localhost ~]# pip uninstall seerengine-dc-plugin
Uninstalling SeerEngine-DC-PLUGIN-E3608:
/usr/bin/h3c-agent
/usr/bin/h3c-sdnplugin
……
2. Install the new version of the plug-ins. Select a package to install based on your Python version and operating system.
¡ .egg file
[root@localhost ~]# docker cp SeerEngine-DC-PLUGIN-E3608-py2.7.egg neutron_server:/
[root@localhost ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@localhost ~]# easy_install SeerEngine-DC-PLUGIN-E3608-py2.7.egg
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install
¡ .whl file
[root@localhost ~]# docker cp SeerEngine_DC_PLUGIN-E6401-py3-none-any.whl neutron_server:/
[root@localhost ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@localhost ~]# pip3 install SeerEngine_DC_PLUGIN-E6401-py3-none-any.whl
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install
IMPORTANT: Before executing the h3c-sdnplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.
NOTE: An error might be reported when the h3c-sdnplugin controller install command is executed. You can safely ignore it.
3. After installation, follow the latest plug-in installation guide to check configuration file ml2_conf.ini and add new configuration items introduced in the latest version.
a. Use the vi editor to open configuration file ml2_conf.ini.
[root@localhost ~]# vi /etc/kolla/neutron-server/ml2_conf.ini
b. According to the operating system running in the Kolla environment, modify configuration file ml2_conf.ini.
- If a CentOS operating system runs in the Kolla environment, edit configuration file ml2_conf.ini according to the [SEC_SDNCONTROLLER] configuration block of configuration file ml2_conf.ini in H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS and Kylin.
- If a Ubuntu operating system runs in the Kolla environment, edit configuration file ml2_conf.ini according to the [SEC_SDNCONTROLLER] configuration block of configuration file ml2_conf.ini in H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu.
4. Exit and then restart the neutron_server container.
(neutron-server)[root@controller01 ~]# exit
[root@controller01 ~]# docker restart neutron_server
5. Create the latest neutron-server container image. For more information, see "Create neutron-server container images."
To upgrade the Neutron security plug-in while retaining the neutron_server container, skip this step and create the latest neutron-server container image after the security plug-in upgrade is complete. For the security plug-in upgrade procedure, see "Upgrade with the neutron_server container retained."
Installing and upgrading the controller security plug-in
The controller security plug-in can be installed on multiple OpenStack versions.
The controller security plug-in is installed on the OpenStack control node. Before installation, set up the basic environment on the node.
Installing the controller security plug-in
Obtaining the installation package
Obtain and copy the security plug-in installation package of the required version to the target installation directory on the server or virtual machine.
Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP.
IMPORTANT: To avoid damaging the installation package, use binary mode if you transfer the package through FTP or TFTP.
Installing the security plug-in on the OpenStack controller node (neutron-server container)
1. Generate the startup script for the neutron-server container. If you have already completed "Installing the controller Neutron plug-ins," you can skip this step.
[root@localhost ~]# runlike neutron_server>docker-neutron-server.sh
2. Edit the neutron.conf configuration file.
a. Use the vi editor to open the neutron.conf configuration file.
[root@localhost ~]# sudo vi /etc/kolla/neutron-server/neutron.conf
b. Press I to switch to insert mode, and then edit the configuration file. For more information about the parameters, see "Parameters and fields."
¡ For OpenStack Pike and Queens, the firewall, fwaas_h3c, and firewall_h3c firewall services are supported. Taking firewall as an example, edit the neutron.conf configuration file as follows:
[DEFAULT]
service_plugins = firewall,h3c_security_core,lbaasv2,vpnaas
[service_providers]
service_provider=FIREWALL:H3C:networking_sec_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default
service_provider=LOADBALANCERV2:H3C:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default
service_provider=VPN:H3C:networking_sec_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default
IMPORTANT: For OpenStack Pike, when the load balancer supports multiple resource pools of the Context type, you must preprovision a resource pool named dmz or core on the controller, and then change the value of the service_provider parameter to LOADBALANCERV2:DMZ:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDMZDriver:default or LOADBALANCERV2:CORE:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginCOREDriver:default accordingly.
¡ For OpenStack Rocky, the firewall, fwaas_h3c, firewall_h3c, and firewall_v2 firewall services are supported.
Taking firewall as an example, edit the neutron.conf configuration file as follows:
[DEFAULT]
service_plugins = firewall,h3c_security_core,lbaasv2,vpnaas
[service_providers]
service_provider=FIREWALL:H3C:networking_sec_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default
service_provider=LOADBALANCERV2:H3C:networking_sec_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default
service_provider=VPN:H3C:networking_sec_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default
Taking firewall_v2 as an example, edit the neutron.conf configuration file as follows:
[DEFAULT]
service_plugins = h3c_l3_router,firewall_v2,segments,h3c_security_core
[service_providers]
service_provider=FIREWALL_V2:H3C:networking_sec_h3c.fw2.h3c_fwpluginv2_driver.H3CFwaasV2Driver:default
¡ For OpenStack Kilo 2015.1, Liberty, and Mitaka, configure the neutron.conf configuration file as follows when Load Balancer V1 is configured in OpenStack (to configure the VPN service for Newton and Ocata, edit service_provider for the VPN as follows):
[DEFAULT]
service_plugins = firewall,lbaas,vpnaas
[service_providers]
service_provider=FIREWALL:H3C:networking_sec_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default
service_provider=LOADBALANCER:H3C:networking_sec_h3c.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default
service_provider=VPN:H3C:networking_sec_h3c.vpn.h3c_vpnplugin_ko_driver.H3CVpnPluginDriver:default
¡ For OpenStack Stein, Train, and Ussuri, the security plug-in supports only the firewall_v2 firewall service. Configure the neutron.conf configuration file as follows:
[DEFAULT]
service_plugins = firewall_v2
[service_providers]
service_provider=FIREWALL_V2:H3C:networking_sec_h3c.fw2.h3c_fwpluginv2_driver.H3CFwaasV2Driver:default
IMPORTANT:
· For OpenStack Newton and Ocata, only the Load Balancer V2 service is supported.
· For OpenStack Pike, Queens, and Rocky, if you configure the VPN service, note that the service_provider value in these versions differs from that in Kilo 2015.1, Liberty, Mitaka, Newton, and Ocata.
c. Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.
3. Configure the [SEC_SDNCONTROLLER] settings in the neutron.conf configuration file.
a. Use the vi editor to open the neutron.conf configuration file.
[root@localhost ~]# vi /etc/kolla/neutron-server/neutron.conf
b. Press I to switch to insert mode, and then edit the configuration file. For more information about the parameters, see "Parameters and fields."
[SEC_SDNCONTROLLER]
url = https://127.0.0.1:30000
username = admin
password = Pwd@12345
domain = sdn
timeout = 1800
retry = 10
white_list = False
use_neutron_credential = False
firewall_force_audit = False
sec_output_json_log = False
vendor_rpc_topic = VENDOR_PLUGIN
enable_https = False
neutron_plugin_ca_file =
neutron_plugin_cert_file =
neutron_plugin_key_file =
enable_iam_auth = False
enable_firewall_metadata = False
enable_router_nat_without_firewall = False
enable_firewall_object_group = False
cloud_region_name = default
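After editing, the presence of the required keys in the [SEC_SDNCONTROLLER] block can be sanity-checked with a sketch like the following (the awk-based helper is illustrative, not product tooling):

```shell
# Sketch: confirm that a key is present under a given INI section.
section_has_key() {  # $1 = file, $2 = section name, $3 = key
  awk -v s="[$2]" -v k="$3" '
    $0 == s        { in_s = 1; next }   # entered the target section
    /^\[/          { in_s = 0 }         # any other section header ends it
    in_s && $1 == k { found = 1 }
    END { exit !found }
  ' "$1"
}
```

For example: `section_has_key /etc/kolla/neutron-server/neutron.conf SEC_SDNCONTROLLER url`.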
4. If you have set the white_list parameter to True, perform the following tasks:
¡ Delete the username, password, and domain parameters for SEC_SDNCONTROLLER in the ml2_sec_conf_h3c.ini configuration file.
¡ Add an authentication-free user to the controller.
- Enter the IP address of the host where the Neutron server resides.
- Specify the role as Admin.
5. If you have set the use_neutron_credential parameter to True, perform the following steps:
a. Modify the neutron.conf configuration file.
# Use the vi editor to open the neutron.conf configuration file.
# Press I to switch to insert mode, and add the following configuration. For information about the parameters, see "neutron.conf."
[keystone_authtoken]
admin_user = neutron
admin_password = KEYSTONE_PASS
# Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.
b. Add an admin user to the controller.
# Configure the username as neutron.
# Specify the role as Admin.
# Enter the password of the neutron user in OpenStack.
6. Copy the installation package to the neutron_server container. Select a software package for installation based on the Python version and operating system in your actual environment.
¡ .egg file (Python 2.7 environment)
[root@localhost ~]# docker cp SeerEngine_DC_SEC_PLUGIN-E3603P01-py2.7.egg neutron_server:/
¡ .whl file (Python 3 environment)
[root@localhost ~]# docker cp SeerEngine_DC_SEC_PLUGIN-E7201-py3-none-any.whl neutron_server:/
7. Enter the neutron_server container and install the plug-in package. Select a software package for installation based on the Python version and operating system in your actual environment.
¡ .egg file (Python 2.7 environment)
[root@localhost ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@localhost ~]# easy_install SeerEngine_DC_SEC_PLUGIN-E3603P01-py2.7.egg
(neutron-server) [root@localhost ~]# h3c-secplugin controller install
¡ .whl file (Python 3 environment)
[root@localhost ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@localhost ~]# pip3 install SeerEngine_DC_SEC_PLUGIN-E7201-py3-none-any.whl
(neutron-server) [root@localhost ~]# h3c-secplugin controller install
IMPORTANT: Before executing the h3c-secplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.
NOTE: An error might be reported when the h3c-secplugin controller install command is executed. You can safely ignore it.
8. Generate the images for the neutron-server containers.
[root@localhost ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)
[root@localhost ~]# docker ps | grep $neutron_server_image
16d60524b8b3 kolla/centos-source-neutron-server:rocky "dumb-init --single-? 16 months ago Up 2 weeks neutron_server
[root@localhost ~]# docker commit 16d60524b8b3 kolla/neutron-server-h3c (use the UUID obtained in the preceding command)
[root@localhost ~]# docker rm -f neutron_server
[root@localhost ~]# docker tag $neutron_server_image kolla/neutron-server-origin
[root@localhost ~]# docker rmi $neutron_server_image
[root@localhost ~]# docker tag kolla/neutron-server-h3c $neutron_server_image
[root@localhost ~]# docker rmi kolla/neutron-server-h3c
NOTE: If you have already completed "Installing the controller Neutron plug-ins," you do not need to execute the docker tag $neutron_server_image kolla/neutron-server-origin command.
9. Start the neutron-server services.
[root@localhost ~]# source docker-neutron-server.sh
10. Verify the status of the services.
[root@localhost ~]# docker ps --filter "name=neutron_server"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
289e4e132a9b kolla/centos-source-neutron-server:ocata "dumb-init --single-? 1 minutes ago Up 1 minutes neutron_server
Parameters and fields
This section describes parameters in configuration files and fields included in parameters.
neutron.conf
| Parameter | Description |
|---|---|
| service_plugins | Extension plug-ins loaded to OpenStack. Options include firewall, LB, and VPN. Add the service plug-ins based on the network and security requirements. The security plug-in supports the following firewall services: · For the open-source firewall plug-in in agent mode, set the value to firewall. · To resolve the issue that deploying firewall policies and rules takes too long in agent mode, set the value to fwaas_h3c. · For the open-source firewall plug-in not in agent mode, change firewall in the value to firewall_h3c. · For H3C's self-developed firewall 2.0 service, set the value to firewall_v2 (supported only on OpenStack Rocky, Stein, Train, and Ussuri). To configure firewall services, add h3c_security_core to the value. In the /etc/kolla/neutron-server/ directory of neutron_server, you can configure the service_provider only once for the same service. Do not configure the service_provider parameter in fwaas_driver.ini after you configure it in neutron.conf. This rule also applies to LBaaS and VPNaaS. When configuring the firewall service as firewall_h3c, change the value of the driver field in the [fwaas] section of /etc/kolla/neutron-server/fwaas_driver.ini to networking_sec_h3c.fw.h3c_fwplugin_driver.H3CfwaasDriver. When configuring the firewall service as firewall_v2, change the value of the driver field in the [fwaas] section of /etc/kolla/neutron-server/fwaas_driver.ini to networking_sec_h3c.fw2.h3c_fwpluginv2_driver.H3CFwaasV2Driver. |
| service_provider | Directory where the extension plug-ins are saved. |
| admin_user | Admin username for Keystone authentication in OpenStack, for example, neutron. |
| admin_password | Admin password for Keystone authentication in OpenStack, for example, 123456. |
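For instance, enabling the non-agent open-source firewall service described above might produce configuration fragments like the following. This is a sketch modeled on the conventions in this document; section names and values are examples and must match your environment:

```ini
# /etc/kolla/neutron-server/neutron.conf (sketch)
[DEFAULT]
service_plugins = firewall_h3c,h3c_security_core

# /etc/kolla/neutron-server/fwaas_driver.ini (sketch)
[fwaas]
driver = networking_sec_h3c.fw.h3c_fwplugin_driver.H3CfwaasDriver
```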
ml2_conf.ini
| Parameter | Description |
|---|---|
| type_drivers | Driver types. vxlan must be specified as the first driver type. |
| tenant_network_types | Type of the networks to which the tenants belong. For intranet, only vxlan is available. For extranet, only vlan is available. · In the host overlay scenario and the network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type. · In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type. · In the hybrid scenario of host overlay, network overlay with hierarchical port binding, and network overlay without hierarchical port binding, vxlan must be specified as the first network type. In this scenario, you can create a VLAN only from the background CLI, REST API, or Web administration interface. |
| mechanism_drivers | Name of the ML2 driver. To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch,ml2_h3c,openvswitch. To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch. |
| extension_drivers | Names of the ML2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify the qos value for this parameter. If port security is not enabled on OpenStack, you do not need to specify the port_security value for this parameter. (OpenStack Ocata 2017.1 does not support the port_security value.) |
| network_vlan_ranges | Value range for the VLAN IDs of the extranet, for example, physicnet1:1000:2999. |
| vni_ranges | Value range for the VXLAN IDs of the intranet, for example, 1:500. |
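Putting the table together, a typical ml2_conf.ini might look like the following. This is a sketch only: the ranges and the physicnet1 name are the examples from the table, and the section layout follows the standard Neutron ML2 file structure:

```ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers = ml2_h3c,openvswitch
extension_drivers = ml2_extension_h3c,port_security

[ml2_type_vlan]
network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]
vni_ranges = 1:500
```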
neutron_conf
| Parameter | Description |
|---|---|
| url | URL address for accessing Unified Platform. |
| username | Username for logging in to Unified Platform, for example, admin. You do not need to configure a username when the use_neutron_credential parameter is set to True. |
| password | Password for logging in to Unified Platform, for example, Pwd@12345. You do not need to configure a password when the use_neutron_credential parameter is set to True. If the password contains a dollar sign ($), enter a backslash (\) before the dollar sign. |
| domain | Name of the domain where the controller resides, for example, sdn. This parameter has been deprecated. |
| timeout | Amount of time (in seconds) that the Neutron server waits for a response from the controller, for example, 1800. As a best practice, set the waiting time to a value greater than or equal to 1800 seconds. |
| retry | Number of connection request attempts, for example, 10. |
| white_list | Whether to enable the authentication-free user feature on OpenStack. · True—Enable. · False—Disable. |
| use_neutron_credential | Whether to use the OpenStack Neutron username and password to communicate with the controller. · True—Use. · False—Do not use. |
| firewall_force_audit | Whether to audit firewall policies synchronized to the controller by OpenStack. The default value is True for OpenStack Kilo 2015.1 and False for other OpenStack versions. · True—Audits firewall policies synchronized to the controller by OpenStack. The auditing state of the synchronized policies on the controller is True (audited). · False—Does not audit firewall policies synchronized to the controller by OpenStack. The synchronized policies on the controller retain their previous auditing state. |
| sec_output_json_log | Whether to output REST API messages between the controller Neutron security plug-ins and the controller to the OpenStack operating logs in JSON format. · True—Enable. · False—Disable. |
| vendor_rpc_topic | RPC topic of the vendor. This parameter is required when the vendor needs to obtain Neutron data from the controller Neutron plug-ins. The available values are as follows: · VENDOR_PLUGIN—Default value, which means that the parameter does not take effect. · DP_PLUGIN—RPC topic of DPtech. The value of this parameter must be negotiated by the vendor and H3C. |
| enable_https | Whether to enable HTTPS bidirectional authentication. The default value is False. · True—Enable. · False—Disable. Only OpenStack Pike supports this parameter. |
| neutron_plugin_ca_file | Save location for the CA certificate of the controller. As a best practice, save the CA certificate in the /usr/share/neutron directory. Only OpenStack Pike supports this parameter. |
| neutron_plugin_cert_file | Save location for the Cert certificate of the controller. As a best practice, save the Cert certificate in the /usr/share/neutron directory. Only OpenStack Pike supports this parameter. |
| neutron_plugin_key_file | Save location for the Key certificate of the controller. As a best practice, save the Key certificate in the /usr/share/neutron directory. Only OpenStack Pike supports this parameter. |
| cgsr_fw_context_limit | Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context. This parameter takes effect only when the value of the firewall_type parameter is CGSR_SHARE_BY_COUNT. Only OpenStack Pike supports this parameter. |
| enable_iam_auth | Whether to enable IAM interface authentication. · True—Enable. · False—Disable. When connecting to the Unified Platform, you can set the value to True to use the IAM interface for authentication. The default value is False. Only OpenStack Newton supports this parameter. This parameter is obsolete. |
| enable_firewall_metadata | Whether to allow the CloudOS platform to issue firewall-related fields such as the resource pool name to the controller. This parameter is used only for communication with the CloudOS platform. Only OpenStack Pike supports this parameter. |
| enable_router_nat_without_firewall | Whether to enable NAT when no firewall is configured for the tenant. · True—Enable NAT when no firewall is configured. This setting automatically creates default firewall resources to implement NAT if the vRouter has been bound to an external network. · False—Do not enable NAT when no firewall is configured. The default value is False. Only OpenStack Pike supports this parameter. |
| cloud_region_name | If one cloud platform is connected to the controller, you can modify this parameter only when the following requirements are met: · The cloud platform is connected to the controller for the first time after upgrade. · No tenant resources are newly created on the controller. Make sure the value for this parameter is the same as the name configured on the controller, and configure the cloud platform as the default platform. If multiple cloud platforms are connected to the controller, the rules for the single cloud platform interoperability scenario apply to the first cloud platform. For the other cloud platforms, you must change the value of this parameter to match the values for these cloud platforms, and make sure they are the same as those configured on the controller. This parameter cannot be modified after the cloud platforms are connected to the controller. |
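After editing, a quick shell check can confirm numeric values such as timeout meet the recommendation above. This is a sketch only: the sample file stands in for your real /etc/kolla/neutron-server/neutron.conf, and the section name is the one used elsewhere in this document.

```shell
#!/bin/sh
# Sanity-check sketch: confirm the timeout value in a Neutron-style .conf
# meets the recommended minimum of 1800 seconds. A sample file is created
# here for illustration; point check_timeout at your real config file.
check_timeout() {
  awk -F= '/^[[:space:]]*timeout[[:space:]]*=/ {
    gsub(/[[:space:]]/, "", $2); print $2; exit
  }' "$1"
}

cat > /tmp/neutron_check.conf <<'EOF'
[SEC_SDNCONTROLLER]
timeout = 1800
retry = 10
EOF

t=$(check_timeout /tmp/neutron_check.conf)
if [ -n "$t" ] && [ "$t" -ge 1800 ]; then
  echo "timeout=$t OK"
else
  echo "timeout=$t is below the recommended 1800 seconds"
fi
```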
Installing the security plug-in on the OpenStack controller node (octavia_api container)
Octavia is the OpenStack community's Load Balancing as a Service (LBaaS) component. It dynamically manages load balancing resources in cloud environments, such as traffic distribution and health checks.
When the OpenStack version is Ussuri, deploy the controller security plug-in in the octavia-api container to deeply integrate load balancing with the controller. This enhances network automation, flexibility, and scalability in the cloud environment.
Perform the following tasks.
1. Generate the startup script for the octavia-api container.
[root@localhost ~]# runlike octavia_api>docker-octavia-api.sh
2. Edit configuration file neutron.conf.
Open the neutron.conf configuration file by using the vi editor.
[root@localhost ~]# sudo vi /etc/kolla/neutron-server/neutron.conf
Press I to switch to insert mode, and then edit the configuration file. After editing the file, press ESC to exit the insert mode. Then enter :wq to save the neutron.conf configuration file and exit the vi editor.
For OpenStack Ussuri, the security plug-in supports firewall and load balancing services. The firewall service supports only firewall_v2 and the load balancing service supports only Octavia. Edit the neutron.conf configuration file as follows:
[DEFAULT]
service_plugins=firewall_v2,h3c_security_core
[service_providers]
service_provider=FIREWALL_V2:H3C:networking_sec_h3c.fw2.h3c_fwpluginv2_driver.H3CFwaasV2Driver:default
service_provider=LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
3. Edit the octavia.conf configuration file.
Open the octavia.conf configuration file by using the vi editor.
[root@localhost ~]# sudo vi /etc/kolla/octavia/octavia.conf
Press I to switch to insert mode, and then edit the configuration file. After editing the file, press ESC to exit the insert mode. Then enter :wq to save the octavia.conf configuration file and exit the vi editor.
[api_settings]
enabled_provider_drivers=h3clb:The H3C lb driver
h3clb=octavia.api.drivers.h3clb_driver.driver:H3cProviderDriver
[controller_worker]
network_driver=allowed_address_pairs_driver
[oslo_concurrency]
lock_path=/var/lib/octavia/tmp
[SEC_SDNCONTROLLER]
url=http://127.0.0.1:30000
username=admin
password=Pwd@12345
domain=sdn
timeout=1800
retry=10
white_list=False
use_neutron_credential=False
sec_output_json_log=True
enable_iam_auth=False
enable_https=False
cloud_region_name=default
For the parameter descriptions of the [SEC_SDNCONTROLLER] configuration block, see 3 through 5.
4. Copy the plug-in package to the octavia_api container.
[root@localhost ~]# docker cp SeerEngine_DC_SEC_PLUGIN-R7106-py3-none-any.whl octavia_api:/
5. Enter the octavia_api container and install the plug-in package.
[root@localhost ~]# docker exec -it -u root octavia_api bash
[root@localhost ~]# pip3 install SeerEngine_DC_SEC_PLUGIN-R7106-py3-none-any.whl
[root@localhost ~]# h3c-secplugin controller install
TIP:
· An error might be reported when the h3c-secplugin controller install command is executed. Just ignore it.
· Before executing the h3c-secplugin controller install command, make sure no octavia.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.
6. Generate the images for the octavia_api containers.
[root@localhost ~]# octavia_api_image=$(docker ps --format {{.Image}} --filter name=octavia_api)
[root@localhost ~]# docker ps | grep $octavia_api_image
16d60524b8b3 kolla/centos-source-octavia-api:ussuri "dumb-init --single-…" 16 months ago Up 2 weeks octavia_api
[root@localhost ~]# docker commit 16d60524b8b3 kolla/octavia-api-h3c (use the UUID obtained in the preceding command)
[root@localhost ~]# docker rm -f octavia_api
[root@localhost ~]# docker tag $octavia_api_image kolla/octavia-api-origin
[root@localhost ~]# docker rmi $octavia_api_image
[root@localhost ~]# docker tag kolla/octavia-api-h3c $octavia_api_image
[root@localhost ~]# docker rmi kolla/octavia-api-h3c
7. Start the octavia-api container.
[root@localhost ~]# source docker-octavia-api.sh
8. Check the octavia-api container status. If it displays Up, the container is running.
[root@localhost ~]# docker ps --filter "name=octavia_api"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
289e4e132a9b kolla/centos-source-octavia-api:ussuri "dumb-init --single-…"
1 minutes ago Up 1 minutes octavia_api
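If you want to script this check rather than read the docker ps output by eye, the status test reduces to matching a leading "Up". This is a minimal sketch; in a live environment you would feed it the output of docker ps --filter name=octavia_api --format '{{.Status}}'.

```shell
#!/bin/sh
# Return success when a container status string indicates a running container.
# In a live environment, obtain the status with:
#   docker ps --filter name=octavia_api --format '{{.Status}}'
is_up() {
  case $1 in
    Up*) return 0 ;;
    *)   return 1 ;;
  esac
}

is_up "Up 2 weeks" && echo "octavia_api running"
is_up "Exited (0) 3 days ago" || echo "octavia_api not running"
```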
Upgrading the controller security plug-in
CAUTION:
· Services might be interrupted during the upgrade. Make sure you understand the impact of the upgrade before performing it on a live network.
· Because the Kolla environment cannot automatically inherit the plug-in configuration, back up the settings in configuration files /etc/kolla/neutron-server/neutron.conf and /etc/kolla/octavia/octavia.conf (if any) before upgrading the plug-in. After the upgrade, modify the parameter settings based on the backup configuration to ensure configuration consistency before and after the upgrade.
After you upgrade converged plug-ins from a version earlier than E6503 to E6503 or later (or E6601 or later), some parameters in the ml2_sec_conf_h3c.ini configuration file are moved to the Web interface of the controller. After installing the new converged plug-in version and the new controller version, you must change the values of those parameters to their values before the upgrade.
1. Save the ml2_sec_conf_h3c.ini.bak or ml2_sec_conf_h3c.ini.h3c_bak file in the /etc/neutron/plugins/ml2 directory of the controller node.
2. Log in to the controller, click Automation on the top navigation bar, and then select Virtual Networking > OpenStack from the left navigation pane. Click Add OpenStack-Based Cloud Platform, and then click the Parameter Settings tab to edit the parameters based on the information in the ml2_sec_conf_h3c.ini.bak or ml2_sec_conf_h3c.ini.h3c_bak file. Table 4 shows mappings between parameters on the controller and in the configuration file. Table 5 shows the parameters that become obsolete after upgrade.
CAUTION: If an OpenStack-based cloud platform was configured on the controller before the upgrade, click Edit in the corresponding Actions column after the upgrade to open the Edit OpenStack-Based Cloud Platform page, confirm the parameters on the Security Settings tab, and then click OK.
Table 4 Mappings between parameters on the controller and in the configuration file
| Parameters in the ml2_sec_conf_h3c.ini file before upgrade | Parameters on the Web interface of the controller after upgrade |
|---|---|
| directly_external: OFF | Firewall: On for All |
| directly_external: ANY | Firewall: Off for All |
| directly_external: SUFFIX directly_external_suffix: name (The name argument represents the suffix of the name of the vRouter.) | Firewall: Off for vRouters Matching Suffix |
| tenant_gw_selection_strategy: match_gateway_name tenant_gateway_name: name (The name argument represents the name of the border gateway.) | External Connectivity Settings: Single-Segment Tenant Border Gateway Policy: Match Border Gateway Name |
| enable_multi_gateways: True | External Connectivity Settings: Single-Segment Tenant Border Gateway Policy: Match Physical Network Name of vRouter External Network |
| enable_multi_segments: True | External Connectivity Settings: Multi-Segment Tenant Border Gateway Policy: Match Segmented Physical Network Name of External Network |
| auto_create_resource: True | Auto Create Resource: On |
| resource_mode: CORE_GATEWAY firewall_type: CGSR | Firewall Resource Type: Service Gateway Firewall Resource Mode: Exclusive |
| resource_mode: CORE_GATEWAY firewall_type: CGSR_SHARE | Firewall Resource Type: Service Gateway Firewall Resource Mode: Shared |
| resource_mode: CORE_GATEWAY firewall_type: NFV_CGSR | Firewall Resource Type: Service Gateway Firewall Resource Mode: NFV |
| resource_mode: SERVICE_LEAF firewall_type: ACSR | Firewall Resource Type: Service Leaf Firewall Resource Mode: Exclusive |
| resource_mode: SERVICE_LEAF firewall_type: ACSR_SHARE | Firewall Resource Type: Service Leaf Firewall Resource Mode: Shared |
| lb_type: CGSR | LB Resource Mode: Exclusive |
| lb_type: CGSR_SHARE | LB Resource Mode: Shared |
| lb_type: NFV_CGSR | LB Resource Mode: NFV |
| resource_share_count: 1 | Shared Resource Nodes: 1 |
| lb_resource_mode: SP | LB Resource Pool Mode: Single Resource Pool |
| lb_resource_mode: MP | LB Resource Pool Mode: Multiple Resource Pools |
| lb_enable_snat: True | SNAT for Loadbalancer: On |
| lb_member_slow_shutdown: True | Real Service Slow Shutdown: On |
| enable_lb_xff: True | XFF for Loadbalancer: On |
| enable_lb_certchain: True | Send Full Certificate Chain on SSL Server: On |
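To avoid transcribing values by hand, you can dump the Table 4 keys and their old values from the backup file before re-entering them on the Web interface. This is a sketch only: the key names follow Table 4, and the sample file created below stands in for your real ml2_sec_conf_h3c.ini.bak in /etc/neutron/plugins/ml2.

```shell
#!/bin/sh
# List the Table 4 keys and their old values from a ml2_sec_conf_h3c.ini
# backup. A sample backup file is created here for illustration only.
cat > /tmp/ml2_sec_conf_h3c.ini.bak <<'EOF'
[SEC_SDNCONTROLLER]
directly_external = SUFFIX
directly_external_suffix = ext
resource_mode = CORE_GATEWAY
firewall_type = CGSR_SHARE
lb_type = CGSR
EOF

# Print only the keys that map to Web-interface settings in Table 4.
grep -E '^(directly_external|directly_external_suffix|tenant_gw_selection_strategy|tenant_gateway_name|enable_multi_(gateways|segments)|auto_create_resource|resource_mode|firewall_type|lb_type|resource_share_count|lb_resource_mode|lb_enable_snat|lb_member_slow_shutdown|enable_lb_(xff|certchain))[[:space:]]*=' \
  /tmp/ml2_sec_conf_h3c.ini.bak
```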
Table 5 Obsolete parameters after upgrade
| Parameter | Description |
|---|---|
| firewall_type | Type of the firewalls created on the controller. The following firewall type is no longer supported: CGSR_SHARE_BY_COUNT—Context-based gateway service type firewall. All such firewalls use the same context when the number of contexts reaches the threshold set by the cgsr_fw_context_limit parameter. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY. Only OpenStack Pike supports this firewall type. |
| fw_share_by_tenant | Whether to enable exclusive use of a gateway service type firewall context by a single tenant and allow the context to be shared by service resources of the tenant when the firewall type is CGSR_SHARE or ACSR_SHARE. |
| cgsr_fw_context_limit | Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context. This parameter takes effect only when the value of the firewall_type parameter is CGSR_SHARE_BY_COUNT. Only OpenStack Pike supports this parameter. |
| nfv_ha | Whether the NFV and NFV_SHARE resources support stacking. · True—Support. · False—Do not support. |
Upgrade with the container removed
1. Remove the neutron_server container installed with the old version of the plug-ins and the container image.
[root@localhost ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)
a. If no docker-neutron-server.sh script file exists, execute the following command. If such a file exists, skip this step.
[root@localhost ~]# runlike neutron_server>docker-neutron-server.sh
b. Remove the container installed with the old version of the plug-ins and the container image.
[root@localhost ~]# docker rm -f neutron_server
[root@localhost ~]# docker rmi $neutron_server_image
c. Restore the default container and image in the Kolla environment.
[root@localhost ~]# docker tag kolla/neutron-server-origin $neutron_server_image
[root@localhost ~]# docker rmi kolla/neutron-server-origin
[root@controller ~]# source docker-neutron-server.sh
IMPORTANT: Before restarting the neutron_server container, you must restore the configurations in the neutron.conf file and remove the plug-in-related configuration.
2. Remove the octavia_api container installed with the old version of the plug-ins and the container image.
[root@localhost ~]# octavia_api_image=$(docker ps --format {{.Image}} --filter name=octavia_api)
a. If no docker-octavia-api.sh script file exists, execute the following command. If such a file exists, skip this step.
[root@localhost ~]# runlike octavia_api>docker-octavia-api.sh
b. Remove the container installed with the old version of the plug-ins and the container image.
[root@localhost ~]# docker rm -f octavia_api
[root@localhost ~]# docker rmi $octavia_api_image
c. Restore the default container and image in the Kolla environment.
[root@localhost ~]# docker tag kolla/octavia-api-origin $octavia_api_image
[root@localhost ~]# docker rmi kolla/octavia-api-origin
[root@controller ~]# source docker-octavia-api.sh
IMPORTANT: Before restarting the octavia_api container, you must restore the configurations in the octavia.conf file and remove the plug-in-related configuration.
3. Install the new version of plug-ins. For the installation procedure, see "Installing the controller security plug-in".
Upgrade with the container retained
To upgrade the plug-ins with the container retained, you must remove the old version of the plug-ins and then install the new version of the plug-ins on the neutron_server container and octavia_api container (if any).
Upgrade the plug-in in the neutron_server container
1. Access the neutron_server container and remove the old version of the plug-ins.
[root@controller ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@controller ~]# h3c-secplugin controller uninstall
Remove service
Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-sec-agent.service.
Restore config files
Uninstallation complete.
(neutron-server) [root@controller ~]# pip uninstall seerengine-dc-sec-plugin
Uninstalling SeerEngine-DC-SEC-PLUGIN-E3608:
/usr/bin/h3c-sec-agent
/usr/bin/h3c-secplugin
……
2. Install the new version of the plug-ins.
¡ .egg file:
[root@controller ~]# docker cp SeerEngine_DC_SEC_PLUGIN-D3609-py2.7.egg neutron_server:/
[root@controller ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@controller ~]# easy_install SeerEngine_DC_SEC_PLUGIN-D3609-py2.7.egg
(neutron-server) [root@controller ~]# h3c-secplugin controller install
¡ .whl file:
[root@localhost ~]# docker cp SeerEngine_DC_SEC_PLUGIN-E6401-py3-none-any.whl neutron_server:/
[root@localhost ~]# docker exec -it -u root neutron_server bash
(neutron-server) [root@localhost ~]# pip3 install SeerEngine_DC_SEC_PLUGIN-E6401-py3-none-any.whl
(neutron-server) [root@localhost ~]# h3c-secplugin controller install
3. Update the configuration file.
4. After installation, check the [SEC_SDNCONTROLLER] configuration block in configuration file neutron.conf and add new configuration items introduced in the latest version. For more information, see “3.”
5. Exit and then restart the neutron_server container.
(neutron-server)[root@controller01 ~]# exit
[root@controller01 ~]# docker restart neutron_server
6. Create the latest neutron-server container image. For more information, see “8.”
Upgrade the plug-in in the octavia_api container (if any)
1. Access the octavia_api container and remove the old version of the plug-ins.
[root@controller ~]# docker exec -it -u root octavia_api bash
(octavia-api) [root@controller ~]# h3c-secplugin controller uninstall
Remove service
Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-sec-agent.service.
Restore config files
Uninstallation complete.
(octavia-api) [root@controller ~]# pip uninstall seerengine-dc-sec-plugin
Uninstalling SeerEngine-DC-SEC-PLUGIN-E7106:
/usr/bin/h3c-sec-agent
/usr/bin/h3c-secplugin
……
2. Install the new version of the plug-ins.
[root@controller ~]# docker cp SeerEngine_DC_SEC_PLUGIN-R7106-py3-none-any.whl octavia_api:/
[root@controller ~]# docker exec -it -u root octavia_api bash
(octavia-api) [root@controller ~]# pip3 install SeerEngine_DC_SEC_PLUGIN-R7106-py3-none-any.whl
(octavia-api) [root@controller ~]# h3c-secplugin controller install
3. Update the configuration file.
4. After installation, check the [SEC_SDNCONTROLLER] configuration block in configuration file octavia.conf and add new configuration items introduced in the latest version. For more information, see “3.”
5. Exit and then restart the octavia_api container.
(octavia-api)[root@controller01 ~]# exit
[root@controller01 ~]# docker restart octavia_api
6. Create the latest octavia_api container image. For more information, see “6.”
IMPORTANT:
· An error might be reported when the h3c-secplugin controller install command is executed. Just ignore it.
· Before executing the h3c-secplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.
Deploying OpenStack plug-ins for Kubernetes
Installing and upgrading the controller Neutron plug-ins
Installing the controller Neutron plug-ins
Obtaining the controller Neutron plug-in installation package
The controller Neutron plug-ins are included in the controller OpenStack package. Obtain the controller OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.
IMPORTANT: Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.
Obtaining the containerized plug-in installation script
Obtain the containerized plug-in installation script of the required version. The software package is named SeerEngine_DC_PLUGIN_SHELL_INSTALL-version.zip, where version refers to the software package version number. After decompressing the software package, neutron_h3c_plugin_init.sh is the script used for deploying the Neutron plug-in for Kubernetes.
Installing the controller Neutron plug-in in neutron-server in Kubernetes
1. On all nodes running neutron-server, create the folder /neutron_plugin_dir/net_packages, and make sure this folder is at the same level as /root.
[root@localhost ~]# mkdir /neutron_plugin_dir/net_packages
2. Obtain the basic environment offline packages in .rpm or .whl format from the Internet. Based on the operating system and Python version in the neutron-server Pod, choose the appropriate basic environment offline packages, which include the following contents.
¡ Python2: python-pip, python-setuptools
¡ python-websocket-client version 0.56. As a best practice, use the basic environment offline package in .whl format.
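With the packages staged offline, the install inside the Pod might look like the following. This is a sketch only: the pip3 command, the --no-index/--find-links flags, and the version pin are assumptions based on the layout above, and the helper echoes the command for review rather than executing it.

```shell
#!/bin/sh
# Offline-install sketch: install the staged websocket-client wheel from the
# package directory without touching the network. Paths are examples.
pkg_dir=/neutron_plugin_dir/net_packages

install_offline() {
  # Echo the command here so it can be reviewed; drop the echo to execute.
  echo pip3 install --no-index --find-links "$1" websocket-client==0.56.0
}

install_offline "$pkg_dir"
```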
3. Place the plug-in installation package and basic environment packages obtained previously into the /neutron_plugin_dir/net_packages directory on each node, and place the shell script in the /neutron_plugin_dir directory.
NOTE: Modify the shell script permissions to 755.
[root@neutron_server]# chmod 755 neutron_h3c_plugin_init.sh
4. Edit configuration files ml2_conf.ini and neutron.conf.
See the plug-in installation guide according to the host operating system.
¡ For the CentOS operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS and Kylin.
¡ For the Ubuntu operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu.
5. Edit the deployment yaml file of neutron-server.
a. Edit the command and args settings in the yaml file.
command: ["/bin/bash", "-c", "--"]
args: ["/neutron_plugin_dir/neutron_h3c_plugin_init.sh; /bin/bash /root/start_neutron_server.sh"]
b. Add the mountPath setting to the yaml file.
mountPath: /neutron_plugin_dir/
name: neutron-plugin-dir
c. Add the hostPath setting to the yaml file.
hostPath:
path: /neutron_plugin_dir/
name: neutron-plugin-dir
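Taken together, steps a through c might land in the Deployment spec roughly as follows. This is a sketch only: the image name is a placeholder, and your actual manifest layout may differ.

```yaml
spec:
  template:
    spec:
      containers:
        - name: neutron-server
          image: neutron-server:example        # placeholder image
          command: ["/bin/bash", "-c", "--"]
          args: ["/neutron_plugin_dir/neutron_h3c_plugin_init.sh; /bin/bash /root/start_neutron_server.sh"]
          volumeMounts:
            - mountPath: /neutron_plugin_dir/
              name: neutron-plugin-dir
      volumes:
        - name: neutron-plugin-dir
          hostPath:
            path: /neutron_plugin_dir/
```

Mounting the host directory this way is what lets the init script and packages staged in step 3 survive Pod restarts.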
NOTE:
· Because of the deployment flexibility of the containerized environment, the locations of the ml2_conf.ini and neutron.conf files on the host might vary. Edit the corresponding configuration files according to the specific environment.
· Because of the various deployment methods for the neutron-server Pod, the edited yaml file might vary when deployment is not used. Edit it according to the specific Kubernetes environment.
Deleting the native neutron-server Pod
If neutron-server is started in deployment mode, execute the following command to delete the neutron-server Pod.
[root@localhost ~]# kubectl delete deployments neutron-server-deployment -n namespace
Restarting the neutron-server Pod
[root@localhost ~]# kubectl create -f neutron-server-deployment.yaml
NOTE: Because of the deployment flexibility of the Kubernetes environment, edit the deployment name, namespace, and yaml file name in the previous commands according to the specific environment.
Verifying the controller node state
Log in to the controller, and navigate to the Automation > Virtual Networking > OpenStack page to view the controller node state. If the controller node is displayed in green, the plug-in is operating correctly.
NOTE: To avoid container restart failure, do not delete files and dependency packages from the /neutron_plugin_dir/ directory.
Upgrading the controller Neutron plug-in
1. Replace the Neutron plug-in installation package in the neutron_plugin_dir/net_packages path of the corresponding node.
2. For more information about deleting the native neutron-server Pod, see "Deleting the native neutron-server Pod."
3. For more information about restarting the neutron-server Pod, see "Restarting the neutron-server Pod."
Installing and upgrading the controller security plug-in
Installing the controller security plug-in
Obtaining the installation package for controller Neutron security plug-in
The controller Neutron plug-ins are included in the controller OpenStack package. Obtain the controller OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.
IMPORTANT: Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.
Obtaining the containerized security plug-in installation script
Obtain the containerized security plug-in installation script of the required version. The software package is named SeerEngine_DC_SEC_PLUGIN_SHELL_INSTALL-version.zip, where version refers to the software package version number. After decompressing the software package, neutron_h3c_sec_plugin_init.sh is the script used for deploying the Neutron security plug-in for Kubernetes.
Installing the controller Neutron security plug-in in Kubernetes neutron-server
1. On all nodes running neutron-server, create the folder /neutron_plugin_dir/sec_packages, and make sure this folder is at the same level as /root/.
[root@localhost ~]# mkdir /neutron_plugin_dir/sec_packages
2. Place the security plug-in installation package obtained previously into the /neutron_plugin_dir/sec_packages directory on each node, and place the shell script in the /neutron_plugin_dir/ directory.
NOTE: Modify the shell script permissions to 755.
[root@neutron_server]# chmod 755 neutron_h3c_sec_plugin_init.sh
3. Edit the neutron.conf configuration file. For more information, see the associated installation guide.
¡ For the CentOS operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS and Kylin.
¡ For the Ubuntu operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu.
4. Edit the ml2_conf.ini configuration file.
Add configuration items related to [SEC_SDNCONTROLLER] to ml2_conf.ini based on the host operating system. For more information, see the modification of the ml2_sec_conf_h3c.ini file in the associated installation guide.
¡ For the CentOS operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS and Kylin.
¡ For the Ubuntu operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu.
5. Edit the deployment yaml file for neutron-server.
a. Edit the command and args settings in the yaml file as follows.
command: ["/bin/bash", "-c", "--"]
args: ["/neutron_plugin_dir/neutron_h3c_plugin_init.sh; /bin/bash /neutron_plugin_dir/neutron_h3c_sec_plugin_init.sh; /bin/bash /root/start_neutron_server.sh"]
NOTE: If the task in "Installing and upgrading the controller Neutron plug-ins" has been executed, place /neutron_plugin_dir/neutron_h3c_sec_plugin_init.sh after /neutron_plugin_dir/neutron_h3c_plugin_init.sh in the args setting.
b. Add the mountPath setting to the yaml file. (Skip this step if the task in "Installing and upgrading the controller Neutron plug-ins" has been executed.)
mountPath: /neutron_plugin_dir/
name: neutron-plugin-dir
c. Add the hostPath setting to the yaml file. (Skip this step if the task in "Installing and upgrading the controller Neutron plug-ins" has been executed.)
hostPath:
path: /neutron_plugin_dir/
name: neutron-plugin-dir
NOTE:
· Because of the deployment flexibility of the containerized environment, the location of the neutron.conf file on the host might vary. Edit the corresponding configuration file according to the specific environment.
· Because of the various deployment methods for the neutron-server Pod, the yaml file to edit might vary when a Deployment is not used. Edit it according to the specific Kubernetes environment.
Deleting the native neutron-server pod
If neutron-server is started in deployment mode, execute the following command to delete the neutron-server Pod.
[root@localhost ~]# kubectl delete deployments neutron-server-deployment -n namespace
Restarting the neutron-server pod
[root@localhost ~]# kubectl create -f neutron-server-deployment.yaml
NOTE:
· Because of the deployment flexibility of the Kubernetes environment, the content displayed in gray in the previous commands must be edited according to the specific environment.
· To avoid container restart failure, do not delete the files and dependency packages in the /neutron_plugin_dir/ directory.
Upgrading the controller Neutron security plug-in
1. Replace the Neutron security plug-in installation package in the /neutron_plugin_dir/sec_packages path of the corresponding node.
2. For more information about deleting the native neutron-server Pod, see "Deleting the native neutron-server Pod."
3. For more information about restarting the neutron-server Pod, see "Restarting the neutron-server Pod."
Upgrading non-converged plug-ins to converged plug-ins
1. Upgrade the controller to a version that supports converged plug-ins.
2. Remove non-converged plug-ins:
a. Access the neutron-server container:
[root@neutron_server ~]# docker exec -itu root neutron_server bash
b. Remove the plug-ins on the controller node:
- Versions earlier than E3702
[root@localhost ~]# h3c-vcfplugin controller uninstall
- E3702 and later versions
[root@localhost ~]# h3c-sdnplugin controller uninstall
c. Remove the software packages from all nodes:
CentOS 8 operating system:
[root@localhost ~]# pip3 uninstall seerengine-dc-plugin
Other CentOS operating systems:
[root@localhost ~]# pip uninstall seerengine-dc-plugin
IMPORTANT: The commands for removing plug-ins vary by software version.
3. Install converged plug-ins:
a. Install converged plug-ins and security plug-ins as shown in "Deploying OpenStack plug-ins by using Kolla Ansible."
b. Use the vi editor to open the ml2_conf.ini configuration file.
[root@localhost ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
c. Press I to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.
[VCFCONTROLLER]
sdnc_rpc_url = ws://127.0.0.1:30000
sdnc_rpc_ping_interval = 60
websocket_fragment_size = 102400
cloud_region_name = default
The following table describes the parameters in the ml2_conf.ini file.

| Parameter | Description |
|---|---|
| sdnc_rpc_url | IP address and WebSocket-type interface number of Unified Platform. Set this parameter when Metadata is enabled or DHCP fail-safe is supported, based on the URL of Unified Platform. For example, if the URL of Unified Platform is http://127.0.0.1:30000, set this parameter to ws://127.0.0.1:30000. |
| cloud_region_name | If one cloud platform is connected to the controller, you can modify this parameter only when the cloud platform connects to the controller for the first time after the upgrade and no new tenant resources have been created on the controller. Make sure the value of this parameter is the same as the name configured on the controller, and configure the cloud platform as the default platform. If multiple cloud platforms are connected to the controller, the rules for the single cloud platform scenario apply to the first cloud platform. For each of the other cloud platforms, set this parameter to the corresponding name configured on the controller. This parameter cannot be modified after the cloud platforms are connected to the controller. You must specify different vxlan vni_ranges values for different cloud platforms. |
d. Delete the backup files generated for non-converged plug-ins, or change their file name suffixes. The backup files are ml2_conf_h3c.ini.bak and ml2_conf_h3c.ini.h3c_bak. If you do not perform this operation, some parameters of the non-converged plug-ins might be modified or initialized during the next security plug-in upgrade.
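Step d can be scripted as in the sketch below. A temporary directory stands in for /etc/neutron/plugins/ml2 so the sketch runs without touching a live node, and the .old suffix is an arbitrary choice:

```shell
# Temporary directory stands in for /etc/neutron/plugins/ml2 (adapt in production).
ML2_DIR=$(mktemp -d)
touch "$ML2_DIR/ml2_conf_h3c.ini.bak" "$ML2_DIR/ml2_conf_h3c.ini.h3c_bak"

# Rename the non-converged plug-in backups so later security plug-in
# upgrades cannot read them back in.
for f in "$ML2_DIR"/ml2_conf_h3c.ini.bak "$ML2_DIR"/ml2_conf_h3c.ini.h3c_bak; do
  [ -f "$f" ] && mv "$f" "$f.old"
done
```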
4. Configure parameters on the controller:
Some parameters in the ml2_conf_h3c.ini configuration file for non-converged plug-ins have been moved to the Web interface on the controller. After installing converged plug-ins, you must change the values of the parameters to the values before upgrade.
a. Save the ml2_conf_h3c.ini.bak or ml2_conf_h3c.ini.h3c_bak file in the /etc/neutron/plugins/ml2 directory of the controller node.
b. Log in to the controller, click Automation on the top navigation bar, and then select Virtual Networking > OpenStack from the left navigation pane. Click Add OpenStack-Based Cloud Platform, and then click the Parameter Settings tab to edit the parameters based on the information in the ml2_conf_h3c.ini.bak or ml2_conf_h3c.ini.h3c_bak file.
Table 6 Mapping between parameters on the controller and in the configuration file

| Parameters in the ml2_conf_h3c.ini file before upgrade | Parameters on the controller after upgrade |
|---|---|
| cloud_region_name | Name |
| hybrid_vnic | VLAN to VXLAN Conversion |
| enable_metadata: True, enable_dhcp_hierarchical_port_binding: True | Network Node Access Policy: VLAN |
| enable_metadata: True, enable_dhcp_hierarchical_port_binding: False | Network Node Access Policy: VXLAN |
| enable_metadata: False, enable_dhcp_hierarchical_port_binding: False | Network Node Access Policy: No Access |
| ip_mac_binding | IP-MAC Anti-Spoofing |
| directly_external: OFF | Firewall: On for All |
| directly_external: ANY | Firewall: Off for All |
| directly_external: SUFFIX, directly_external_suffix: name, where name represents the suffix of the vRouter name | Firewall: Off for vRouters Matching Suffix |
| tenant_gw_selection_strategy: match_gateway_name, tenant_gateway_name: name, where name represents the name of the border gateway | External Connectivity Settings: Single-Segment Tenant Border Gateway Policy: Match Border Gateway Name |
| enable_multi_gateways: True | External Connectivity Settings: Single-Segment Tenant Border Gateway Policy: Match Physical Network Name of vRouter External Network |
| enable_bind_router_gateway_with_specified_name: True | External Connectivity Settings: Single-Segment Tenant Border Gateway Policy: Match External Network Name of vRouter |
| enable_multi_segments: True | External Connectivity Settings: Multi-Segment Tenant Border Gateway Policy: Match Physical Network Name of vRouter External Network |
| deploy_network_resource_gateway | Preconfigure Border Gateway for External Network |
| network_force_flat | Forcibly Convert External Network to Flat Network |
| enable_network_l3vni: False | Automatic Allocation of L3VNIs for External Networks: Off |
| dhcp_lease_time | DHCP Lease Duration |
| generate_vrf_based_on_router_name: False | VRF Name Generation Method on vRouter: Auto |
| generate_vrf_based_on_router_name: True | VRF Name Generation Method on vRouter: Use vRouter Name |
| vds_name | Default VDS name |
| auto_create_resource: True | Auto Create Resource: On |
| resource_mode: CORE_GATEWAY, firewall_type: CGSR | Firewall Resource Type: Service Gateway, Firewall Resource Mode: Exclusive |
| resource_mode: CORE_GATEWAY, firewall_type: CGSR_SHARE | Firewall Resource Type: Service Gateway, Firewall Resource Mode: Shared |
| resource_mode: CORE_GATEWAY, firewall_type: NFV_CGSR | Firewall Resource Type: Service Gateway, Firewall Resource Mode: NFV |
| resource_mode: SERVICE_LEAF, firewall_type: ACSR | Firewall Resource Type: Service Leaf, Firewall Resource Mode: Exclusive |
| resource_mode: SERVICE_LEAF, firewall_type: ACSR_SHARE | Firewall Resource Type: Service Leaf, Firewall Resource Mode: Shared |
| lb_type: CGSR | LB Resource Mode: Exclusive (Service Gateway) |
| resource_mode: CORE_GATEWAY, lb_type: CGSR_SHARE | Resource Type: Service Gateway, LB Resource Mode: Shared (Service Gateway) |
| resource_mode: CORE_GATEWAY, lb_type: NFV_CGSR | Resource Type: Service Gateway, LB Resource Mode: NFV (Service Gateway) |
| resource_share_count: 1 | Shared Resource Nodes: 1 |
| lb_resource_mode: SP | LB Resource Pool Mode: Single Resource Pool |
| lb_resource_mode: MP | LB Resource Pool Mode: Multiple Resource Pools |
| lb_enable_snat: True | SNAT for Loadbalancer: On |
| lb_member_slow_shutdown: True | Real Service Slow Shutdown: On |
| enable_lb_xff: True | XFF for Loadbalancer: On |
| enable_lb_certchain: True | Send Full Certificate Chain on SSL Server: On |
5. Review the parameters that are obsolete after the upgrade.

| Parameter | Description |
|---|---|
| firewall_type | Type of the firewalls created on the controller. The CGSR_SHARE_BY_COUNT firewall type is no longer supported. This type represented context-based gateway service firewalls that all use the same context when the number of contexts reaches the threshold set by the cgsr_fw_context_limit parameter. It was available only when the resource_mode parameter was set to CORE_GATEWAY, and only OpenStack Pike supported it. |
| fw_share_by_tenant | Whether to enable exclusive use of a gateway service type firewall context by a single tenant and allow the context to be shared by service resources of the tenant when the firewall type is CGSR_SHARE or ACSR_SHARE. |
| cgsr_fw_context_limit | Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context. This parameter takes effect only when the firewall_type parameter is set to CGSR_SHARE_BY_COUNT. Only OpenStack Pike supports this parameter. |
| nfv_ha | Whether the NFV and NFV_SHARE resources support stacking: True (supported) or False (not supported). |
6. Configure the following settings on the controller page:
a. Configure the VNI range to be the same as the vni_ranges parameter value in the ml2_conf.ini configuration file before the upgrade.
b. Make sure a VXLAN pool exists on the controller after the upgrade, and that the VXLAN pool range does not conflict with the vni_ranges range in the ml2_conf.ini file before the upgrade. As a best practice, make sure the VXLAN pool range includes the l3_vni_ranges range in the ml2_conf_h3c.ini file before the upgrade.
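The range checks in steps a and b can be sanity-checked with a small overlap test like the sketch below. The function and the range values are hypothetical; substitute the actual vni_ranges value from ml2_conf.ini and the VXLAN pool range from the controller, and apply the result according to the rules above.

```shell
# Check whether two numeric ranges written as "low:high" overlap.
ranges_overlap() {
  local a1=${1%%:*} a2=${1##*:} b1=${2%%:*} b2=${2##*:}
  # Ranges overlap when each one starts no later than the other ends.
  [ "$a1" -le "$b2" ] && [ "$b1" -le "$a2" ]
}

# Hypothetical values: vni_ranges vs. the controller VXLAN pool range.
if ranges_overlap "1000:2000" "1500:3000"; then
  echo "ranges overlap"
else
  echo "ranges are disjoint"
fi
```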
CAUTION: If an OpenStack-based cloud platform has been configured on the controller before the upgrade, click Edit in the corresponding Actions column after the upgrade to enter the Edit OpenStack-Based Cloud Platform page. Confirm the parameters on the Security Settings tab, and then click OK.
7. Restart the neutron-server service.
[root@localhost ~]# docker restart neutron_server
Extended functions
(Optional.) Configuring the metadata service for network nodes
Configure the Metadata service for network nodes by referring to the plug-ins installation guide specific to the operating system.
· For the CentOS operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-ins Installation Guide for CentOS and Kylin.
· For the Ubuntu operating system, see H3C SeerEngine-DC Controller Converged OpenStack Plug-ins Installation Guide for Ubuntu.
Comparing and synchronizing resource information between the controller and cloud platform
NOTE: This function is not supported in the following scenarios:
· H3C CloudOS network-based overlay VXLAN environment.
· Third-party cloud platforms except for the Ericsson cloud and OpenStack.
· Extended resources (vpc-connection, bgpneighbor, exroute, trunk, and taas) except for those provided by H3C proprietary plug-ins and Ericsson cloud plug-ins.
Only OpenStack Ussuri, Train, Stein, Rocky, Queens, and Pike support this task.
To compare and synchronize resource information between the controller and cloud platform:
1. Execute the h3c-sdnplugin-extension compare --file [absolute path]filename.csv command to compare resource information between the controller and the cloud platform.
The comparison result file contains the following fields:
¡ Resource—Resource type.
¡ Name—Resource name.
¡ Id—Resource ID.
¡ Tenant_id—Tenant ID of the resource.
¡ Tenant_name—Tenant name of the resource.
¡ Status—Comparison result.
- lost—The controller has fewer resources than the cloud platform. You must add resources to the controller.
- different—The resources on the controller are different from those on the cloud platform. You must update the resources on the controller.
- surplus—The controller has more resources than the cloud platform. You must remove the excess resources from the controller.
NOTE: If a virtual router link is created on CloudOS, the comparison shows VPC data only on the controller side. This is normal.
2. Execute the h3c-sdnplugin-extension sync --file comparison-result-filename.csv command. If the comparison result file is in the /var/log/neutron/ path, enter the file name directly. If the comparison result file is in another path, enter the absolute file path.
If you have set the enable_security_group parameter to False, security groups and security groups bound to vPorts might still exist on the controller when you perform the comparison. To resolve this issue, perform the synchronization twice.
After the command is executed, the system displays resource statistics and prompts you to confirm the synchronization. The system starts the synchronization only after you confirm twice.
After the synchronization is complete, a synchronization result file /var/log/neutron/sync_all-time.csv is generated, where time indicates the synchronization start time.
CAUTION:
· Do not add or edit information in the synchronization result file.
· To avoid anomalies caused by misoperation, examine the comparison result file and resource statistics carefully.
FAQ
The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?
Configure HTTP proxy by performing the following steps:
1. Make sure the server or the virtual machine can access the HTTP proxy server.
2. At the CLI of the CentOS system, use the vi editor to open the yum.conf configuration file. If the yum.conf configuration file does not exist, this step creates the file.
[root@localhost ~]# vi /etc/yum.conf
3. Press I to switch to insert mode, and provide HTTP proxy information as follows:
¡ If the server does not require authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport
¡ If the server requires authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport
proxy_username = username
proxy_password = password
The following table describes the arguments in HTTP proxy information.

| Field | Description |
|---|---|
| username | Username for logging in to the proxy server, for example, sdn. |
| password | Password for logging in to the proxy server, for example, 123456. |
| yourproxyaddress | IP address of the proxy server, for example, 172.25.1.1. |
| proxyport | Port number of the proxy server, for example, 8080. |
Example:
proxy = http://172.25.1.1:8080
proxy_username = sdn
proxy_password = 123456
5. Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the yum.conf file.
After the plug-ins are installed successfully, what should I do if the controller fails to interconnect with the cloud platform?
Follow these steps to resolve the interconnection failure with the cloud platform:
1. Make sure you have strictly followed the procedure in this document to install and configure the plug-ins.
2. Contact the cloud platform vendor to determine whether a configuration issue exists on the cloud platform side.
3. If the issue persists, contact after-sales engineers.
Live migration of a VM to a specified destination host failed because of a service exception on the destination host. What should I do?
To resolve the issue:
1. View the VM state. If the live migration operation has been rolled back, the VM is in normal state, and services are not affected, you can perform live migration again after the destination host recovers.
2. Compare resource information to identify whether residual configuration exists on the destination host. If residual configuration exists, determine whether services will be affected.
¡ If services will not be affected, retain the residual configuration.
¡ If services will be affected, contact technical support to delete the residual configuration.
The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?
Use the following procedure to resolve the issue. An enp61s0f3 Ethernet network adapter is used as an example.
1. View detailed information about the Ethernet network adapter and record the value for the bus-info field.
sdn@ubuntu:~$ ethtool -i enp61s0f3
driver: i40e
version: 2.8.20-k
firmware-version: 3.33 0x80000f0c 1.1767.0
expansion-rom-version:
bus-info: 0000:3d:00.3
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
2. Use one of the following solutions:
¡ Solution 1. If this solution fails, use solution 2.
# Execute the following command:
sdn@ubuntu:~$ sudo ethtool --set-priv-flags enp61s0f3 disable-fw-lldp on
# Identify whether the value for the disable-fw-lldp field is on.
sdn@ubuntu:~$ ethtool --show-priv-flags enp61s0f3 | grep lldp
disable-fw-lldp : on
If the value is on, the network adapter can then receive LLDP messages. For this command to remain effective after a system restart, you must write it into the user-defined startup program file.
# Open the self-defined startup program file.
sdn@ubuntu:~$ sudo vi /etc/rc.local
# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.
ethtool --set-priv-flags enp61s0f3 disable-fw-lldp on
Make sure this command line is configured before the exit 0 line.
¡ Solution 2.
# Execute the echo "lldp stop" > /sys/kernel/debug/i40e/bus-info/command command. Replace bus-info with the recorded bus-info value for the network adapter, and add a backslash (\) before each colon (:).
sdn@ubuntu:~$ sudo -i
sdn@ubuntu:~$ echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command
The network adapter can receive LLDP messages after this command is executed. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.
# Open the self-defined startup program file.
sdn@ubuntu:~$ sudo vi /etc/rc.local
# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.
echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command
Make sure this command line is configured before the exit 0 line.
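The colon escaping that solution 2 requires can be generated with sed, as in this small sketch; the bus-info value is the one recorded earlier in the example.

```shell
# Escape each ":" in the recorded bus-info value so it can be pasted
# into the /sys/kernel/debug/i40e/<bus-info>/command path on a shell line.
BUS_INFO="0000:3d:00.3"
ESCAPED=$(printf '%s' "$BUS_INFO" | sed 's/:/\\:/g')
echo "$ESCAPED"
```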
VM instances fail to be created in a normal environment. What should I do?
To resolve the issue:
1. Identify whether the WebSocket client is installed. If the WebSocket client is not installed, identify whether the controller cluster has rebooted.
2. If the controller cluster has rebooted, restart the neutron-server service and then create the VM instances again.
As a best practice, install the WebSocket client and enable its connection with the controller RPC service to prevent data loss during a controller cluster reboot.
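To identify whether the WebSocket client is installed, you can try importing it, as in this sketch (websocket is the import name of the websocket-client package):

```shell
# Report whether the Python websocket-client package is importable.
if python3 -c "import websocket" >/dev/null 2>&1; then
  WS_STATUS="installed"
else
  WS_STATUS="missing"
fi
echo "websocket-client: $WS_STATUS"
```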
In what scenarios do I need to install the Nova patch
You need to install the Nova patch in the following scenarios:
· In the KVM host- or network-based overlay scenario, a VM is a member of the load balancer, and the load balancer is required to detect the member status.
· vCenter network-based overlay scenario.
For the patch configuration method, see H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for CentOS or H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for Ubuntu. For the patch installation procedure, see "Installing the controller Neutron plug-ins."
In what scenarios do I need to install the openvswitch-agent patch
The open source openvswitch-agent process on an OpenStack compute node might fail to deploy VLAN flow tables to open source vSwitches when the following conditions exist:
· The KVM technology is used on the node.
· The hierarchical port binding feature is configured on the node.
To resolve this issue, you must install the openvswitch-agent patch.
For the patch configuration method, see H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for CentOS or H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for Ubuntu. For the patch installation procedure, see "Installing the controller Neutron plug-ins."
I find that python3-websocket-client is not in the same path as neutron when I install the Neutron plug-in on open-source OpenStack Wallaby, Xena, or Yoga deployed with Kolla. What should I do?
This procedure is performed in the environment where the CentOS Stream 8 operating system runs and python3.6.8 is used.
1. Obtain the websocket and neutron paths in the container.
(neutron-server)[root@localhost /]# python
Python 3.6.8 (default, Jan 19 2022, 23:28:49)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-7)] on linux
>>> import websocket
>>> websocket.__path__
['/usr/lib/python3.6/site-packages/websocket']
>>> import neutron
>>> neutron.__path__
['/var/lib/kolla/venv/lib/python3.6/site-packages/neutron']
2. If the WebSocket path is different from the neutron path, perform the following procedure. If the paths are the same, websocket-client can be used.
3. Uninstall the python3-websocket-client installed with yum.
(neutron-server)[root@localhost /]# yum remove python3-websocket-client
4. View the pip3 path in the container. If the pip3 path is consistent with the neutron path, use pip3 to install python3-websocket-client.
Ensure Internet connectivity for python3-websocket-client installation.
(neutron-server)[root@localhost /]# pip3 -V
pip 21.3.1 from /var/lib/kolla/venv/lib/python3.6/site-packages/pip (python 3.6)
(neutron-server)[root@localhost /]# pip3 install websocket-client==0.56.0
5. If the websocket-client still cannot be used, contact technical support.
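The path comparison in steps 1 and 4 can be scripted. The helper below is hypothetical: it prints the installation tree of a Python module, and in the neutron-server container you would pass websocket and neutron. The stdlib modules used here are stand-ins for illustration only.

```shell
# Print the installation tree that contains a Python module
# (hypothetical helper; pass websocket and neutron in the container).
module_tree() {
  python3 - "$1" <<'EOF'
import importlib, os, sys

m = importlib.import_module(sys.argv[1])
d = os.path.dirname(m.__file__)
# For a package, step up out of the package directory itself.
if os.path.basename(d) == sys.argv[1]:
    d = os.path.dirname(d)
print(d)
EOF
}

# Stdlib modules live in one tree, so the two paths match here.
[ "$(module_tree json)" = "$(module_tree os)" ] && echo same || echo different
```

If the two trees differ in the container, websocket-client was installed outside the environment that holds neutron, and steps 3 and 4 apply.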
When neutron database link information is encrypted, converged plug-ins cannot inherit the portforwardings table because they cannot access the neutron database. What should I do?
1. Uninstall the Neutron plug-ins.
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller uninstall
2. Reinstall the Neutron plug-ins and specify the db_connection parameter when you start plug-in installation.
(neutron-server) [root@localhost ~]# h3c-sdnplugin controller install --db_connection mysql+pymysql://neutron:PASSWORD@controller/neutron
The value for the db_connection parameter is the value of the connection field in the [database] section of the neutron.conf file. Use the decrypted password for accessing the Neutron database.
connection = mysql+pymysql://neutron:PASSWORD@controller/neutron