H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for Ubuntu-E36xx-5W614


Contents

Overview
VCF Neutron plug-ins
Nova patch
Openvswitch-agent patch
DHCP fail-safe components
DHCP component
Metadata component
Preparing for the installation
Hardware requirements
Software requirements
Restrictions and guidelines
Installing OpenStack plug-ins
Installing the Python tools
Installing the VCF Neutron plug-ins
Obtaining the VCF Neutron plug-in installation package
Installing the VCF Neutron plug-ins
Verifying the installation
Parameters and fields
Removing the VCF Neutron plug-ins
Upgrading the VCF Neutron plug-ins
Installing the lldpad service
Installing the Nova patch
Prerequisites
Installation procedure
Verifying the installation
Removing the Nova patch
Upgrading the Nova patch
Installing the openvswitch-agent patch
Prerequisites
Installation procedure
Verifying the installation
Removing the openvswitch-agent patch
Upgrading the openvswitch-agent patch
Installing/removing/upgrading DHCP fail-safe components
Installing basic components
Obtaining the installation package of the DHCP fail-safe components
Installing DHCP fail-safe components on the network node
Removing DHCP fail-safe components
Upgrading DHCP fail-safe components
Parameters and fields
Configuring the open-source metadata service for network nodes
Comparing and synchronizing resource information between the controller and cloud platform
FAQ
The Python tools cannot be installed using the apt-get command when a proxy server is used for Internet access. What should I do?
After the tap-service and tap-flow data is updated on OpenStack, the image destination template settings of the controller fail to be synchronized automatically. What should I do?
The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

 


Overview

This document describes how to install OpenStack plug-ins including virtual converged framework (VCF) Neutron plug-ins, Nova patch, openvswitch-agent patch, and DHCP fail-safe components on Ubuntu.

VCF Neutron plug-ins

Neutron is an OpenStack service that manages all virtual networking infrastructure (VNI) in an OpenStack environment. It provides virtual network services to the devices managed by the OpenStack compute services.

The VCF Neutron plug-ins are developed for the SeerEngine-DC controller based on the OpenStack framework. The plug-ins obtain network configuration from OpenStack through REST APIs and synchronize it to the SeerEngine-DC controllers, including settings for the tenants' networks, subnets, routers, ports, FW, LB, and VPN. The different types of VCF Neutron plug-ins provide the following features for tenants:

·     VCF Neutron Core plug-in—Allows tenants to use basic network communication services, including networks, subnets, routers, and ports.

·     VCF Neutron L3_Routing plug-in—Allows tenants to forward traffic to each other at Layer 3.

·     VCF Neutron FWaaS plug-in—Allows tenants to create firewall services.

·     VCF Neutron LBaaS plug-in—Allows tenants to create LB services.

·     VCF Neutron VPNaaS plug-in—Allows tenants to create VPN services.

 

CAUTION:

To avoid service interruptions, do not modify the settings issued by the cloud platform on the controller (such as the virtual link layer network, vRouter, and vSubnet settings) after the plug-ins connect to the OpenStack cloud platform.

 

Nova patch

Nova is the OpenStack compute service. It provides virtual machine services for users, including creating, starting up, shutting down, and migrating virtual machines, and setting configuration information for the virtual machines, such as CPU and memory information.

In specific scenarios (such as a vCenter network overlay scenario), you must install the Nova patch to enable virtual machines created by OpenStack to access networks managed by SeerEngine-DC controllers.

Openvswitch-agent patch

The open source openvswitch-agent process on an OpenStack compute node might fail to deploy VLAN flow tables to open source vSwitches when the following conditions exist:

·     The kernel-based virtual machine (KVM) technology is used on the node.

·     The hierarchical port binding feature is configured on the node.

To resolve this issue, you must install the openvswitch-agent patch.

DHCP fail-safe components

DHCP component

In the network-based overlay scenario, only a controller is currently allowed to assign addresses to virtual machines or bare metal servers as a DHCP server. When the controller is disconnected from the southbound network, the virtual machines or bare metal servers will not be able to renew and reobtain addresses through DHCP. To resolve the issue, you can install a DHCP component on a network node to provide DHCP fail-safe in the network-based overlay scenario. When the controller loses connection to the southbound network, the virtual machines or bare metal servers can renew and reobtain addresses through the independently deployed DHCP server.

Metadata component

In the DHCP fail-safe scenario, you must install a Metadata component on the network node to provide the Metadata function for the DHCP component.


Preparing for the installation

Hardware requirements

Table 1 shows the hardware requirements for installing the VCF Neutron plug-ins, Nova patch, or openvswitch-agent patch on a server or virtual machine.

Table 1 Hardware requirements

CPU: single-core or multi-core

Memory size: 2 GB or more

Disk size: 5 GB or more

 

Software requirements

Table 2 shows the software requirements for installing the VCF Neutron plug-ins, Nova patch, or openvswitch-agent patch.

Table 2 Software requirements

Item: OpenStack (deployed on Ubuntu with APT)

Supported versions:

·     OpenStack Kilo 2015.1 on Ubuntu 14.04

·     OpenStack Liberty on Ubuntu 14.04

·     OpenStack Mitaka on Ubuntu 14.04

·     OpenStack Newton on Ubuntu 16.04

·     OpenStack Ocata on Ubuntu 16.04

·     OpenStack Pike on Ubuntu 16 and later

·     OpenStack Queens on Ubuntu 16 and later

·     OpenStack Rocky on Ubuntu 16 and later

·     OpenStack Train on Ubuntu 16 and later

 

IMPORTANT:

To install the OpenStack Pike plug-ins, the dnsmasq version must be 2.76. You can use the dnsmasq -v command to display the dnsmasq version number.
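For example, you can extract the version number from the command's output and compare it against the requirement. The banner line below is a captured sample and might differ slightly on your system:

```shell
# Sample banner as printed by "dnsmasq -v"; on a live node, capture it with:
#   banner=$(dnsmasq -v | head -n 1)
banner="Dnsmasq version 2.76  Copyright (c) 2000-2016 Simon Kelley"
# Pull out just the version field
ver=$(echo "$banner" | sed -n 's/^Dnsmasq version \([0-9.]*\).*/\1/p')
echo "$ver"
```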

 

IMPORTANT:

Before you install the OpenStack plug-ins, make sure the following requirements are met:

·     Your system has a reliable Internet connection.

·     OpenStack has been deployed correctly. Verify that the /etc/hosts file on all nodes contains the host name-to-IP address mappings, and that the OpenStack Neutron extension services (Neutron-FWaaS, Neutron-VPNaaS, or Neutron-LBaaS) have been deployed. For the deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.
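The /etc/hosts requirement can be spot-checked with a short script. This sketch uses a scratch hosts file and example node names; on a real deployment, grep /etc/hosts for each node's host name instead:

```shell
# Scratch hosts file standing in for /etc/hosts; IPs and names are examples.
cat > /tmp/hosts_demo <<'EOF'
127.0.0.1 localhost
192.168.10.11 controller
192.168.10.21 compute1
EOF
# Confirm each node name has a mapping (word match avoids partial hits)
for node in controller compute1; do
    grep -qw "$node" /tmp/hosts_demo && echo "mapping present: $node"
done
```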

 


Restrictions and guidelines

This document describes interoperability between SeerEngine-DC and one OpenStack platform that contains one controller node. In other scenarios, follow these restrictions and guidelines:

·     SeerEngine-DC interoperates with one OpenStack platform that contains multiple controller nodes.

Configure all controller nodes on the OpenStack platform in the same way a single controller is configured, and make sure the configuration on all controller nodes is the same.

·     SeerEngine-DC interoperates with multiple OpenStack platforms.

Make sure the cloud platform name (cloud_region_name) and VXLAN VNI for each OpenStack platform, and the host name for each node are unique across the OpenStack platforms.


Installing OpenStack plug-ins

The VCF Neutron plug-ins, Nova patch, and openvswitch-agent patch can be installed on different OpenStack versions. The installation package varies by OpenStack version. However, you can use the same procedure to install the Neutron plug-ins, Nova patch, or openvswitch-agent patch on different OpenStack versions. This document uses OpenStack Pike as an example.

Install the VCF Neutron plug-ins on an OpenStack controller node, the Nova patch and openvswitch-agent patch on an OpenStack compute node, and the DHCP fail-safe components on a network node. Before installation, you must install the Python tools on the associated node.

Installing the Python tools

Before installing the plug-ins, you must download and install the Python tools.

To download and install the Python tools:

1.     Update the software source list.

sdn@ubuntu:~$ sudo apt-get update

2.     Download and install the Python tools. If the system prompts a confirmation message, enter Y.

sdn@ubuntu:~$ sudo apt-get install python-pip python-setuptools

Installing the VCF Neutron plug-ins

Obtaining the VCF Neutron plug-in installation package

The VCF Neutron plug-ins are included in the VCF OpenStack package. Obtain the VCF OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.

Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.

Installing the VCF Neutron plug-ins

CAUTION:

The QoS feature will not operate correctly if you configure the database connection in configuration file neutron.conf as follows:

[database]

connection = mysql://…

This is an open source bug in OpenStack. To prevent this problem, configure the database connection as follows:

[database]

connection = mysql+pymysql://…

The three dots (…) represent the Neutron database connection information.
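The driver prefix can be corrected with a one-line sed edit. The sketch below runs against a scratch fragment with an example connection string; on a real node, edit /etc/neutron/neutron.conf instead:

```shell
# Scratch fragment standing in for the [database] section of neutron.conf;
# the credentials and host are placeholders.
cat > /tmp/neutron_db_demo.conf <<'EOF'
[database]
connection = mysql://neutron:secret@controller/neutron
EOF
# Switch the driver prefix from mysql:// to mysql+pymysql://
sed -i 's|connection = mysql://|connection = mysql+pymysql://|' /tmp/neutron_db_demo.conf
grep '^connection' /tmp/neutron_db_demo.conf
```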

 

Some parameters must be configured with the required values as described in "Parameters and fields."

To install the VCF Neutron plug-ins:

1.     Access the directory where the VCF OpenStack package (an .egg file) is saved, and install the package on the OpenStack controller node. The name of the VCF OpenStack package is SeerEngine_DC_PLUGIN-version1_version2-py2.7.egg. version1 represents the version of the package. version2 represents the version of OpenStack.

In the following example, the VCF OpenStack package is saved to the path /home/sdn.

sdn@ubuntu:~$ cd /home/sdn

sdn@ubuntu:~$ sudo easy_install SeerEngine_DC_PLUGIN-E3102_pike_2017.10-py2.7.egg
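The package naming convention described above can be split apart with shell parameter expansion. This is an illustrative sketch only, using the example filename from this step:

```shell
# Split a plug-in package filename into its plug-in version (version1)
# and OpenStack version (version2) fields.
pkg="SeerEngine_DC_PLUGIN-E3102_pike_2017.10-py2.7.egg"
body="${pkg#SeerEngine_DC_PLUGIN-}"   # strip the fixed prefix
body="${body%-py2.7.egg}"             # strip the fixed suffix
version1="${body%%_*}"                # plug-in version, e.g. E3102
version2="${body#*_}"                 # OpenStack version, e.g. pike_2017.10
echo "$version1 $version2"
```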

2.     Change the user group and permissions of the plug-in file to be consistent with those of the Neutron file.

sdn@ubuntu:~$ cd /usr/local/lib/python2.7/dist-packages

sdn@ubuntu:~$ chown -R --reference=/usr/lib/python2.7/dist-packages/neutron SeerEngine*

sdn@ubuntu:~$ chmod -R --reference=/usr/lib/python2.7/dist-packages/neutron SeerEngine*

sdn@ubuntu:~$ cd /usr/bin

sdn@ubuntu:~$ chown -R --reference=neutron-server h3c*

sdn@ubuntu:~$ chmod -R --reference=neutron-server h3c*

For Train plug-ins:

sdn@ubuntu:~$ cd /usr/local/lib/python3.6/site-packages

sdn@ubuntu:~$ chown -R --reference=/usr/lib/python3.6/site-packages/neutron SeerEngine*

sdn@ubuntu:~$ chmod -R --reference=/usr/lib/python3.6/site-packages/neutron SeerEngine*

sdn@ubuntu:~$ cd /usr/local/bin

sdn@ubuntu:~$ chown -R --reference=/usr/bin/neutron-server h3c*

sdn@ubuntu:~$ chmod -R --reference=/usr/bin/neutron-server h3c*

3.     Install the VCF Neutron plug-ins.

sdn@ubuntu:~$ sudo h3c-vcfplugin controller install

 

CAUTION:

Make sure no neutron.conf file exists in the /root directory when you execute the sudo h3c-vcfplugin controller install command. If such a file exists, delete it or move it to another directory.
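This pre-check can be scripted. The sketch below uses a scratch directory as a stand-in; on a real node, point conf_dir at /root:

```shell
# Warn if a stray neutron.conf would break "h3c-vcfplugin controller install".
# conf_dir is a scratch directory for this sketch; use /root on a real node.
conf_dir=$(mktemp -d)
if [ -e "$conf_dir/neutron.conf" ]; then
    echo "Move or delete $conf_dir/neutron.conf before installing"
else
    echo "OK: no stray neutron.conf in $conf_dir"
fi
```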

 

4.     Modify the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

sdn@ubuntu:~$ sudo vi /etc/neutron/neutron.conf

b.     Press I to switch to insert mode, and modify the neutron.conf configuration file. For information about the parameters, see "neutron.conf."

For Train plug-ins:

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_l3_router, qos, port_forwarding

 

 

NOTE:

The Train plug-ins do not support the firewall, lb, vpn, and vpc_connection parameters.

 

For Pike plug-ins:

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_l3_router,firewall,lbaasv2,vpnaas,qos

[service_providers]

service_provider=FIREWALL:H3C:networking_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:networking_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

service_provider=VPC_CONNECTION:H3C:networking_h3c.vpc_connection.h3c_vpc_connection_driver_match_plugin.H3CVpcConnectionMatchPluginDriver:default

 

IMPORTANT:

·     For the Pike plug-ins, when the load balancer supports multiple resource pools of the Context type, you must preprovision a resource pool named dmz or core on the controller, and then change the value of the service provider parameter to LOADBALANCERV2:DMZ:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDMZDriver:default or LOADBALANCERV2:CORE:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginCOREDriver:default accordingly.

·     If you set the value for vRouter interconnection to vpc_connection when configuring the service_plugins parameter, you must set the value of the corresponding service_provider parameter to VPC_CONNECTION:H3C:networking_h3c.l3_router.h3c_vpc_connection_driver.H3CVpcConnectionDriver:default.

 

For Queens and Rocky plug-ins:

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_l3_router,firewall,lbaasv2,vpnaas,qos,h3c_vpc_connection

[service_providers]

service_provider=FIREWALL:H3C:networking_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:networking_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

 

IMPORTANT:

For the Rocky plug-ins, if you do not intend to enable firewall agent services, change firewall to firewall_h3c in the value of the service_plugins parameter.

 

For Kilo 2015.1, Liberty, and Mitaka plug-ins (Load balancer V1 configured in OpenStack):

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas

[service_providers]

service_provider=FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCER:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default

service_provider=VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

For Kilo 2015.1, Liberty, Mitaka, Newton, and Ocata plug-ins (Load balancer V2 configured in OpenStack):

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,vpnaas

[service_providers]

service_provider=FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

 

IMPORTANT:

For the Kilo 2015.1 plug-ins, when the load balancer supports multiple resource pools of the Context type, you must preprovision a resource pool named dmz or core on the controller, and then change the value of the service provider parameter to LOADBALANCER:DMZ:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDMZDriver:default or LOADBALANCER:CORE:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginCOREDriver:default accordingly.

 

For Liberty, Mitaka, Newton, and Ocata plug-ins (QoS services configured in OpenStack):

(You can configure only load balancer V2 for Newton and Ocata plug-ins.)

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,vpnaas,qos

[service_providers]

service_provider=FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

[qos]

notification_drivers = message_queue,qos_h3c

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.

5.     Modify the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

sdn@ubuntu:~$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini

b.     Press I to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. For information about the parameters, see "ml2_conf.ini."

[ml2]

type_drivers = vxlan,vlan

tenant_network_types = vxlan,vlan

mechanism_drivers = ml2_h3c

extension_drivers = ml2_extension_h3c,qos,port_security

[ml2_type_vlan]

network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]

vni_ranges = 1:500

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.
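Because vxlan must be listed first in type_drivers in overlay scenarios (see "ml2_conf.ini" under "Parameters and fields"), a quick scripted sanity check can help. This sketch runs against a scratch copy rather than the live file:

```shell
# Scratch fragment standing in for /etc/neutron/plugins/ml2/ml2_conf.ini
cat > /tmp/ml2_demo.ini <<'EOF'
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
EOF
# Extract the first entry in type_drivers; it must be vxlan
first=$(grep '^type_drivers' /tmp/ml2_demo.ini | cut -d= -f2 | tr -d ' ' | cut -d, -f1)
echo "$first"
```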

6.     Modify the local_settings.py configuration file.

a.     Use the vi editor to open the local_settings.py configuration file.

sdn@ubuntu:~$ sudo vi /etc/openstack-dashboard/local_settings.py

b.     Press I to switch to insert mode. Set the values for the LB, FW, and VPN fields in the OPENSTACK_NEUTRON_NETWORK parameter to enable the associated configuration pages in OpenStack Web. For information about the fields, see "OPENSTACK_NEUTRON_NETWORK."

OPENSTACK_NEUTRON_NETWORK = {

    'enable_lb': True,

    'enable_firewall': True,

    'enable_quotas': True,

    'enable_vpn': True,

    # The profile_support option is used to detect if an external router can be

    # configured via the dashboard. When using specific plugins the

    # profile_support can be turned on if needed.

    'profile_support': None,

    #'profile_support': 'cisco',

}

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the local_settings.py file.

7.     Modify the ml2_conf_h3c.ini configuration file.

a.     Use the vi editor to open the ml2_conf_h3c.ini configuration file.

sdn@ubuntu:~$ sudo vi /etc/neutron/plugins/ml2/ml2_conf_h3c.ini

b.     Press I to switch to insert mode and modify the configuration file. For information about the parameters, see "ml2_conf_h3c.ini."

[VCFCONTROLLER]

url = http://127.0.0.1:10080

username = admin

password = admin@123

domain = sdn

timeout = 1800

retry = 10

vif_type = ovs

vnic_type = ovs

vhostuser_mode = server

hybrid_vnic = True

ip_mac_binding = True

denyflow_age = 300

white_list = False

binddefaultrouter = False

auto_create_tenant_to_vcfc = True

router_binding_public_vrf = False

enable_subnet_dhcp = False

dhcp_lease_time = 365

firewall_type = CGSR

fw_share_by_tenant = False

lb_type = CGSR

resource_mode = CORE_GATEWAY

resource_share_count = 1

auto_delete_tenant_to_vcfc = True

auto_create_resource = True

nfv_ha = True

vds_name = VDS1

enable_metadata = False

use_neutron_credential = False

enable_security_group = True

disable_internal_l3flow_offload = False

firewall_force_audit = True

enable_l3_router_rpc_notify = False

output_json_log = False

lb_enable_snat = False

empty_rule_action = deny

enable_l3_vxlan = False

l3_vni_ranges = 10000:10100

vendor_rpc_topic = VENDOR_PLUGIN

vsr_descriptor_name = VSR_IRF

vlb_descriptor_name = VLB_IRF

vfw_descriptor_name = VFW_IRF

hierarchical_port_binding_physicnets = ANY

hierarchical_port_binding_physicnets_prefix = physicnet

network_force_flat = True

directly_external = OFF

directly_external_suffix = DMZ

generate_vrf_based_on_router_name = False

enable_dhcp_hierarchical_port_binding = False

enable_multi_segments = False

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

router_route_type = None

enable_router_nat_without_firewall = False

cgsr_fw_context_limit = 10

force_vip_port_device_owner_none = False

enable_multi_gateways = False

tenant_gateway_name = None

tenant_gw_selection_strategy = match_first

enable_iam_auth = True

enable_firewall_metadata = False

enable_vcfc_rpc = False

vcfc_rpc_url = ws://99.0.82.55:8080

vcfc_rpc_ping_interval = 60

enable_binding_gateway_with_tenant = False

websocket_fragment_size = 102400

lb_member_slow_shutdown = False

qos_rx_limit_min = 0

enable_network_l3vni = False

lb_resource_mode = SP

neutron_black_list =

enable_lb_xff = False

cloud_identity_mode = disable

custom_cloud_name = openstack-1

deploy_network_resource_gateway = False

force_vlan_port_details_qvo = True

enable_firewall_object_group = False

enable_algorithm_upgrade = False

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the configuration file.

8.     If you have set the white_list parameter to True, add an authentication-free user to the controller.

-     Enter the IP address of the host where the Neutron server resides.

-     Specify the role as Admin.

9.     If you have set the binddefaultrouter parameter to True, perform the following steps to configure the default virtual router on the SeerEngine-DC controller.

a.     On the top navigation bar, click Tenants.

b.     From the navigation pane, select All Tenants.

c.     On the tenant list page, select the tenant named default.

d.     From the navigation pane, select Your Network > Virtual Router.

e.     Click Add.

f.     On the page that opens, enter defaultRouter as the name of the virtual router.

g.     On the Advanced Configuration tab, select Share public network VRF and then click Apply.

10.     If you have set the use_neutron_credential parameter to True, perform the following steps:

a.     Modify the neutron.conf configuration file.

# Use the vi editor to open the neutron.conf configuration file.

# Press I to switch to insert mode, and add the following configuration. For information about the parameters, see "neutron.conf."

[keystone_authtoken]

admin_user = neutron

admin_password = 123456

# Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.

b.     Add an admin user to the controller.

# Configure the username as neutron.

# Specify the role as Admin.

# Enter the password of the neutron user in OpenStack.

11.     Restart the neutron-server service.

sdn@ubuntu:~$ sudo service neutron-server restart

neutron-server stop/waiting

neutron-server start/running, process 4583

12.     Restart the h3c-agent service.

sdn@ubuntu:~$ sudo service h3c-agent restart

h3c-agent stop/waiting

h3c-agent start/running, process 4678

To avoid repeated deployment of firewall configuration after installation or upgrade of the VCF Neutron plug-ins on multiple nodes, make sure the h3c-agent service is running on only one of the nodes. To stop the h3c-agent service on the other nodes, perform the following steps on each of them:

a.     Execute the service h3c-agent status command to view the h3c-agent service status.

b.     Execute the service h3c-agent stop command to stop the h3c-agent service of the node.

c.     Execute the systemctl disable h3c-agent command to disable restart of the h3c-agent service upon system reboot.

Verifying the installation

# Verify that the VCF OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

sdn@ubuntu:~$ sudo pip freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10

For Train plug-ins:

sdn@ubuntu:~$ sudo pip3 freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-train-2021.9

# Verify that the neutron-server service is enabled. The service is enabled if its state is running.

sdn@ubuntu:~$ sudo service neutron-server status

neutron-server start/running, process 1849

# Verify that the h3c-agent service is enabled. The service is enabled if its state is running.

sdn@ubuntu:~$ sudo service h3c-agent status

h3c-agent start/running, process 4678

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

neutron.conf

Parameter

Required value

Description

core_plugin

ml2

Used for loading the core plug-in ml2 to OpenStack.

service_plugins

h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas

Used for loading the extension plug-ins to OpenStack.

For the Kilo, Liberty, Mitaka, Pike, and Queens plug-ins, if deployment of firewall policies and rules takes a long time, you can change firewall in the value to fwaas_h3c.

service_provider

·     FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

·     LOADBALANCER:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default

·     VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

Directory where the extension plug-ins are saved.

notification_drivers

message_queue,qos_h3c

Name of the QoS notification driver.

admin_user

N/A

Admin username for Keystone authentication in OpenStack, for example, neutron.

admin_password

N/A

Admin password for Keystone authentication in OpenStack, for example, 123456.

 

ml2_conf.ini

Parameter

Required value

Description

type_drivers

vxlan,vlan

Driver type.

vxlan must be specified as the first driver type.

tenant_network_types

vxlan,vlan

Type of the networks to which the tenants belong. For intranet, only vxlan is available. For extranet, only vlan is available.

·     In the host overlay scenario and network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type.

·     In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type.

mechanism_drivers

ml2_h3c

Name of the ml2 driver.

To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch, ml2_h3c.

To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch.

extension_drivers

ml2_extension_h3c,qos

Names of the ml2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify qos for this parameter. If port security is not enabled on OpenStack, you do not need to specify port_security for this parameter. (The Kilo 2015.1, Liberty 2015.2, and Ocata 2017.1 plug-ins do not support the port_security value.)

network_vlan_ranges

N/A

Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999.

Kilo 2015.1 plug-ins do not support the QoS driver.

vni_ranges

N/A

Value range for the VXLAN ID of the intranet, for example, 1:500.

 

OPENSTACK_NEUTRON_NETWORK

Field

Description

enable_lb

Whether to enable or disable the LB configuration page.

·     True—Enable.

·     False—Disable.

enable_firewall

Whether to enable or disable the FW configuration page.

·     True—Enable.

·     False—Disable.

enable_vpn

Whether to enable or disable the VPN configuration page.

·     True—Enable.

·     False—Disable.

 

ml2_conf_h3c.ini

Parameter

Description

url

URL address for logging in to SNA Center or the Unified Platform, for example, http://127.0.0.1:10080 or https://ip_address:10443. The URL for logging in to the Unified Platform is http://ip_address:30000.

username

Username for logging in to SNA Center or the Unified Platform, for example, admin. You do not need to configure a username if the use_neutron_credential parameter is set to True.

password

Password for logging in to SNA Center or the Unified Platform, for example, admin@123. You do not need to configure a password if the use_neutron_credential parameter is set to True. To use character "$" in the password, enter a backslash (\) before the character.

domain

Name of the domain where the controller resides, for example, sdn.

timeout

The amount of time, in seconds, that the Neutron server waits for a response from the controller, for example, 1800 seconds.

As a best practice, set the waiting time greater than or equal to 1800 seconds.

retry

Maximum number of connection requests that the Neutron server sends to the controller, for example, 10.

vif_type

Default vNIC type:

·     ovs

·     vhostuser (applied to the OVS DPDK solution)

You can set the vhostuser_mode parameter when the value of this parameter is vhostuser.

Only the Mitaka, Newton, and Pike plug-ins support this parameter.

vnic_type

Default vNIC type:

·     ovs

·     vhostuser

Only the plug-ins earlier than Ocata support this parameter. For the Mitaka and Newton plug-ins, you must set the same value as the vif_type parameter.

vhostuser_mode

Default DPDK vHost-user mode:

·     server

·     client

The default value is server.

This setting takes effect only when the value of the vif_type parameter is vhostuser.

hybrid_vnic

Whether to enable or disable the feature of mapping OpenStack VLAN to SeerEngine-DC VXLAN.

·     True—Enable.

·     False—Disable.

ip_mac_binding

Whether to enable or disable IP-MAC binding.

·     True—Enable.

·     False—Disable.

denyflow_age

Anti-spoofing flow table aging time for the virtual distributed switch (VDS), an integer in the range of 1 to 3600 seconds, for example, 300 seconds.

white_list

Whether to enable or disable the authentication-free user feature on OpenStack.

·     True—Enable.

·     False—Disable.

binddefaultrouter

Whether to enable or disable networking binding to the default virtual router on the SeerEngine-DC controller.

·     True—Enable.

·     False—Disable.

This parameter is obsolete and is retained only for version upgrade.

auto_create_tenant_to_vcfc

Whether to enable or disable the feature of automatically creating tenants on the controller.

·     True—Enable.

·     False—Disable.

router_binding_public_vrf

Whether to use the public network VRF for creating a vRouter.

·     True—Use.

·     False—Do not use.

Do not set the value to True for a weak control network.

enable_subnet_dhcp

Whether to disable or enable DHCP for creating a vSubnet.

·     True—Enable.

·     False—Disable.

dhcp_lease_time

Valid time for vSubnet IP addresses obtained from the DHCP address pool in days, for example, 365 days.

firewall_type

Type of the firewalls created on the controller:

·     CGSR—Context-based gateway service type firewall, each using an independent context. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE—Context-based gateway service type firewall, all using the same context even if they belong to different tenants. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE_BY_COUNT—Context-based gateway service type firewall, all using the same context when the number of contexts reaches the threshold set by the cgsr_fw_context_limit parameter. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY. Only the Pike plug-ins support this firewall type.

·     NFV_CGSR—VNF-based gateway service type firewall, each using an independent VNF. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

fw_share_by_tenant

Whether to enable exclusive use of a gateway service type firewall context by a single tenant and allow the context to be shared by service resources of the tenant when the firewall type is CGSR_SHARE.

Only the Pike plug-ins support this parameter.

lb_type

Type of the load balancers created on the controller.

·     CGSR—Gateway service type load balancer on a context. This type of load balancer is available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, CGSR type load balancers that belong to one tenant use the same context, and CGSR type load balancers that belong to different tenants use different contexts. When the value of the lb_resource_mode parameter is MP, CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same context, and CGSR type load balancers that belong to different tenants use different contexts.

·     CGSR_SHARE—Gateway service type load balancer on a context. This type of load balancer is available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, all CGSR_SHARE type load balancers use the same context even if they belong to different tenants. When the value of the lb_resource_mode parameter is MP, CGSR_SHARE type load balancers that belong to different tenants and are bound to the same gateway use the same context.

·     NFV_CGSR—Gateway service type load balancer on a VNF. This type of load balancer is available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, NFV_CGSR type load balancers that belong to one tenant use the same VNF, and NFV_CGSR type load balancers that belong to different tenants use different VNFs. When the value of the lb_resource_mode parameter is MP, NFV_CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same VNF, and NFV_CGSR type load balancers that belong to different tenants use different VNFs.

resource_mode

Type of the resources created on the controller.

·     CORE_GATEWAY—Gateway resources.

·     NFV—VNF resources. This value is obsolete.

resource_share_count

Number of resources that can share a resource node. The value is in the range of 1 to 65535. The default value is 1, indicating that no resources can share a resource node.

The Queens plug-ins do not support this parameter.
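The sharing semantics of resource_share_count can be pictured with a minimal sketch (an illustration of the parameter's meaning only, not the controller's actual allocation logic; the function name is made up):

```python
def nodes_needed(resource_count, resource_share_count=1):
    """Number of resource nodes needed when up to
    resource_share_count resources may share one node."""
    if not 1 <= resource_share_count <= 65535:
        raise ValueError("resource_share_count must be in 1..65535")
    # Ceiling division: the default of 1 gives every resource its own node.
    return -(-resource_count // resource_share_count)
```

For example, 10 resources need 10 nodes with the default value of 1, but only 3 nodes when resource_share_count is 4.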

auto_delete_tenant_to_vcfc

Whether to enable or disable the feature of automatically removing tenants from the controller.

·     True—Enable.

·     False—Disable.

auto_create_resource

Whether to enable or disable the feature of automatically creating resources.

·     True—Enable.

·     False—Disable.

nfv_ha

Whether to configure the NFV and NFV_SHARE resources to support stack.

·     True—Support.

·     False—Do not support.

vds_name

Name of the VDS, for example, VDS1.

After deleting a VDS and recreating a VDS with the same name, you must perform the following tasks on the controller node for the new VDS to take effect:

·     Reboot the neutron-server service.

·     Reboot the h3c-agent service.

enable_metadata

Whether to enable or disable metadata for OpenStack.

·     True—Enable.

·     False—Disable.

If you enable this feature, you must set the enable_l3_router_rpc_notify parameter to True.

use_neutron_credential

Whether to use the OpenStack Neutron username and password to communicate with the controller.

·     True—Use.

·     False—Do not use.

enable_security_group

Whether to enable or disable the feature of deploying security group rules to the controller.

·     True—Enable.

·     False—Disable.

disable_internal_l3flow_offload

Whether to enable or disable intra-network traffic routing through the gateway.

·     True—Disable.

·     False—Enable.

firewall_force_audit

Whether to audit firewall policies synchronized to the controller by OpenStack. The default value is False for the Kilo 2015.1 plug-ins and True for plug-ins of other versions.

·     True—Audits firewall policies synchronized to the controller by OpenStack. The auditing state of the synchronized policies on the controller is True (audited).

·     False—Does not audit firewall policies synchronized to the controller by OpenStack. The synchronized policies on the controller retain their previous auditing state.

enable_l3_router_rpc_notify

Whether to enable or disable the feature of sending Layer 3 routing events through RPC.

·     True—Enable.

·     False—Disable.

output_json_log

Whether to output REST API messages to the OpenStack operating logs in JSON format for communication between the VCF Neutron plug-ins and controller.

·     True—Enable.

·     False—Disable.

lb_enable_snat

Whether to enable or disable Source Network Address Translation (SNAT) for load balancers on the controller.

·     True—Enable.

·     False—Disable.

To deploy OpenStack plug-ins on CloudOS, set the value of this parameter to False.

empty_rule_action

Set the action for security policies that do not contain any ACL rules on the controller.

·     permit

·     deny

enable_l3_vxlan

Whether to enable or disable the feature of using Layer 3 VXLAN IDs (L3VNIs) to mark Layer 3 flows between vRouters on the controller.

·     True—Enable.

·     False—Disable.

By default, this feature is disabled.

l3_vni_ranges

Set the value range for the L3VNI, for example, 10000:10100. If the controller interoperates with multiple OpenStack platforms, make sure the L3VNI value range for each OpenStack platform is unique.
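For reference, a few of the parameters described in this table might be set together in the appropriate section of the plug-in configuration file like this (the values below are placeholders, not a tested configuration):

```ini
enable_l3_vxlan = True
l3_vni_ranges = 10000:10100
vds_name = VDS1
dhcp_lease_time = 365
```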

vendor_rpc_topic

RPC topic of the vendor. This parameter is required when the vendor needs to obtain Neutron data from the VCF Neutron plug-ins. The available values are as follows:

·     VENDOR_PLUGIN—Default value, which means that the parameter does not take effect.

·     DP_PLUGIN—RPC topic of DPtech.

The value of this parameter must be negotiated by the vendor and H3C.

vsr_descriptor_name

VNF descriptor name of the VNF virtual gateway resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

vlb_descriptor_name

VNF descriptor name of the virtual load balancing resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the lb_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

vfw_descriptor_name

VNF descriptor name of the virtual firewall resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the firewall_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

hierarchical_port_binding_physicnets

Policy for OpenStack to select a physical VLAN when performing hierarchical port binding. The default value is ANY.

·     ANY—A VLAN is selected from all physical VLANs for VLAN ID assignment.

·     PREFIX—A VLAN is selected from all physical VLANs matching the specified prefix for VLAN ID assignment.

Only the Mitaka, Newton, Ocata, Pike, Queens, Rocky, and Train plug-ins support this parameter.

hierarchical_port_binding_physicnets_prefix

Prefix for matching physical VLANs. The default value is physicnet. This parameter is available only when you set the value of the hierarchical_port_binding_physicnets parameter to PREFIX.

Only the Mitaka, Newton, Ocata, Pike, Queens, Rocky, and Train plug-ins support this parameter.

network_force_flat

Whether to enable forcible conversion of an external network to a flat network. The value can only be set to True if the external network is a VXLAN.

directly_external

Whether traffic destined for the external network is directly forwarded by the gateway. The available values are as follows:

·     ANY—Traffic destined for the external network is directly forwarded by the gateway to the external network.

·     OFF—Traffic destined for the external network is forwarded by the gateway to the firewall and then to the external network.

·     SUFFIX—Determine the forwarding method for the traffic destined for the external network by matching the traffic against the vRouter name suffix (set by the directly_external_suffix parameter).

¡     If the traffic destined for the external network matches the suffix, it is directly forwarded by the gateway to the external network.

¡     If the traffic destined for the external network does not match the suffix, it is forwarded by the gateway to the firewall and then to the external network.

The default value is OFF. You can set the value to ANY only when the external network is a VXLAN and the value of network_force_flat is False.

directly_external_suffix

vRouter name suffix (DMZ for example). This parameter is available only when you set the value of the directly_external parameter to SUFFIX. As a best practice, do not change the vRouter name after this parameter is configured.

Only the Pike, Queens, Rocky, and Train plug-ins support this parameter.
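The three-way decision described for the directly_external parameter can be summarized in a short sketch (illustrative only; the function and return values are made up, not plug-in code):

```python
def external_path(mode, router_name, suffix=None):
    """Which device forwards traffic destined for the external
    network, per the directly_external setting."""
    if mode == "ANY":
        return "gateway"        # directly forwarded to the external network
    if mode == "OFF":
        return "firewall"       # gateway -> firewall -> external network
    if mode == "SUFFIX":
        # Match against the vRouter name suffix (directly_external_suffix).
        if suffix and router_name.endswith(suffix):
            return "gateway"
        return "firewall"
    raise ValueError("mode must be ANY, OFF, or SUFFIX")
```

With suffix DMZ, for example, traffic from a vRouter named router-DMZ goes directly to the external network, while traffic from router-1 passes through the firewall first.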

generate_vrf_based_on_router_name

Whether to use the vRouter names configured on OpenStack as the VRF names on the controller.

·     True—Use the names. Make sure each vRouter name configured on OpenStack is a case-sensitive string of 1 to 31 characters that contain only letters and digits.

·     False—Do not use the names.

By default, the vRouter names configured on OpenStack are not used as the VRF names on the controller.

enable_dhcp_hierarchical_port_binding

Whether to enable DHCP hierarchical port binding. The default value is False.

·     True—Enable.

·     False—Disable.

Only the Pike, Mitaka, Newton, Rocky, and Train plug-ins support this parameter.

enable_multi_segments

Whether to enable multiple outbound interfaces, allowing the vRouter to access the external network from multiple outbound interfaces. The default value is False.

To enable multiple outbound interfaces, configure the following settings:

·     Set the value of this parameter to True.

·     Set the value of the network_force_flat parameter to False.

·     Access the /etc/neutron/plugins/ml2/ml2_conf.ini file on the control node and specify the controller's gateway name for the network_vlan_ranges parameter.

Only the Pike plug-ins support this parameter.

enable_https

Whether to enable HTTPS bidirectional authentication. The default value is False.

·     True—Enable.

·     False—Disable.

Only the Mitaka, Newton, and Pike plug-ins support this parameter.

neutron_plugin_ca_file

Save location for the CA certificate of the controller. As a best practice, save the CA certificate in the /usr/share/neutron directory.

Only the Mitaka, Newton, and Pike plug-ins support this parameter.

neutron_plugin_cert_file

Save location for the Cert certificate of the controller. As a best practice, save the Cert certificate in the /usr/share/neutron directory.

Only the Mitaka, Newton, and Pike plug-ins support this parameter.

neutron_plugin_key_file

Save location for the Key certificate of the controller. As a best practice, save the Key certificate in the /usr/share/neutron directory.

Only the Mitaka, Newton, and Pike plug-ins support this parameter.

router_route_type

Route entry type:

·     None—Standard route.

·     401—Extended route with the IP address of an online vPort as the next hop.

·     402—Extended route with the IP address of an offline vPort as the next hop.

The default value is None.

Only the Pike plug-ins support this parameter.

enable_router_nat_without_firewall

Whether to enable NAT when no firewall is configured for the tenant.

·     True—Enable NAT when no firewall is configured. This setting automatically creates default firewall resources to implement NAT if the vRouter has been bound to an external network.

·     False—Do not enable NAT when no firewall is configured.

The default value is False.

Only the Kilo and Pike plug-ins support this parameter.

cgsr_fw_context_limit

Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context.

This parameter takes effect only when the value of the firewall_type parameter is CGSR_SHARE_BY_COUNT.

Only the Pike plug-ins support this parameter.

force_vip_port_device_owner_none

Whether to support the LB vport device_owner field.

·     False—Support the LB vport device_owner field. This setting is applicable to an LB tight coupling solution.

·     True—Do not support the LB vport device_owner field. This setting is applicable to an LB loose coupling solution.

The default value is False.

Only the Pike plug-ins support this parameter.

enable_multi_gateways

Whether to enable the multi-gateway mode for the tenant.

·     True—Enable the multi-gateway mode for the tenant. In an OpenStack environment without the Segments configuration, this setting enables different vRouters to access the external network over different gateways.

·     False—Do not enable the multi-gateway mode for the tenant.

The default value is False.

Only the Pike, Queens, and Rocky plug-ins support this parameter.

tenant_gateway_name

Name of the gateway to which the tenant is bound. The default value is None.

It takes effect only when the value of the tenant_gw_selection_strategy parameter is match_gateway_name. You must specify the name of an existing gateway on the controller side.

Only the Pike, Queens, Rocky, and Train plug-ins support this parameter.

tenant_gw_selection_strategy

Gateway selection strategy for the tenant.

·     match_first—Select the first gateway.

·     match_gateway_name—Take effect together with the tenant_gateway_name parameter.

Only the Pike, Queens, Rocky, and Train plug-ins support this parameter.

enable_iam_auth

Whether to enable IAM interface authentication.

·     True—Enable.

·     False—Disable.

When connecting to SNA Center, you can set the value to True to use the IAM interface for authentication.

The default value is False.

Only the Mitaka, Newton, Pike, Queens, and Rocky plug-ins support this parameter.

enable_firewall_metadata

Whether to allow the CloudOS platform to issue firewall-related fields such as the resource pool name to the controller.

This parameter is used only for communication with the CloudOS platform.

Only the Pike plug-ins support this parameter.

enable_vcfc_rpc

Whether to enable RPC connection between the plug-ins and the controller in the DHCP fail-safe scenario.

The default value is False.

Only the Pike plug-ins support this parameter.

vcfc_rpc_url

RPC interface URL of the controller. Only a WebSocket type interface is supported.

The default value is ws://127.0.0.1:1080.

Only the Pike plug-ins support this parameter.

vcfc_rpc_ping_interval

Interval at which an RPC ICMP echo request message is sent to the controller, in seconds.

The default value is 60 seconds.

Only the Pike plug-ins support this parameter.

enable_binding_gateway_with_tenant

Whether to enable automatic binding of tenants to the gateway. The default value is False.

When a network is created for a project on the OpenStack cloud platform for the first time, the corresponding tenant is automatically bound to the gateway if you set the value to True. When a vRouter is created for a project on the OpenStack cloud platform for the first time, the corresponding tenant is automatically bound to the gateway regardless of whether the value of this parameter is True or False.

Only the Pike plug-ins support this parameter.

websocket_fragment_size

Size of a WebSocket fragment sent from the plug-in to the controller in the DHCP fail-safe scenario, in bytes.

The value is an integer equal to or larger than 1024. The default value is 102400. If the value is 1024, the message is not fragmented.

Only the Pike plug-ins support this parameter.
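The size semantics of websocket_fragment_size can be pictured with a minimal sketch (an illustration only, not the plug-in's actual WebSocket code; the function name is made up):

```python
def split_message(payload, fragment_size=102400):
    """Split a message into fragments of at most fragment_size
    bytes; fragment_size must be an integer >= 1024."""
    if fragment_size < 1024:
        raise ValueError("fragment_size must be >= 1024")
    return [payload[i:i + fragment_size]
            for i in range(0, len(payload), fragment_size)]
```

With the default value of 102400, a 250000-byte message is sent as three fragments: two of 102400 bytes and one of 45200 bytes.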

lb_member_slow_shutdown

Whether to enable slow shutdown when creating an LB pool.

The default value is False.

Only the Pike plug-ins support this parameter.

qos_rx_limit_min

Minimum inbound bandwidth, in kbps. If the QoS minimum inbound bandwidth configured on OpenStack is smaller than this parameter value, this parameter value takes effect.

Only the Kilo 2015.1 plug-ins support this parameter.

enable_network_l3vni

Whether to issue the L3VNIs when creating an external network. This parameter is valid only when the value of the enable_l3_vxlan parameter is True.

The default value is False.

Only the Pike plug-ins support this parameter.

lb_resource_mode

Resource pool mode of LB service resources. When the value is SP, all gateways share one LB resource pool. When the value is MP, the system creates an LB resource pool for each gateway.

The default value is SP.

Only the Pike plug-ins support this parameter.

neutron_black_list

Neutron denylist. Only value flat is supported. No default value exists.

When the value is flat, the SDN ML2 plug-in will not issue flat-type internal network resources to the controller, and you are not allowed to bind or unbind router interfaces from flat-type internal subnets.

Only the Pike plug-ins support this parameter.

enable_lb_xff

Whether to enable XFF transparent transmission for LB listeners.

·     True—Enable.

·     False—Disable.

The default value is False.

When the value is True and the listener protocol is HTTP or TERMINATED_HTTPS, a newly created listener is enabled with XFF transparent transmission by default, and the client's IP address is transparently transmitted to the server in the X-Forwarded-For field of the HTTP header.

Only the Pike plug-ins support this parameter.

To deploy OpenStack plug-ins on CloudOS, set the value of this parameter to False.

cloud_identity_mode

Whether to enable the multicloud function.

·     disable—Do not carry the cloud_region_name field when sending a request to the controller.

·     region—Carry the cloud_region_name field when sending a request to the controller. If multiple cloud platforms are connected to the controller, configure a different region name for each cloud platform.

·     custom—Carry the cloud_region_name field when sending a request to the controller. The value of the cloud_region_name field is that of the custom_cloud_name parameter.

The default value is disable.

Only the Newton, Queens, Rocky, and Train plug-ins support this parameter.
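The effect of cloud_identity_mode on requests sent to the controller can be sketched as follows (illustrative only; the real request structure is internal to the plug-ins, and the function name is made up):

```python
def request_fields(mode, region_name=None, custom_cloud_name="openstack-1"):
    """Extra fields carried in a request to the controller
    for a given cloud_identity_mode setting."""
    if mode == "disable":
        return {}                                   # no cloud_region_name field
    if mode == "region":
        return {"cloud_region_name": region_name}   # per-platform region name
    if mode == "custom":
        # The field takes the value of the custom_cloud_name parameter.
        return {"cloud_region_name": custom_cloud_name}
    raise ValueError("mode must be disable, region, or custom")
```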

custom_cloud_name

Cloud platform name. The default value is openstack-1. If multiple cloud platforms are connected to the controller, configure a different name for each cloud platform.

This parameter takes effect only when the value of the cloud_identity_mode parameter is custom.

Only the Newton, Queens, Rocky, and Train plug-ins support this parameter.

deploy_network_resource_gateway

Whether to carry the gateway_list field for the external network sources.

The default value is False.

When the value of this field is True, you must set the value of the network_force_flat field to False.

Only the Pike plug-ins support this parameter.

force_vlan_port_details_qvo

Whether to forcibly create a qvo-type vPort on the OVS bridge after a VM in a VLAN network comes online. If the value is True, the system forcibly creates a qvo-type vPort. If the value is False, the system automatically creates a tap-type or qvo-type vPort as configured. As a best practice, set the value to False for interoperability with the cloud platform for the first time.

Only the Mitaka, Newton, Pike, Queens, Rocky, and Train plug-ins support this parameter.

enable_firewall_object_group

Whether to enable firewall object groups for the plug-ins.

The default value is False. If the value is True, the OpenStack platform can create firewall object groups through the plug-ins.

Only the Rocky plug-ins support this parameter.

For this feature to take effect, you must ensure its compatibility with the OpenStack Platform. For the compatibility configuration, contact the technical support.

enable_algorithm_upgrade

Whether to enable compatibility between host ID calculation methods when the system is upgraded from a version that uses the Kilo plug-ins to a version that uses Pike plug-ins. This parameter is used only for a backward-compatible upgrade.

 

Removing the VCF Neutron plug-ins

To remove the VCF Neutron plug-ins, first uninstall the plug-ins and then remove the VCF OpenStack package.

To remove the VCF Neutron plug-ins:

1.     Remove the VCF Neutron plug-ins.

sdn@ubuntu:~$ sudo h3c-vcfplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

For the VCF Neutron plug-ins of Kilo 2015.1, Liberty, Mitaka, Newton, or Ocata version, you are prompted whether to remove the database when removing the plug-ins.

¡     To remove the database, enter y. Removing the plug-ins will simultaneously remove the connected database. Before removing the plug-ins, remove SERVICE_CHAIN type firewalls and GATEWAY or SERVICE_CHAIN type load balancers (if any) from OpenStack.

¡     To not remove the database, enter n. When you install the VCF Neutron plug-ins of a new version, the plug-ins restore the configuration from the configuration file in the original database.

2.     Remove the VCF OpenStack package.

sdn@ubuntu:~$ sudo pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-vcfplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

For Train plug-ins (CentOS 8 operating system):

sdn@ubuntu:~$ sudo pip3 uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-train-2021.9:

  /usr/bin/h3c-agent

  /usr/bin/h3c-sdnplugin

  /usr/local/lib/python3.6/site-packages/SeerEngine_DC_PLUGIN-E3603P01_train_2021.9-py3.6.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-train-2021.9

Upgrading the VCF Neutron plug-ins

CAUTION:

·     Services might be interrupted during the VCF Neutron plug-ins upgrade procedure.

·     The default parameter settings for VCF Neutron plug-ins might vary by OpenStack version (Kilo 2015.1, Liberty, Mitaka, and Ocata). Modify the default parameter settings for VCF Neutron plug-ins when upgrading the OpenStack version to ensure that the plug-ins have the same configurations before and after the upgrade.

 

IMPORTANT:

To avoid repeated deployment of firewall configuration after reinstallation of VCF Neutron plugins on multiple nodes, make sure the h3c-agent service is available only for one of the nodes. For how to stop the h3c-agent service on the other nodes, see "Installing the VCF Neutron plug-ins."

 

To upgrade the VCF Neutron plug-ins, first remove the current version and then install the new version. For information about installing the VCF Neutron plug-ins, see "Installing the VCF Neutron plug-ins." For information about removing the VCF Neutron plug-ins, see "Removing the VCF Neutron plug-ins."

Installing the lldpad service

In the KVM network-based overlay scenario, you must install the lldpad service on each compute node.

1.     Install and start the lldpad service on the compute node.

sdn@ubuntu:~$ sudo apt-get install lldpad

sdn@ubuntu:~$ sudo systemctl enable lldpad

sdn@ubuntu:~$ sudo service lldpad start

2.     Enable the uplink interface to send LLDP messages. eno2 is the uplink interface in this example.

sdn@ubuntu:~$ sudo lldptool set-lldp -i eno2 adminStatus=rxtx

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V sysName enableTx=yes

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V portDesc enableTx=yes

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V sysDesc enableTx=yes

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V sysCap enableTx=yes

Installing the Nova patch

IMPORTANT:

The Train plug-ins do not support the Nova patch.

 

You must install the Nova patch only in the following scenarios:

·     In KVM host overlay or network overlay scenario, virtual machines are load balancer members, and the load balancer must be aware of the member status.

·     vCenter network overlay scenario.

Prerequisites

The Nova patch is included in the VCF OpenStack package. Perform the following steps to download the VCF OpenStack package from the H3C website:

1.     In the Web browser address bar, enter http://www.h3c.com/cn/Software_Download. Select SDN > H3C Virtual Converged Framework Controller, and download the VCF OpenStack package of the required version.

2.     Copy the VCF OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the VCF OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

Based on your network environment, perform either step 3 or step 4.

To install the Nova patch on the OpenStack compute node:

1.     Change the working directory to where the VCF OpenStack package (an .egg file) is saved, and install the package on the OpenStack compute node. The name of the VCF OpenStack package is SeerEngine_DC_PLUGIN-version1_version2-py2.7.egg. version1 represents the version of the package. version2 represents the version of OpenStack.

In this example, the VCF OpenStack package is saved to the path /home/compute.

sdn@ubuntu:~$ cd /home/compute

sdn@ubuntu:~$ sudo easy_install SeerEngine_DC_PLUGIN-E3102_pike_2017.10-py2.7.egg

2.     Install the Nova patch.

sdn@ubuntu:~$ sudo h3c-vcfplugin compute install

Install the nova patch

 

modifying:

/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py

modify success, backuped at: /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py.h3c_bak

 

 

NOTE:

The contents below the modifying: line indicate the modified open source Nova file and the backup path of the file before modification.

 

3.     (Optional.) If the networking type of the compute node is host-based overlay, perform the following steps:

a.     Stop the neutron-openvswitch-agent service on the compute node and disable the system from starting the service at startup.

sdn@ubuntu:~$ service neutron-openvswitch-agent stop

sdn@ubuntu:~$ systemctl disable neutron-openvswitch-agent.service

b.     Execute the neutron agent-list command on the controller node to identify whether the agent of the compute node exists in the database.

-     If the agent of the compute node does not exist in the database, go to the next step.

-     If the agent of the compute node exists in the database, execute the neutron agent-delete id command to delete the agent. The id argument represents the agent ID.

sdn@ubuntu:~$ neutron agent-list

| id                                   | agent_type      |host     |

| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3

sdn@ubuntu:~$ neutron agent-delete 25c3d3ac-5158-4123-b505-ed619b741a52

Deleted agent: 25c3d3ac-5158-4123-b505-ed619b741a52

c.     Use the vi editor on the compute node to open the nova.conf configuration file.

sdn@ubuntu:~$ vi /etc/nova/nova.conf

d.     Press I to switch to insert mode, and set the parameters in the nova.conf configuration file as follows. For descriptions of the parameters, see Table 3.

If the hypervisor type of the compute node is KVM, modify the nova.conf configuration file as follows:

[s1020v]

s1020v = False

member_status = True

[neutron]

ovs_bridge = vds1-br

If the hypervisor type of the compute node is VMware vCenter, modify the nova.conf configuration file as follows:

[DEFAULT]

compute_driver = vmwareapi.VMwareVCDriver

[vmware]

host_ip = 127.0.0.1

host_username = sdn

host_password = skyline123

cluster_name = vcenter

insecure = True

[s1020v]

s1020v = False

vds = VDS2

e.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the nova.conf file.
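As an alternative to editing with vi, the KVM settings from step d can be applied non-interactively. The sketch below uses Python's configparser on a scratch file standing in for /etc/nova/nova.conf (nova.conf is INI-like, which configparser handles for simple options; back up the real file and test on a copy first):

```python
import configparser
import os
import tempfile

# KVM values from step d.
settings = {
    "s1020v": {"s1020v": "False", "member_status": "True"},
    "neutron": {"ovs_bridge": "vds1-br"},
}

# Scratch stand-in for /etc/nova/nova.conf.
path = os.path.join(tempfile.mkdtemp(), "nova.conf")
parser = configparser.ConfigParser()
parser.read(path)  # preserves existing options if the file exists
for section, options in settings.items():
    if not parser.has_section(section):
        parser.add_section(section)
    for key, value in options.items():
        parser.set(section, key, value)
with open(path, "w") as f:
    parser.write(f)
```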

4.     (Optional.) If the networking type of the compute node is network-based overlay, perform the following steps:

If the hypervisor type of the compute node is KVM, you do not need to install the Nova patch.

If the hypervisor type of the compute node is VMware vCenter, perform the following steps:

a.     Stop the neutron-openvswitch-agent service and disable the system from starting the service at startup.

sdn@ubuntu:~$ sudo service neutron-openvswitch-agent stop

sdn@ubuntu:~$ sudo systemctl disable neutron-openvswitch-agent.service

b.     Select Provision > Network Design > Domain from the top navigation bar of the controller webpage to identify whether the compute node is online. If the compute node is online, delete the compute node.

c.     Execute the neutron agent-list command on the controller node to identify whether the agent of the compute node exists in the database.

-     If the agent of the compute node does not exist in the database, go to the next step.

-     If the agent of the compute node exists in the database, execute the neutron agent-delete id command to delete the agent. The id argument represents the agent ID.

sdn@ubuntu:~$ sudo neutron agent-list

| id                                   | agent_type         | host        |

| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3

 

sdn@ubuntu:~$ sudo neutron agent-delete 25c3d3ac-5158-4123-b505-ed619b741a52

Deleted agent: 25c3d3ac-5158-4123-b505-ed619b741a52

d.     Use the vi editor to open the nova.conf configuration file.

sdn@ubuntu:~$ sudo vi /etc/nova/nova.conf

e.     Press I to switch to insert mode, and set the parameters in the nova.conf configuration file as follows. For descriptions of the parameters, see Table 3.

[DEFAULT]

compute_driver = vmwareapi.VMwareVCDriver

[vmware]

host_ip = 127.0.0.1

host_username = sdn

host_password = skyline123

cluster_name = vcenter

insecure = True

[s1020v]

s1020v = False

vds = VDS2

uplink_teaming_policy = loadbalance_srcid

f.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the nova.conf file.

Table 3 Parameters in the configuration file

Parameter

Description

s1020v

Whether to use the H3C S1020V vSwitch to forward the traffic between vSwitches and the traffic between the vSwitches and the external network:

·     True—Use the H3C S1020V vSwitch.

·     False—Do not use the H3C S1020V vSwitch.

This parameter is obsolete.

member_status

Whether to enable or disable the feature of modifying the status of members on OpenStack load balancers.

·     True—Enable.

·     False—Disable.

vds

VDS to which the host in the vCenter belongs. In this example, the host belongs to VDS2. In the host overlay networking, you can only specify the VDS that the controller synchronizes to the vCenter. In the network overlay networking, you can specify an existing VDS on demand.

ovs_bridge

Name of the bridge for the H3C S1020V vSwitch. Make sure the bridges created on all H3C S1020V vSwitches use the same name.

compute_driver

Name of the driver used by the compute node for virtualization.

host_ip

IP address used to log in to the vCenter, for example, 127.0.0.1.

host_username

Username for logging in to the vCenter, for example, sdn.

host_password

Password for logging in to the vCenter, for example, skyline123. To use character "$" in the password, enter a backslash (\) before the character.

cluster_name

Name of the cluster in the vCenter environment, for example, vcenter.

insecure

Whether to enable or disable security check.

·     True—Do not perform security check.

·     False—Perform security check. This value is not supported in the current software version.

uplink_teaming_policy

Uplink routing policy.

·     loadbalance_srcid—Source vPort-based routing.

·     loadbalance_ip—IP hash-based routing.

·     loadbalance_srcmac—Source MAC hash-based routing.

·     loadbalance_loadbased—Physical NIC load-based routing.

·     failover_explicit—Explicit failover order-based routing.

 

5.     Restart the openstack-nova-compute service.

sdn@ubuntu:~$ sudo service nova-compute restart

Verifying the installation

# Verify that the VCF OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

sdn@ubuntu:~$ sudo pip freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10

# Verify that the openstack-nova-compute service is enabled. The service is enabled if its state is running.

sdn@ubuntu:~$ sudo service nova-compute status

nova-compute start/running, process 184
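The two checks above can be combined into one small script. The following is a sketch only: the package string and service output passed at the bottom are the sample values quoted in this guide, and on a live node you would pass the real output of pip freeze and the service status command instead.

```shell
#!/bin/sh
# Sketch: scriptable version of the two verification checks. The sample
# strings at the bottom are the outputs quoted in this guide; on a live
# node, pass "$(sudo pip freeze)" and
# "$(sudo service nova-compute status)" instead.
check_install() {
    freeze_out=$1
    status_out=$2
    printf '%s\n' "$freeze_out" | grep -q '^SeerEngine-DC-PLUGIN===' \
        || { echo "plugin missing"; return 1; }
    printf '%s\n' "$status_out" | grep -q 'running' \
        || { echo "service not running"; return 1; }
    echo "install OK"
}
check_install "SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10" \
              "nova-compute start/running, process 184"
```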

Removing the Nova patch

You must remove the Nova patch before removing the VCF OpenStack package.

To remove the Nova patch:

1.     Remove the Nova patch.

sdn@ubuntu:~$ sudo h3c-vcfplugin compute uninstall

Uninstall the nova patch

2.     Remove the VCF OpenStack package.

sdn@ubuntu:~$ sudo pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-vcfplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

Upgrading the Nova patch

CAUTION:

Services might be interrupted during the Nova patch upgrade procedure.

 

To upgrade the Nova patch, you must remove the current version first, and then install the new version. For information about installing the Nova patch, see "Installing the Nova patch." For information about removing the Nova patch, see "Removing the Nova patch."

Installing the lldpad service

In the KVM network-based overlay scenario, you must install the lldpad service on each compute node.

1.     Install and start the lldpad service on the compute node.

sdn@ubuntu:~$ sudo apt-get install lldpad

sdn@ubuntu:~$ sudo systemctl enable lldpad

sdn@ubuntu:~$ sudo service lldpad start

2.     Enable the uplink interface to send LLDP messages. In this example, eno2 is the uplink interface.

sdn@ubuntu:~$ sudo lldptool set-lldp -i eno2 adminStatus=rxtx

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V sysName enableTx=yes

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V portDesc enableTx=yes

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V sysDesc enableTx=yes

sdn@ubuntu:~$ sudo lldptool -T -i eno2 -V sysCap enableTx=yes
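The lldptool commands above can also be generated in a loop, which helps when several uplinks need the same settings. The following is a sketch: it only echoes the commands for review (remove echo to execute them as root), and eno2 is the example uplink from this guide.

```shell
#!/bin/sh
# Sketch: print the lldptool commands for one uplink interface. The TLV
# names match the ones enabled above; "echo" is kept so the commands can
# be reviewed before running them as root.
print_lldp_cmds() {
    iface=$1
    echo "sudo lldptool set-lldp -i $iface adminStatus=rxtx"
    for tlv in sysName portDesc sysDesc sysCap; do
        echo "sudo lldptool -T -i $iface -V $tlv enableTx=yes"
    done
}
print_lldp_cmds eno2
```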

Installing the openvswitch-agent patch

The Rocky and Train plug-ins do not require installation of the openvswitch-agent patch.

Prerequisites

The openvswitch-agent patch is included in the VCF OpenStack package. Perform the following steps to download the VCF OpenStack package from the H3C website:

1.     In the Web browser address bar, enter http://www.h3c.com/cn/Software_Download. Select SDN > H3C Virtual Converged Framework Controller, and download the VCF OpenStack package of the required version.

2.     Copy the VCF OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the VCF OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

To install the openvswitch-agent patch:

1.     Change the working directory to where the VCF OpenStack package (an .egg file) is saved, and install the package on the OpenStack compute node. The name of the VCF OpenStack package is SeerEngine_DC_PLUGIN-version1_version2-py2.7.egg. version1 represents the version of the package. version2 represents the version of OpenStack.

sdn@ubuntu:~$ cd /home/compute

sdn@ubuntu:~$ sudo easy_install SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

2.     Install the openvswitch-agent patch.

sdn@ubuntu:~$ sudo h3c-vcfplugin openvswitch install

3.     Restart the openvswitch-agent service.

sdn@ubuntu:~$ sudo service neutron-plugin-openvswitch-agent restart

Verifying the installation

# Verify that the VCF OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

sdn@ubuntu:~$ sudo pip freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10

# Verify that the openvswitch-agent service is enabled. The service is enabled if its state is running.

sdn@ubuntu:~$ sudo service neutron-plugin-openvswitch-agent status

neutron-plugin-openvswitch-agent start/running, process 184

Removing the openvswitch-agent patch

You must remove the openvswitch-agent patch before removing the VCF OpenStack package.

To remove the openvswitch-agent patch:

1.     Remove the openvswitch-agent patch.

sdn@ubuntu:~$ sudo h3c-vcfplugin openvswitch uninstall

2.     Remove the VCF OpenStack package.

sdn@ubuntu:~$ sudo pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

/usr/bin/h3c-agent

/usr/bin/h3c-vcfplugin

/usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

Upgrading the openvswitch-agent patch

CAUTION:

Services might be interrupted during the openvswitch-agent patch upgrade procedure.

 

To upgrade the openvswitch-agent patch, you must remove the current version first, and install a new version. For information about installing the openvswitch-agent patch, see "Installing the openvswitch-agent patch." For information about removing the openvswitch-agent patch, see "Removing the openvswitch-agent patch."

Installing/removing/upgrading DHCP fail-safe components

To provide DHCP fail-safe in the network-based overlay scenario, you must install DHCP fail-safe components. Only the Pike plug-ins support DHCP fail-safe.

 

IMPORTANT:

The DHCP fail-safe components can operate only on the CentOS 7 operating system with a kernel version matching that of the S1020V vSwitch. If the kernel version does not match that of the S1020V, install the kernel patch first.

 

Installing basic components

1.     Install WebSocket Client on the controller and network node.

 

IMPORTANT:

Make sure the WebSocket Client version is 0.56 or later.

 

sdn@ubuntu:~$ sudo apt-get install python-websocket-client

2.     Install an S1020V vSwitch on the network node and configure bridge and controller settings. For the installation and configuration procedures, see H3C S1020V Installation Guide.

sdn@ubuntu:~$ sudo dpkg --force-all -i s1020v_ubuntu14.04-2.2.1.20_amd64.deb

3.     Stop the open-source DHCP and Metadata services on OpenStack.

Skip this step if open-source DHCP and Metadata services do not exist.

sdn@ubuntu:~$ sudo systemctl stop neutron-dhcp-agent neutron-metadata-agent

sdn@ubuntu:~$ sudo systemctl disable neutron-dhcp-agent neutron-metadata-agent

Obtaining the installation package of the DHCP fail-safe components

Two VCF OpenStack packages are available: one contains the DHCP fail-safe components and the other does not. The package that contains the DHCP fail-safe components is named in the SeerEngine_DC_PLUGIN-DHCP_version1_version2.egg format. version1 represents the software package version number. version2 represents the OpenStack version number.

Obtain the required version of the VCF OpenStack package and then save the package to the target installation directory on the server or virtual machine. You can also transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.

Installing DHCP fail-safe components on the network node

Installing the DHCP component

1.     Access the directory where the VCF OpenStack package (an .egg file) is saved and then install the package.

In the following example, the VCF OpenStack package is in the /root directory.

sdn@ubuntu:~$ sudo easy_install SeerEngine_DC_PLUGIN-DHCP_E3607_pike_2017.10-py2.7.egg

2.     Install the DHCP component.

sdn@ubuntu:~$ sudo h3c-vcfplugin dhcp install

3.     Edit the DHCP component configuration file.

a.     Use the vi editor to open the h3c_dhcp_agent.ini file on the network node.

sdn@ubuntu:~$ sudo vi /etc/neutron/h3c_dhcp_agent.ini

b.     Press I to switch to insert mode and edit the configuration file as follows:

[DEFAULT]

interface_driver = openvswitch

dhcp_driver = networking_h3c.agent.dhcp.driver.dhcp.Dnsmasq

enable_isolated_metadata = true

force_metadata = true

ovs_integration_bridge = vds1-br

[agent]

[h3c]

transport_url = ws://127.0.0.1:8080

websocket_fragment_size = 102400

[ovs]

ovsdb_interface = vsctl

c.     To enable certificate authentication, add the following configurations:

[h3c]

ca_file = /etc/neutron/ca.crt

cert_file = /etc/neutron/sna.pem

key_file = /etc/neutron/sna.key

key_password = 123456

insecure = true

d.     To use the northbound API of the controller for connection, add the following configurations:

[VCFCONTROLLER]

url = https://127.0.0.1:8443

username = sdn

password = skyline

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

e.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the h3c_dhcp_agent.ini file.

4.     Start the DHCP component.

sdn@ubuntu:~$ sudo systemctl enable h3c-dhcp-agent.service

sdn@ubuntu:~$ sudo systemctl start h3c-dhcp-agent.service

Installing the Metadata component

1.     Install the Metadata component.

sdn@ubuntu:~$ sudo h3c-vcfplugin metadata install

2.     Edit the Metadata component configuration file.

a.     Use the vi editor to open the h3c_metadata_agent.ini configuration file on the network node.

sdn@ubuntu:~$ sudo vi /etc/neutron/h3c_metadata_agent.ini

b.     Press I to switch to insert mode and edit the configuration file as follows:

[DEFAULT]

nova_metadata_host = controller

nova_metadata_port = 8775

nova_proxy_shared_secret = METADATA_SECRET

enable_keystone_authtoken = True

[agent]

[cache]

[keystone_authtoken]    # The configuration requirements are the same as the section of the same name in the neutron.conf configuration file.

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASSWORD

[VCFCONTROLLER]    # The configuration requirements are the same as the section of the same name in the ml2_conf_h3c.ini configuration file.

url = https://127.0.0.1:8443

username = sdn

password = skyline

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the h3c_metadata_agent.ini file.

3.     Start the Metadata component.

sdn@ubuntu:~$ sudo systemctl enable h3c-metadata-agent.service

sdn@ubuntu:~$ sudo systemctl start h3c-metadata-agent.service

Removing DHCP fail-safe components

Remove the VCF OpenStack package after removing the DHCP and Metadata components.

To remove the DHCP fail-safe components:

1.     Remove the DHCP component.

sdn@ubuntu:~$ sudo h3c-vcfplugin dhcp uninstall

2.     Remove the Metadata component.

sdn@ubuntu:~$ sudo h3c-vcfplugin metadata uninstall

3.     Remove the VCF OpenStack package.

sdn@ubuntu:~$ sudo pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-vcfplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

Upgrading DHCP fail-safe components

To upgrade DHCP fail-safe components, first remove the old version and then install the new version.

 

CAUTION:

Services might be interrupted during the upgrade. Before performing an upgrade, make sure you fully understand its impact on services.

 

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

DHCP component configuration file

Parameter

Description

interface_driver

Driver that manages vPorts.

Only value openvswitch is supported.

dhcp_driver

Driver that manages DHCP Server.

Only value networking_h3c.agent.dhcp.driver.dhcp.Dnsmasq is supported.

ovs_integration_bridge

vSwitch bridge where the DHCP port resides.

transport_url

RPC interface URL of the controller. Only a WebSocket (WS) interface is supported. The default value is ws://127.0.0.1:1080. The value depends on the connected controller configuration. For example, if the component connects to SNA Center over a WS connection, the value is ws://SNA cluster IP:10080. If the component connects to SNA Center over a WebSocket Secure (WSS) connection, the value is wss://SNA Install IP:10443. If the component connects to U-Center over a WS connection, the value is ws://U-Center cluster IP:30000.

websocket_fragment_size

Size of a WebSocket message fragment sent to the controller, in bytes.

The value is an integer greater than or equal to 1024. The default value is 102400. When the value is 1024, WebSocket messages are not fragmented.

insecure

Whether to enable WebSocket certificate authentication.

The default value is False.
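As a sketch of how the websocket_fragment_size parameter described above behaves, the number of fragments for a message is the ceiling of the message size divided by the fragment size. The message size below is illustrative.

```shell
#!/bin/sh
# Sketch: fragment count for a message of msg_bytes with a given
# websocket_fragment_size, using integer ceiling division.
fragments() {
    msg_bytes=$1
    fragment_size=$2
    echo $(( (msg_bytes + fragment_size - 1) / fragment_size ))
}
fragments 300000 102400   # a 300000-byte message with the default 102400-byte fragment size
```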

 

Metadata component configuration file

Parameter

Description

nova_metadata_host

IP address or DNS name for the Nova metadata service.

nova_metadata_port

TCP port number for the Nova metadata service.

nova_proxy_shared_secret

When proxying metadata requests, Neutron uses the shared secret key to sign the Instance-ID header to prevent spoofing. This parameter must be consistent with the metadata_proxy_shared_secret parameter in the nova.conf file of the control node.

enable_keystone_authtoken

Whether to enable Keystone token authentication. When the value is True, you must configure the [keystone_authtoken] section. When the value is False, you must configure the [VCFCONTROLLER] section.
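The nova_proxy_shared_secret mechanism described above can be sketched as follows. This assumes the standard Neutron metadata-proxy behavior of attaching an HMAC-SHA256 of the instance ID in the X-Instance-ID-Signature header; the secret and instance ID below are illustrative values, not ones from this guide.

```shell
#!/bin/sh
# Sketch: the signature the metadata proxy attaches to a request. Nova
# recomputes the same HMAC with metadata_proxy_shared_secret and rejects
# the request on a mismatch. SECRET and INSTANCE_ID are example values.
SECRET=METADATA_SECRET
INSTANCE_ID=9f3c1c0e-1111-4222-8333-444455556666
sig=$(printf '%s' "$INSTANCE_ID" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "X-Instance-ID-Signature: $sig"
```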

 

Configuring the open-source metadata service for network nodes

OpenStack supports obtaining metadata from network nodes for VMs through DHCP or an L3 gateway. H3C supports only the DHCP method. To configure the metadata service for network nodes:

1.     Download the OpenStack installation guide from the OpenStack official website and follow the installation guide to configure the metadata service for the network nodes.

2.     Configure the network nodes to provide metadata service through DHCP.

a.     Use the vi editor to open configuration file dhcp_agent.ini.

sdn@ubuntu:~$ sudo vi /etc/neutron/dhcp_agent.ini

b.     Press I to switch to insert mode, and modify configuration file dhcp_agent.ini as follows:

force_metadata = True

Set the value to True for the force_metadata parameter to force the network nodes to provide metadata service through DHCP.

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the dhcp_agent.ini configuration file.

Comparing and synchronizing resource information between the controller and cloud platform

Only Rocky, Train, Queens, Pike, Newton, and Mitaka plug-ins support this task.

To compare and synchronize resource information between the controller and cloud platform:

1.     Execute the h3c-vcfplugin-extension compare --file [absolute path]filename.csv command to compare resource information between the controller and cloud platform.

¡     If you do not specify --file [absolute path]filename.csv, the comparison result is saved to the /var/log/neutron/compare_data-time.csv file, where time indicates the comparison start time.

¡     If you specify --file [absolute path]filename.csv, the comparison result is saved to the specified file. If you do not specify an absolute path, the result is saved to /var/log/neutron/filename.csv.

The comparison result file contains the following fields:

¡     Resource—Resource type.

¡     Name—Resource name.

¡     Id—Resource ID.

¡     Tenant_id—Tenant ID of the resource.

¡     Tenant_name—Tenant name of the resource.

¡     Status—Comparison result.

-     lost—The controller has fewer resources than the cloud platform. You must add the missing resources to the controller.

-     different—Resources on the controller differ from those on the cloud platform. You must update the resources on the controller.

-     surplus—The controller has more resources than the cloud platform. You must remove the excess resources from the controller.

2.     Execute the h3c-vcfplugin-extension sync --file filename.csv command, where filename.csv is the comparison result file. If the comparison result file is in the /var/log/neutron/ path, enter the file name directly. If the comparison result file is in another path, enter the absolute file path.

After the command is executed, the system displays resource statistics and prompts for your confirmation to start the synchronization. The system starts the synchronization only after you confirm twice.

After the synchronization is complete, a synchronization result file /var/log/neutron/sync_all-time.csv is generated, where time indicates the synchronization start time.
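Before running the sync command, it can help to summarize the comparison result file by status. The following sketch assumes the field order listed above, with Status in the sixth column; the data rows are made-up samples, not real comparison output.

```shell
#!/bin/sh
# Sketch: count comparison results per status. The header and column order
# follow the field list in this section; the data rows are illustrative.
cat > /tmp/compare_sample.csv <<'EOF'
Resource,Name,Id,Tenant_id,Tenant_name,Status
network,net1,id-1,t100,tenant1,lost
port,port1,id-2,t100,tenant1,different
router,r1,id-3,t200,tenant2,surplus
network,net2,id-4,t200,tenant2,lost
EOF
awk -F, 'NR > 1 { count[$6]++ } END { for (s in count) print s, count[s] }' /tmp/compare_sample.csv
```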

 

CAUTION:

·     Do not add or edit information in the synchronization result file.

·     To avoid anomalies caused by misoperation, carefully examine and compare the result file and resource statistics.

 

 


FAQ

The Python tools cannot be installed using the apt-get command when a proxy server is used for Internet access. What should I do?

Configure HTTP proxy by performing the following steps:

1.     Make sure the server or the virtual machine can correctly access the HTTP proxy server.

2.     At the CLI of the Ubuntu, use the vi editor to open the apt.conf configuration file. If the apt.conf configuration file does not exist, this step creates the file.

sdn@ubuntu:~$ sudo vi /etc/apt/apt.conf

3.     Press I to switch to insert mode, and provide HTTP proxy information as follows:

¡     If the server does not require authentication, enter HTTP proxy information in the following format:
Acquire::http::proxy "http://yourproxyaddress:proxyport";

¡     If the server requires authentication, enter HTTP proxy information in the following format:
Acquire::http::proxy "http://username:password@yourproxyaddress:proxyport";

Table 4 describes the arguments in HTTP proxy information.

Table 4 Arguments in HTTP proxy information

Field

Description

username

Username for logging in to the proxy server, for example, sdn.

password

Password for logging in to the proxy server, for example, 123456.

yourproxyaddress

IP address of the proxy server, for example, 172.25.1.1.

proxyport

Port number of the proxy server, for example, 8080.

 

The following is an example of the apt.conf file content:

Acquire::http::proxy "http://sdn:123456@172.25.1.1:8080";

4.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the apt.conf file.
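The same proxy setting can be written non-interactively, which is convenient for scripted deployments. This is a sketch: on a real system the target file is /etc/apt/apt.conf (written with sudo), while a temporary path and the example credentials from Table 4 are used here.

```shell
#!/bin/sh
# Sketch: write the proxy line without vi. APT_CONF defaults to a temporary
# path for illustration; point it at /etc/apt/apt.conf on a real system.
APT_CONF=${APT_CONF:-/tmp/apt.conf.example}
cat > "$APT_CONF" <<'EOF'
Acquire::http::proxy "http://sdn:123456@172.25.1.1:8080";
EOF
grep -c 'Acquire::http::proxy' "$APT_CONF"   # one proxy line written
```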

After the tap-service and tap-flow data is updated on OpenStack, the image destination template settings of the controller fail to be synchronized automatically. What should I do?

This is an open source issue. You can only modify the image destination template on the controller manually for synchronization.

The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

Use the following procedure to resolve the issue. An enp61s0f3 Ethernet network adapter is used as an example.

1.     View detailed information about the Ethernet network adapter and record the value for the bus-info field.

sdn@ubuntu:~$ ethtool -i enp61s0f3

driver: i40e

version: 2.8.20-k

firmware-version: 3.33 0x80000f0c 1.1767.0

expansion-rom-version:

bus-info: 0000:3d:00.3

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

2.     Use one of the following solutions.

¡     Solution 1. If this solution fails, use solution 2.

# Execute the following command:

sdn@ubuntu:~$ sudo ethtool --set-priv-flags enp61s0f3  disable-fw-lldp on

# Identify whether the value for the disable-fw-lldp field is on.

sdn@ubuntu:~$ ethtool --show-priv-flags enp61s0f3  | grep lldp

disable-fw-lldp       : on

If the value is on, the network adapter can then receive LLDP messages. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.

# Open the self-defined startup program file.

sdn@ubuntu:~$ sudo vi /etc/rc.local

# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

ethtool --set-priv-flags enp61s0f3  disable-fw-lldp on

Make sure this command line is configured before the exit 0 line.

¡     Solution 2.

# Execute the echo "lldp stop" > /sys/kernel/debug/i40e/bus-info/command command. Enter the recorded bus info value for the network adapter, and add a backslash (\) before each ":".

sdn@ubuntu:~$ sudo -i

sdn@ubuntu:~$ echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

The network adapter can receive LLDP messages after this command is executed. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.

# Open the self-defined startup program file.

sdn@ubuntu:~$ sudo vi /etc/rc.local

# Press I to switch to insert mode, and add this command to the file. Then Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

Make sure this command line is configured before the exit 0 line.
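Putting the recorded bus-info value in a variable avoids escaping each ":" by hand when building the debugfs path. This sketch only prints the resulting command; on a real system it must run as root with the i40e debugfs available, and 0000:3d:00.3 is the example value from step 1.

```shell
#!/bin/sh
# Sketch: derive the debugfs command file from the bus-info value recorded
# in step 1 (0000:3d:00.3 is the example value from this section).
BUS_INFO=0000:3d:00.3
CMD_FILE=/sys/kernel/debug/i40e/$BUS_INFO/command
echo "echo \"lldp stop\" > $CMD_FILE"
```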

 
