H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for Kolla-E36xx-5W609


Overview

This document describes how to install SeerEngine-DC Neutron plug-ins on OpenStack deployed by using Kolla-Ansible.

Neutron is an OpenStack service used to manage all virtual networking infrastructures (VNIs) in an OpenStack environment. It provides virtual network services to the devices managed by OpenStack compute services and allows tenants to create advanced virtual services, such as firewalls (FW), load balancers (LB), and virtual private networks (VPN).

SeerEngine-DC Neutron plug-ins are developed for the SeerEngine-DC controller based on the OpenStack framework. The following SeerEngine-DC Neutron plug-ins are available:

·     SeerEngine-DC Neutron Core plug-in—Includes networks, subnets, routers, and ports, and provides tenants with core basic network communication capabilities.

·     SeerEngine-DC Neutron L3_Routing plug-in—Allows tenants to forward traffic to each other at Layer 3.

·     SeerEngine-DC Neutron FWaaS plug-in—Allows tenants to create firewall services.

·     SeerEngine-DC Neutron LBaaS plug-in—Allows tenants to create LB services.

·     SeerEngine-DC Neutron VPNaaS plug-in—Allows tenants to create VPN services.

The SeerEngine-DC Neutron plug-ins allow deployment of the network configuration obtained from OpenStack through REST APIs on the SeerEngine-DC controller, including tenants' networks, subnets, routers, ports, FW, LB, and VPN settings.

 

CAUTION:

After the plug-ins connect to the OpenStack cloud platform, do not modify settings issued by the cloud platform on the controller, such as the virtual link layer network, vRouter, and vSubnet settings. Otherwise, services might be interrupted.

 

 


Preparing for installation

Hardware requirements

Table 1 shows the hardware requirements for installing the SeerEngine-DC Neutron plug-ins on a server or virtual machine.

Table 1 Hardware requirements

CPU                             Memory size       Disk space
Single-core or multi-core CPU   2 GB or more      5 GB or more

 

Software requirements

Table 2 shows the software requirements for installing the SeerEngine-DC Neutron plug-ins.

Table 2 Software requirements

Item                                         Supported versions
OpenStack deployed by using Kolla-Ansible    OpenStack Ocata, Pike, or Rocky

 

IMPORTANT:

Before you install the OpenStack plug-ins, make sure the following requirements are met:

·     Your system has a reliable Internet connection.

·     OpenStack has been deployed correctly. Verify that the /etc/hosts file on each node contains the host name-to-IP address mappings, and that the required OpenStack Neutron extension services (Neutron-FWaaS, Neutron-VPNaaS, or Neutron-LBaaS) have been deployed. For the deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.
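The host name-to-IP mappings can be spot-checked with a short script before installation. This is an optional sketch, not part of the official procedure; the host names used in the usage example are placeholders for the nodes in your environment.

```shell
#!/bin/sh
# check_hosts_file FILE HOST...
# Succeeds only if every HOST has a mapping line in FILE (a copy of /etc/hosts).
check_hosts_file() {
  file=$1; shift
  rc=0
  for host in "$@"; do
    grep -qw "$host" "$file" || { echo "missing mapping: $host" >&2; rc=1; }
  done
  return $rc
}
```

Run, for example, check_hosts_file /etc/hosts controller01 compute01 on each node; a nonzero exit status means at least one mapping is missing.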

 


Deploying OpenStack by using Kolla-Ansible

Before installing the plug-ins, first deploy OpenStack by using Kolla-Ansible. For the OpenStack deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.


Preprovisioning basic SeerEngine-DC settings

This procedure preprovisions only basic SeerEngine-DC settings. For the configuration in a specific scenario, see the SeerEngine-DC configuration guide for that scenario.

Table 3 Preprovisioning basic SeerEngine-DC settings

Item                                                  Configuration directory
Fabrics                                               Provision > Network Design > Fabrics
VDS                                                   Tenants > Common Network Settings > Virtual Distributed Switches
IP address pool                                       Provision > Inventory > IP Address Pools
Add access devices and border devices to a fabric     Provision > Network Design > Fabrics
L4-L7 device, physical resource pool, and template    Provision > Inventory > Devices > L4-L7 Device
                                                      Provision > Inventory > Devices > L4-L7 Physical Resource Pools
Border gateway                                        Tenants > Common Network Settings > Gateway

 

 


Installing OpenStack plug-ins

The SeerEngine-DC Neutron plug-ins can be installed on different OpenStack versions. The installation package varies by OpenStack version. However, you can use the same procedure to install the Neutron plug-ins on different OpenStack versions. This document uses OpenStack Ocata as an example.

The SeerEngine-DC Neutron plug-ins are installed on the OpenStack control node.

Setting up the basic environment

Before installing SeerEngine-DC Neutron plug-ins on the OpenStack control node, set up the basic environment on the node.

To set up the basic environment:

1.     Update the software source list, and then download and install the Python tools.

The following uses commands on a CentOS operating system as an example.

[root@controller01 ~]# yum clean all

[root@controller01 ~]# yum makecache

[root@controller01 ~]# yum install -y python-pip python-setuptools

For Train plug-ins (CentOS operating system):

[root@controller01 ~]# yum install -y python-pip3 python-setuptools

2.     Install runlike.

[root@controller01 ~]# pip install runlike

For Train plug-ins (CentOS operating system):

[root@controller01 ~]# pip3 install runlike

3.     Access the neutron_server container and edit the /etc/hosts file. Add the following information to the file.

¡     IP and name mappings of all hosts in this OpenStack environment. To obtain this information, access the SeerEngine-DC controller and select Provision > Domains > Hosts.

¡     IP and name mappings of all leaf, spine, and border devices in this scenario. To obtain this information, access the SeerEngine-DC controller and select Provision > Inventory > Devices.
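The edit above can also be scripted from outside the container. The sketch below builds a hosts fragment from "IP name" pairs and shows (commented out) how it might be copied into and appended inside the neutron_server container; all addresses and names are placeholders for the mappings collected from the controller pages listed above.

```shell
#!/bin/sh
# make_hosts_fragment: read "IP name" pairs on stdin, emit /etc/hosts lines.
make_hosts_fragment() {
  while read -r ip name; do
    [ -n "$ip" ] && [ -n "$name" ] && printf '%s\t%s\n' "$ip" "$name"
  done
}

# Placeholder mappings; replace with the hosts and fabric devices in your setup.
make_hosts_fragment <<'EOF' > hosts.fragment
192.168.10.11 controller01
192.168.10.21 leaf1
192.168.10.31 spine1
EOF

# Then append the fragment inside the container, for example:
# docker cp hosts.fragment neutron_server:/tmp/hosts.fragment
# docker exec neutron_server sh -c 'cat /tmp/hosts.fragment >> /etc/hosts'
```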

Installing the SeerEngine-DC Neutron plug-ins

Obtaining the SeerEngine-DC Neutron plug-in installation package

The SeerEngine-DC Neutron plug-ins are included in the SeerEngine-DC OpenStack package. Obtain the SeerEngine-DC OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.

Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.
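If a checksum is published with the package, the transfer can be verified before installation. A minimal sketch, assuming you have the expected MD5 value from the release materials (the file name and checksum placeholder below are illustrative):

```shell
#!/bin/sh
# verify_package FILE EXPECTED_MD5
# Succeeds only if FILE's MD5 digest matches EXPECTED_MD5.
verify_package() {
  actual=$(md5sum "$1" | awk '{print $1}')
  [ "$actual" = "$2" ]
}

# Example (placeholder names):
# verify_package SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg <md5-from-release-notes>
```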

Installing the SeerEngine-DC Neutron plug-ins on the OpenStack control node

1.     Create the startup scripts for the neutron-server and h3c-agent containers.

[root@controller01 ~]# runlike neutron_server > docker-neutron-server.sh

[root@controller01 ~]# cp docker-neutron-server.sh docker-h3c-agent.sh

[root@controller01 ~]# sed -i 's/neutron-server/h3c-agent/g' docker-h3c-agent.sh

[root@controller01 ~]# sed -i 's/neutron_server/h3c_agent/g' docker-h3c-agent.sh

2.     Modify the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@controller01 ~]# vi /etc/kolla/neutron-server/neutron.conf

b.     Configure the neutron.conf configuration file based on the operating system you use:

-     If a CentOS operating system is used, see H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for CentOS.

-     If a Ubuntu operating system is used, see H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for Ubuntu.

 

IMPORTANT:

·     In the neutron_server configuration directory (/etc/kolla/neutron-server/), you can configure the service_provider parameter for a service only once. If you have configured the service_provider parameter for the firewall service in the neutron.conf configuration file, do not configure it again in the fwaas_driver.ini file. This rule also applies to the LBaaS and VPNaaS services.

·     For h3c_agent to load the driver correctly, change the FWaaS driver value in the /etc/kolla/neutron-server/fwaas_driver.ini file to networking_h3c.fw.h3c_fwplugin_driver.H3CfwaasDriver.

 

3.     Modify the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@controller01 ~]# vi /etc/kolla/neutron-server/ml2_conf.ini

b.     Press i to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. For information about the parameters, see "ml2_conf.ini."

[ml2]

type_drivers = vxlan,vlan

tenant_network_types = vxlan,vlan

mechanism_drivers = ml2_h3c

extension_drivers = ml2_extension_h3c,qos

[ml2_type_vlan]

network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]

vni_ranges = 1:500

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.

4.     Modify the neutron.conf configuration file and add plug-ins configuration items.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@controller01 ~]# vi /etc/kolla/neutron-server/neutron.conf

b.     Add ml2_conf_h3c.ini to the neutron.conf configuration file based on the operating system you use:

-     If a CentOS operating system is used, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS.

-     If a Ubuntu operating system is used, see H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu.

5.     Copy the plug-ins installation package to the neutron_server container.

[root@controller01 ~]# docker cp SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg neutron_server:/

6.     Access the neutron_server container and install the plug-ins installation package.

[root@controller01 ~]# docker exec -it -u root neutron_server bash

[root@controller01 ~]# easy_install SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg

[root@controller01 ~]# h3c-vcfplugin controller install

 

 

NOTE:

·     An error might be reported when the h3c-vcfplugin controller install command is executed. You can safely ignore it.

·     Make sure no neutron.conf file exists in the /root directory when you execute the h3c-vcfplugin controller install command. If such a file exists, delete it or move it to another directory.

 

7.     Create neutron-server and h3c-agent container images.

For the Rocky plug-ins, the h3c-agent container image is not required if you plan to enable the firewall agent service.

[root@controller01 ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)

[root@controller01 ~]# h3c_agent_image=$(echo $neutron_server_image | sed 's/neutron-server/h3c-agent/')

[root@controller01 ~]# docker ps | grep neutron_server

16d60524b8b3        kolla/centos-source-neutron-server:rocky              "dumb-init --single-?   16 months ago       Up 2 weeks                              neutron_server

[root@controller01 ~]# docker commit container_id kolla/neutron-server-h3c

Replace container_id with the container ID obtained from the previous command, 16d60524b8b3 in this example.

[root@controller01 ~]# docker rm -f neutron_server

[root@controller01 ~]# docker tag $neutron_server_image kolla/neutron-server-origin

[root@controller01 ~]# docker rmi $neutron_server_image

[root@controller01 ~]# docker tag kolla/neutron-server-h3c $neutron_server_image

[root@controller01 ~]# docker tag kolla/neutron-server-h3c $h3c_agent_image

[root@controller01 ~]# docker rmi kolla/neutron-server-h3c

8.     Copy the neutron-server configuration to the h3c-agent directory and modify the configuration.

[root@controller01 ~]# cp -pR /etc/kolla/neutron-server /etc/kolla/h3c-agent

[root@controller01 ~]# sed -i 's/neutron-server/h3c-agent/g' /etc/kolla/h3c-agent/config.json

9.     Start the neutron-server and h3c-agent containers.

[root@controller01 ~]# source docker-neutron-server.sh

[root@controller01 ~]# source docker-h3c-agent.sh

To avoid repeated deployment of firewall configuration after installation or upgrade of the Neutron plug-ins on multiple nodes, make sure the h3c-agent service is available on only one of the nodes. To stop the h3c-agent service on the other nodes, perform the following steps on each of those nodes:

a.     Execute the h3c_agent=$(docker ps --format {{.ID}} --filter name=h3c_agent) command to obtain the ID of the h3c-agent container.

b.     Execute the docker stop $h3c_agent command to stop the h3c-agent container.

10.     View the startup status of the containers. If their status is Up, they have started correctly.

[root@controller01 ~]# docker ps --filter "name=neutron_server"

CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES

289e4e132a9b        kolla/centos-source-neutron-server:ocata   "dumb-init --single-?   1 minutes ago        Up 1 minutes                              neutron_server

[root@controller01 ~]# docker ps --filter "name=h3c_agent"

CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES

c334f7ec9857        kolla/centos-source-h3c-agent:ocata   "dumb-init --single-?   1 minutes ago        Up 1 minutes                              h3c_agent

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

neutron.conf

Parameter

Required value

Description

core_plugin

ml2

Used for loading the core plug-in ml2 to OpenStack.

service_plugins

h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas

Used for loading the extension plug-ins to OpenStack.

For the Kilo, Mitaka, Pike, and Queens plug-ins, if deployment of firewall policies and rules takes a long time, you can change firewall in the value to fwaas_h3c.

service_provider

·     FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

·     LOADBALANCER:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default

·     VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

Driver paths for the extension plug-ins.

notification_drivers

message_queue,qos_h3c

Name of the QoS notification driver.

admin_user

N/A

Admin username for Keystone authentication in OpenStack, for example, neutron.

admin_password

N/A

Admin password for Keystone authentication in OpenStack, for example, 123456.
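Putting the required values together, a neutron.conf fragment might look as follows. This is a sketch assembled from the table above, not an official template: the section placement follows common Neutron conventions and may differ by OpenStack version, and the exact set of service_provider lines depends on which extension services you deploy.

```ini
[DEFAULT]
core_plugin = ml2
service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas
notification_drivers = message_queue,qos_h3c

[service_providers]
service_provider = FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default
service_provider = LOADBALANCER:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default
service_provider = VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default
```

The admin_user and admin_password values belong with the Keystone authentication settings configured during OpenStack deployment and are not repeated here.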

 

ml2_conf.ini

Parameter

Required value

Description

type_drivers

vxlan,vlan

Driver type.

vxlan must be specified as the first driver type.

tenant_network_types

vxlan,vlan

Type of the networks to which the tenants belong.

·     In the host overlay scenario and network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type.

·     In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type.

For intranet, only vxlan is available.

For extranet, only vlan is available.

mechanism_drivers

ml2_h3c

Name of the ml2 driver.

To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch, ml2_h3c.

To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch.

extension_drivers

ml2_extension_h3c,qos

Names of the ml2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify qos. If port security is not enabled on OpenStack, you do not need to specify port_security. (The Ocata 2017.1 plug-ins do not support the port_security value.)

Kilo 2015.1 plug-ins do not support the QoS driver.

network_vlan_ranges

N/A

Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999.

vni_ranges

N/A

Value range for the VXLAN ID of the intranet, for example, 1:500.

 

OPENSTACK_NEUTRON_NETWORK

Field

Description

enable_lb

Whether to enable or disable the LB configuration page.

·     True—Enable.

·     False—Disable.

enable_firewall

Whether to enable or disable the FW configuration page.

·     True—Enable.

·     False—Disable.

enable_vpn

Whether to enable or disable the VPN configuration page.

·     True—Enable.

·     False—Disable.

 

ml2_conf_h3c.ini

Parameter

Description

url

URL for logging in to SNA Center, for example, http://127.0.0.1:10080.

username

Username for logging in to SNA Center, for example, admin. You do not need to configure a username when the use_neutron_credential parameter is set to True.

password

Password for logging in to SNA Center, for example, admin@123. You do not need to configure a password when the use_neutron_credential parameter is set to True. To use character "$" in the password, enter a backslash (\) before the character.

domain

Name of the domain where the controller resides, for example, sdn.

timeout

The amount of time that the Neutron server waits for a response from the controller in seconds, for example, 1800 seconds.

As a best practice, set the waiting time to 1800 seconds or more.

retry

Maximum times for sending connection requests from the Neutron server to the controller, for example, 10.

vif_type

Default vNIC type:

·     ovs

·     vhostuser (applied to the OVS DPDK solution)

You can set the vhostuser_mode parameter when the value of this parameter is vhostuser.

Only the Pike plug-ins support this parameter.

vnic_type

Default vNIC type:

·     ovs

·     vhostuser

Only the plug-ins earlier than Ocata support this parameter.

vhostuser_mode

Default DPDK vHost-user mode:

·     server

·     client

The default value is server.

This setting takes effect only when the value of the vif_type parameter is vhostuser.

hybrid_vnic

Whether to enable or disable the feature of mapping OpenStack VLAN to SeerEngine-DC VXLAN.

·     True—Enable.

·     False—Disable.

ip_mac_binding

Whether to enable or disable IP-MAC binding.

·     True—Enable.

·     False—Disable.

denyflow_age

Anti-spoofing flow table aging time for the virtual distributed switch (VDS), an integer in the range of 1 to 3600 seconds, for example, 300 seconds.

white_list

Whether to enable or disable the authentication-free user feature on OpenStack.

·     True—Enable.

·     False—Disable.

auto_create_tenant_to_vcfc

Whether to enable or disable the feature of automatically creating tenants on the controller.

·     True—Enable.

·     False—Disable.

router_binding_public_vrf

Whether to use the public network VRF for creating a vRouter.

·     True—Use.

·     False—Do not use.

Do not set the value to True for a weak control network.

enable_subnet_dhcp

Whether to enable or disable DHCP for creating a vSubnet.

·     True—Enable.

·     False—Disable.

dhcp_lease_time

Valid time for vSubnet IP addresses obtained from the DHCP address pool in days, for example, 365 days.

firewall_type

Type of the firewalls created on the controller:

·     CGSR—Context-based gateway service type firewall, each using an independent context. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE—Context-based gateway service type firewall, all using the same context even if they belong to different tenants. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE_BY_COUNT—Context-based gateway service type firewall, all using the same context when the number of contexts reaches the threshold set by the cgsr_fw_context_limit parameter. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY. Only the Pike plug-ins support this firewall type.

·     NFV_CGSR—VNF-based gateway service type firewall, each using an independent VNF. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

fw_share_by_tenant

Whether to enable exclusive use of a gateway service type firewall context by a single tenant and allow the context to be shared by service resources of the tenant when the firewall type is CGSR_SHARE.

lb_type

Type of the load balancers created on the controller.

·     CGSR—Gateway service type load balancer on a context. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, CGSR type load balancers that belong to one tenant use the same context. CGSR type load balancers that belong to different tenants use different contexts. When the value of the lb_resource_mode parameter is MP, CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same context. CGSR type load balancers that belong to different tenants use different contexts.

·     CGSR_SHARE—Gateway service type load balancer on a context. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, all CGSR_SHARE type load balancers use the same context even if they belong to different tenants. When the value of the lb_resource_mode parameter is MP, CGSR_SHARE type load balancers that belong to different tenants and are bound to the same gateway use the same context.

·     NFV_CGSR—Gateway service type load balancer on a VNF. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, NFV_CGSR type load balancers that belong to one tenant use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs. When the value of the lb_resource_mode parameter is MP, NFV_CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs.

resource_mode

Type of the resources created on the controller.

·     CORE_GATEWAY—Gateway resources.

·     NFV—VNF resources. This value is obsolete.

resource_share_count

Number of resources that can share a resource node. The value is in the range of 1 to 65535. The default value is 1, indicating that a resource node cannot be shared.

auto_delete_tenant_to_vcfc

Whether to enable or disable the feature of automatically removing tenants from the controller.

·     True—Enable.

·     False—Disable.

auto_create_resource

Whether to enable or disable the feature of automatically creating resources.

·     True—Enable.

·     False—Disable.

nfv_ha

Whether to configure the NFV and NFV_SHARE resources to support stacking.

·     True—Support.

·     False—Do not support.

vds_name

Name of the VDS, for example, VDS1.

After deleting a VDS and recreating a VDS with the same name, you must perform the following tasks on the controller node for the new VDS to take effect:

·     Restart the neutron-server service.

·     Restart the h3c-agent service.

enable_metadata

Whether to enable or disable metadata for OpenStack.

·     True—Enable.

·     False—Disable.

If you enable this feature, you must set the enable_l3_router_rpc_notify parameter to True.

use_neutron_credential

Whether to use the OpenStack Neutron username and password to communicate with the controller.

·     True—Use.

·     False—Do not use.

enable_security_group

Whether to enable or disable the feature of deploying security group rules to the controller.

·     True—Enable.

·     False—Disable.

disable_internal_l3flow_offload

Whether to enable or disable intra-network traffic routing through the gateway.

·     True—Disable.

·     False—Enable.

firewall_force_audit

Whether to audit firewall policies synchronized to the controller by OpenStack. The default value is False for the Kilo 2015.1 plug-ins and True for plug-ins of other versions.

·     True—Audits firewall policies synchronized to the controller by OpenStack. The auditing state of the synchronized policies on the controller is True (audited).

·     False—Does not audit firewall policies synchronized to the controller by OpenStack. The synchronized policies on the controller retain their previous auditing state.

enable_l3_router_rpc_notify

Whether to enable or disable the feature of sending Layer 3 routing events through RPC.

·     True—Enable.

·     False—Disable.

output_json_log

Whether to output REST API messages to the OpenStack operating logs in JSON format for communication between the SeerEngine-DC Neutron plug-ins and the controller.

·     True—Enable.

·     False—Disable.

lb_enable_snat

Whether to enable or disable Source Network Address Translation (SNAT) for load balancers on the controller.

·     True—Enable.

·     False—Disable.

empty_rule_action

Set the action for security policies that do not contain any ACL rules on the controller.

·     permit

·     deny

enable_l3_vxlan

Whether to enable or disable the feature of using Layer 3 VXLAN IDs (L3VNIs) to mark Layer 3 flows between vRouters on the controller.

·     True—Enable.

·     False—Disable.

By default, this feature is disabled.

l3_vni_ranges

Set the value range for the L3VNI, for example, 10000:10100. If the controller interoperates with multiple OpenStack platforms, make sure the L3VNI value range for each OpenStack platform is unique.

vendor_rpc_topic

RPC topic of the vendor. This parameter is required when the vendor needs to obtain Neutron data from the SeerEngine-DC Neutron plug-ins. The available values are as follows:

·     VENDOR_PLUGIN—Default value, which means that the parameter does not take effect.

·     DP_PLUGIN—RPC topic of DPtech.

The value of this parameter must be negotiated by the vendor and H3C.

vsr_descriptor_name

VNF descriptor name of the VNF virtual gateway resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

vlb_descriptor_name

VNF descriptor name of the virtual load balancing resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the lb_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

vfw_descriptor_name

VNF descriptor name of the virtual firewall resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the firewall_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

hierarchical_port_binding_physicnets

Policy for OpenStack to select a physical VLAN when performing hierarchical port binding. The default value is ANY.

·     ANY—A VLAN is selected from all physical VLANs for VLAN ID assignment.

·     PREFIX—A VLAN is selected from all physical VLANs matching the specified prefix for VLAN ID assignment.

hierarchical_port_binding_physicnets_prefix

Prefix for matching physical VLANs. The default value is physicnet. This parameter is available only when you set the value of the hierarchical_port_binding_physicnets parameter to PREFIX.

Only the Ocata and Pike plug-ins support this parameter.

network_force_flat

Whether to enable forcible conversion of an external network to a flat network. The value can be set to True only if the external network is a VXLAN.

directly_external

Whether traffic to the external network is directly forwarded by the gateway. The available values are as follows:

·     ANY—Traffic to the external network is directly forwarded by the gateway to the external network.

·     OFF—Traffic to the external network is forwarded by the gateway to the firewall and then to the external network.

·     SUFFIX—Determine the forwarding method for the traffic to the external network by matching the traffic against the vRouter name suffix (set by the directly_external_suffix parameter).

¡     If the traffic matches the suffix, the traffic is directly forwarded by the gateway to the external network.

¡     If the traffic does not match the suffix, the traffic is forwarded by the gateway to the firewall and then to the external network.

The default value is OFF. You can set the value to ANY only when the external network is a VXLAN and the value of network_force_flat is False.

directly_external_suffix

vRouter name suffix (DMZ for example). This parameter is available only when you set the value of the directly_external parameter to SUFFIX.

generate_vrf_based_on_router_name

Whether to use the vRouter names configured on OpenStack as the VRF names on the controller.

·     True—Use the names. Make sure each vRouter name configured on OpenStack is a case-sensitive string of 1 to 31 characters that contain only letters and digits.

·     False—Not to use the names.

By default, the vRouter names configured on OpenStack are not used as the VRF names on the controller.

enable_dhcp_hierarchical_port_binding

Whether to enable DHCP hierarchical port binding. The default value is False.

·     True—Enable.

·     False—Disable.

Only the Pike, Mitaka, Newton, and Rocky plug-ins support this parameter.

enable_multi_segments

Whether to enable multiple outbound interfaces, allowing the vRouter to access the external network from multiple outbound interfaces. The default value is False.

To enable multiple outbound interfaces, configure the following settings:

·     Set the value of this parameter to True.

·     Set the value of the network_force_flat parameter to False.

·     Access the /etc/neutron/plugins/ml2/ml2_conf.ini file on the control node and specify the controller's gateway name for the network_vlan_ranges parameter.

Only the Pike plug-ins support this parameter.

enable_https

Whether to enable HTTPS bidirectional authentication. The default value is False.

·     True—Enable.

·     False—Disable.

Only the Pike plug-ins support this parameter.

neutron_plugin_ca_file

Save location for the CA certificate of the controller. As a best practice, save the CA certificate in the /usr/share/neutron directory.

Only the Pike plug-ins support this parameter.

neutron_plugin_cert_file

Save location for the Cert certificate of the controller. As a best practice, save the Cert certificate in the /usr/share/neutron directory.

Only the Pike plug-ins support this parameter.

neutron_plugin_key_file

Save location for the Key certificate of the controller. As a best practice, save the Key certificate in the /usr/share/neutron directory.

Only the Pike plug-ins support this parameter.

router_route_type

Route entry type:

·     None—Standard route.

·     401—Extended route with the IP address of an online vPort as the next hop.

·     402—Extended route with the IP address of an offline vPort as the next hop.

The default value is None.

Only the Pike plug-ins support this parameter.

enable_router_nat_without_firewall

Whether to enable NAT when no firewall is configured for the tenant.

·     True—Enable NAT when no firewall is configured. This setting automatically creates default firewall resources to implement NAT if the vRouter has been bound to an external network.

·     False—Not enable NAT when no firewall is configured.

The default value is False.

Only the Pike plug-ins support this parameter.

cgsr_fw_context_limit

Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context.

This parameter takes effect only when the value of the firewall_type parameter is CGSR_SHARE_BY_COUNT.

Only the Pike plug-ins support this parameter.
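For example, to make all context-based gateway service type firewalls share one context after a threshold is reached (the threshold 100 is an arbitrary placeholder; the section header is omitted):

```ini
firewall_type = CGSR_SHARE_BY_COUNT
cgsr_fw_context_limit = 100
```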

force_vip_port_device_owner_none

Whether to support the LB vport device_owner field.

·     False—Support the LB vport device_owner field. This setting is applicable to an LB tight coupling solution.

·     True—Do not support the LB vport device_owner field. This setting is applicable to an LB loose coupling solution.

The default value is False.

enable_multi_gateways

Whether to enable the multi-gateway mode for the tenant.

·     True—Enable the multi-gateway mode for the tenant. In an OpenStack environment without the Segments configuration, this setting enables different vRouters to access the external network over different gateways.

·     False—Not enable the multi-gateway mode for the tenant.

The default value is False.

Only the Pike, Queens, and Rocky plug-ins support this parameter.

tenant_gateway_name

Name of the gateway to which the tenant is bound. The default value is None.

When the value of the tenant_gw_selection_strategy parameter is match_gateway_name, you must specify the name of an existing gateway on the controller side.

Only the Pike and Rocky plug-ins support this parameter.

tenant_gw_selection_strategy

Gateway selection strategy for the tenant.

·     match_first—Select the first gateway.

·     match_gateway_name—Take effect together with the tenant_gateway_name parameter.

Only the Pike and Rocky plug-ins support this parameter.
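For example, to bind tenants to a specific gateway by name (gateway1 is a placeholder for a gateway that already exists on the controller; the section header is omitted):

```ini
tenant_gw_selection_strategy = match_gateway_name
tenant_gateway_name = gateway1
```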

enable_iam_auth

Whether to enable IAM interface authentication.

·     True—Enable.

·     False—Disable.

When connecting to SNA Center, you can set the value to True to use the IAM interface for authentication.

The default value is False.

Only the Mitaka and Newton plug-ins support this parameter.

enable_vcfc_rpc

Whether to enable RPC connection between the plug-ins and the controller in the DHCP fail-safe scenario.

The default value is False.

Only the Pike plug-ins support this parameter.

vcfc_rpc_url

RPC interface URL of the controller. Only a WebSocket type interface is supported.

The default value is ws://127.0.0.1:1080.

vcfc_rpc_ping_interval

Interval at which an RPC ping message is sent to the controller, in seconds.

The default value is 60 seconds.

websocket_fragment_size

Size of a WebSocket fragment sent from the plug-in to the controller in the DHCP fail-safe scenario, in bytes.

The value is an integer equal to or larger than 1024. The default value is 1024. If the value is 1024, the message is not fragmented.
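The four RPC-related parameters above might be set as follows in the DHCP fail-safe scenario. This is a sketch only: the URL and values shown are the documented defaults, and the section header is omitted.

```ini
enable_vcfc_rpc = True
vcfc_rpc_url = ws://127.0.0.1:1080
vcfc_rpc_ping_interval = 60
websocket_fragment_size = 1024
```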

lb_member_slow_shutdown

Whether to enable slow shutdown when creating an LB pool.

The default value is False.

enable_network_l3vni

Whether to issue the L3VNIs when creating an external network. This parameter is valid only when the value of the enable_l3_vxlan parameter is True.

The default value is False.

lb_resource_mode

Resource pool mode of LB service resources. When the value is SP, all gateways share one LB resource pool. When the value is MP, the system creates an LB resource pool for each gateway.

The default value is SP.

neutron_black_list

Neutron denylist. Only value flat is supported. No default value exists.

When the value is flat, the SDN ML2 plug-in does not issue flat-type internal network resources to the controller, and you cannot bind router interfaces to or unbind them from flat-type internal subnets.

Only the Pike plug-ins support this parameter.

enable_lb_xff

Whether to enable XFF transparent transmission for LB listeners.

·     True—Enable.

·     False—Disable.

The default value is False.

When the value is True and the listener protocol is HTTP or TERMINATED_HTTPS, a newly created listener has XFF transparent transmission enabled by default, and the client's IP address is transparently transmitted to the server in the X-Forwarded-For field of the HTTP header.

Only the Pike plug-ins support this parameter.

cloud_identity_mode

Whether to enable the multicloud function.

·     disable—Not carry the cloud_region_name field when sending a request to the controller.

·     region—Carry the cloud_region_name field when sending a request to the controller.

If multiple cloud platforms are connected to the controller, configure a different region name for each cloud platform.

·     custom—Carry the cloud_region_name field when sending a request to the controller. The value of the cloud_region_name field is that of the custom_cloud_name parameter.

The default value is disable.

Only the Newton, Queens, and Rocky plug-ins support this parameter.

custom_cloud_name

Cloud platform name. The default value is openstack-1. If multiple cloud platforms are connected to the controller, configure a different name for each cloud platform.

This parameter takes effect only when the value of the cloud_identity_mode parameter is custom.

Only the Newton, Queens, and Rocky plug-ins support this parameter.
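For example, to connect a cloud platform with a custom name (openstack-2 is a placeholder name; the section header is omitted):

```ini
cloud_identity_mode = custom
custom_cloud_name = openstack-2
```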

deploy_network_resource_gateway

Whether to carry the gateway_list field for external network resources.

The default value is False.

When the value of this field is True, you must set the value of the network_force_flat field to False.
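For example, to carry the gateway_list field for external network resources (note that network_force_flat must then be set to False; the section header is omitted):

```ini
deploy_network_resource_gateway = True
network_force_flat = False
```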

force_vlan_port_details_qvo

Whether to forcibly create a qvo-type vPort on the OVS bridge after a VM in a VLAN network comes online. If the value is True, the system forcibly creates a qvo-type vPort. If the value is False, the system automatically creates a tap-type or qvo-type vPort as configured. As a best practice, set the value to False for interoperability with the cloud platform for the first time.

Only the Mitaka, Newton, Pike, Queens, and Rocky plug-ins support this parameter.

enable_firewall_object_group

Whether to enable firewall object groups for the plug-ins.

The default value is False. If the value is True, the OpenStack platform can create firewall object groups through the plug-ins.

Only the Rocky plug-ins support this parameter.

For this feature to take effect, you must ensure its compatibility with the OpenStack platform. For the compatibility configuration, contact technical support.

 

Upgrading the SeerEngine-DC Neutron plug-ins

CAUTION

CAUTION:

·     Services might be interrupted during the SeerEngine-DC Neutron plug-ins upgrade procedure. Make sure you understand the impact of the upgrade before performing it on a live network.

·     The SeerEngine-DC Neutron plug-in configuration files of different versions use different default settings for some parameters. Manually modify parameter settings after an upgrade across versions to ensure configuration consistency.

 

To upgrade the SeerEngine-DC Neutron plug-ins without retaining the neutron_server and h3c_agent containers:

1.     Delete the neutron_server and h3c_agent containers and images that have the old version of the SeerEngine-DC Neutron plug-ins.

[root@controller ~]# neutron_server_image=$(docker ps --format {{.Image}} --filter name=neutron_server)

[root@controller ~]# h3c_agent_image=$(docker ps --format {{.Image}} --filter name=h3c_agent)

a.     If no docker-neutron-server.sh file exists in the system, execute the following command. If such a file exists, skip this step.

[root@controller ~]# runlike neutron_server>docker-neutron-server.sh

b.     Delete the neutron_server and h3c_agent containers and images.

[root@controller ~]# docker rm -f neutron_server

[root@controller ~]# docker rmi $neutron_server_image

[root@controller ~]# docker rm -f h3c_agent

[root@controller ~]# docker rmi $h3c_agent_image

c.     Restore the default containers and images.

[root@controller ~]# docker tag kolla/neutron-server-origin $neutron_server_image

[root@controller ~]# docker rmi kolla/neutron-server-origin

[root@controller ~]# source docker-neutron-server.sh

 

CAUTION

CAUTION:

To avoid container startup failure, restore the settings in the neutron.conf and ml2_conf.ini files to remove H3C plug-in settings before you restart the neutron_server container.

 

2.     Reinstall the new version of SeerEngine-DC Neutron plug-ins. For more information, see "Installing the SeerEngine-DC Neutron plug-ins."

To upgrade the SeerEngine-DC Neutron plug-ins with the neutron_server and h3c_agent containers retained:

1.     Upgrade the neutron_server container:

a.     Access the neutron_server container and uninstall the old version of SeerEngine-DC Neutron plugins.

[root@controller ~]# docker exec -it -u root neutron_server bash

(neutron-server) [root@controller ~]# h3c-vcfplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

(neutron-server) [root@controller ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine_DC_PLUGIN-D3601_ocata_2017.1:

/usr/bin/h3c-agent

/usr/bin/h3c-vcfplugin

For Train plug-ins (CentOS 8 operating system):

[root@controller ~]# docker exec -it -u root neutron_server bash

(neutron-server) [root@controller ~]# h3c-vcfplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

(neutron-server) [root@controller ~]# pip3 uninstall seerengine-dc-plugin

Uninstalling SeerEngine_DC_PLUGIN-D3601_ocata_2017.1:

/usr/bin/h3c-agent

/usr/bin/h3c-vcfplugin

b.     Install the new version of SeerEngine-DC Neutron plugins.

[root@controller ~]# docker cp SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg neutron_server:/

[root@controller ~]# docker exec -it -u root neutron_server bash

(neutron-server) [root@controller ~]# easy_install SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg

(neutron-server) [root@controller ~]# h3c-vcfplugin controller install

c.     Exit and restart the neutron_server container.

(neutron-server)[root@controller01 ~]# exit

[root@controller01 ~]# docker restart neutron_server

 

 

NOTE:

·     The system might display an error message when you execute the h3c-vcfplugin controller install/uninstall command. You can just ignore this message.

·     Make sure no neutron.conf file exists in the /root directory when you execute the h3c-vcfplugin controller install command. If such a file exists, delete it or move it to another directory.

 

2.     Upgrade the h3c_agent container:

a.     Access the h3c_agent container and uninstall the old version of SeerEngine-DC Neutron plugins.

[root@controller ~]# docker exec -it -u root h3c_agent bash

(h3c-agent) [root@controller ~]# h3c-vcfplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

(h3c-agent) [root@controller ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine_DC_PLUGIN-D3601_ocata_2017.1:

/usr/bin/h3c-agent

/usr/bin/h3c-vcfplugin

For Train plug-ins (CentOS 8 operating system):

[root@controller ~]# docker exec -it -u root h3c_agent bash

(h3c-agent) [root@controller ~]# h3c-vcfplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

(h3c-agent) [root@controller ~]# pip3 uninstall seerengine-dc-plugin

Uninstalling SeerEngine_DC_PLUGIN-D3601_ocata_2017.1:

/usr/bin/h3c-agent

/usr/bin/h3c-vcfplugin

b.     Install the new version of SeerEngine-DC Neutron plugins.

[root@controller ~]# docker cp SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg h3c_agent:/

[root@controller ~]# docker exec -it -u root h3c_agent bash

(h3c-agent) [root@controller ~]# easy_install SeerEngine_DC_PLUGIN-D3601_ocata_2017.1-py2.7.egg

(h3c-agent) [root@controller ~]# h3c-vcfplugin controller install

c.     Exit and restart the h3c_agent container.

(h3c-agent)[root@controller01 ~]# exit

[root@controller01 ~]# docker restart h3c_agent

 

 

NOTE:

·     The system might display an error message when you execute the h3c-vcfplugin controller install/uninstall command. You can just ignore this message.

·     Make sure no neutron.conf file exists in the /root directory when you execute the h3c-vcfplugin controller install command. If such a file exists, delete it or move it to another directory.

 

3.     To avoid repeated deployment of firewall configuration after installation or upgrade of plugins on multiple nodes, make sure the h3c-agent service is available only for one of the nodes. To stop the h3c-agent service for the other nodes, perform the following steps on each of these nodes:

a.     Execute the h3c_agent=$(docker ps --format {{.ID}} --filter name=h3c_agent) command to view the h3c-agent service status.

b.     Execute the docker stop $h3c_agent command to stop the h3c-agent service of the node.


(Optional.) Configuring the metadata service for network nodes

OpenStack allows VMs to obtain metadata from network nodes through DHCP or an L3 gateway. H3C supports only the DHCP method. To configure the metadata service for network nodes:

1.     Download the OpenStack installation guide from the OpenStack official website and follow the installation guide to configure the metadata service for the network nodes.

2.     Configure the network nodes to provide metadata service through DHCP.

a.     Use the vi editor to open configuration file dhcp_agent.ini.

[root@network ~]# vi /etc/kolla/neutron-dhcp-agent/dhcp_agent.ini

b.     Press I to switch to insert mode, and modify configuration file dhcp_agent.ini as follows:

[DEFAULT]

force_metadata = True

Set the value to True for the force_metadata parameter to force the network nodes to provide metadata service through DHCP.

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the dhcp_agent.ini configuration file.

3.     Restart the dhcp-agent container.

[root@network ~]# docker restart neutron_dhcp_agent


FAQ

The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?

Configure HTTP proxy by performing the following steps:

1.     Make sure the server or the virtual machine can access the HTTP proxy server.

2.     At the CLI of the CentOS system, use the vi editor to open the yum.conf configuration file. If the yum.conf configuration file does not exist, this step creates the file.

[root@controller01 ~]# vi /etc/yum.conf

3.     Press I to switch to insert mode, and provide HTTP proxy information as follows:

¡     If the server does not require authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport

¡     If the server requires authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport
proxy_username=username
proxy_password=password

Table 4 describes the arguments in HTTP proxy information.

Table 4 Arguments in HTTP proxy information

Field

Description

username

Username for logging in to the proxy server, for example, sdn.

password

Password for logging in to the proxy server, for example, 123456.

yourproxyaddress

IP address of the proxy server, for example, 172.25.1.1.

proxyport

Port number of the proxy server, for example, 8080.

 

For example:

proxy = http://172.25.1.1:8080

proxy_username = sdn

proxy_password = 123456

4.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the yum.conf file.

After the plug-ins are installed successfully, what should I do if the controller fails to interconnect with the cloud platform?

Follow these steps to resolve the interconnection failure with the cloud platform:

1.     Make sure you have strictly followed the procedure in this document to install and configure the plug-ins.

2.     Contact the cloud platform vendor to determine whether a configuration issue exists on the cloud platform side.

3.     If the issue persists, contact after-sales engineers.

After the tap-service and tap-flow data is updated on OpenStack, the image destination template settings of the controller fail to be synchronized automatically. What should I do?

This is an open source issue. You can only modify the image destination template on the controller manually for synchronization.

The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

Use the following procedure to resolve the issue. An enp61s0f3 Ethernet network adapter is used as an example.

1.     View and record system kernel information.

[root@controller01 ~]# uname -r

3.10.0-957.1.3.el7.x86_64

2.     View detailed information about the Ethernet network adapter and record the values for the firmware-version and bus-info fields.

[root@controller01 ~]# ethtool -i enp61s0f3

driver: i40e

version: 2.8.20-k

firmware-version: 3.33 0x80000f0c 1.1767.0

expansion-rom-version:

bus-info: 0000:3d:00.3

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

3.     Use one of the following solutions, depending on the kernel version and network adapter firmware version:

¡     The kernel version is higher than kernel-3.10.0-957.el7 and the network adapter firmware version is 4 or higher.

# Execute the following command:

[root@controller01 ~]# ethtool --set-priv-flags enp61s0f3 disable-fw-lldp on

# Identify whether the value for the disable-fw-lldp field is on.

[root@controller01 ~]# ethtool --show-priv-flags enp61s0f3 | grep lldp

disable-fw-lldp       : on

If the value is on, the network adapter can receive LLDP messages. For this command to remain effective after a system restart, you must write it into the user-defined startup program file.

# Open the user-defined startup program file.

[root@controller01 ~]# vi /etc/rc.d/rc.local

# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

ethtool --set-priv-flags enp61s0f3 disable-fw-lldp on

# Configure the file to be executable.

[root@controller01 ~]# chmod 755 /etc/rc.d/rc.local

¡     The kernel version is lower than kernel-3.10.0-957.el7, or the network adapter firmware version is lower than 4.

# Execute the echo "lldp stop" > /sys/kernel/debug/i40e/bus-info/command command. Enter the recorded bus info value for the network adapter, and add a backslash (\) before each ":".

[root@controller01 ~]# echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

The network adapter can receive LLDP messages after this command is executed. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.

# Open the user-defined startup program file.

[root@controller01 ~]# vi /etc/rc.d/rc.local

# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

# Configure the file to be executable.

[root@controller01 ~]# chmod 755 /etc/rc.d/rc.local

How do I install the Nova and openvswitch-agent patches, and in which scenarios are they applied?

To install the Nova or openvswitch-agent patch, see "Installing the SeerEngine-DC Neutron plug-ins" and "Upgrading the SeerEngine-DC Neutron plug-ins" and the following documents, depending on the operating system you use:

·     H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for CentOS

·     H3C SeerEngine-DC Controller Converged OpenStack Plug-Ins Installation Guide for Ubuntu

You must install the Nova patch only in the following scenarios:

·     In a KVM host overlay or network overlay scenario where virtual machines are load balancer members and the load balancer must be aware of the member status.

·     In a vCenter network overlay scenario.

The open source openvswitch-agent process on an OpenStack compute node might fail to deploy VLAN flow tables to open source vSwitches when the following conditions exist:

·     The kernel-based virtual machine (KVM) technology is used on the node.

·     The hierarchical port binding feature is configured on the node.

To resolve this issue, you must install the openvswitch-agent patch.
