H3C SeerEngine-DC Controller OpenStack Plug-Ins Installation Guide for CentOS-E61xx-5W105


Contents

Overview
SeerEngine-DC Neutron plug-ins
Nova patch
Openvswitch-agent patch
DHCP failover components
DHCP component
Metadata component
Preparing for installation
Hardware requirements
Software requirements
Restrictions and guidelines
Installing OpenStack plug-ins
Installing the Python tools
Installing the SeerEngine-DC Neutron plug-ins
Obtaining the SeerEngine-DC Neutron plug-in installation package
Installing the SeerEngine-DC Neutron plug-ins
Verifying the installation
Parameters and fields
Removing the SeerEngine-DC Neutron plug-ins
Upgrading the SeerEngine-DC Neutron plug-ins
Installing the lldpad service
Installing the Nova patch
Prerequisites
Installation procedure
Verifying the installation
Removing the Nova patch
Upgrading the Nova patch
Installing the openvswitch-agent patch
Prerequisites
Installation procedure
Verifying the installation
Removing the openvswitch-agent patch
Upgrading the openvswitch-agent patch
Installing/removing/upgrading DHCP failover components
Installing basic components
Obtaining the installation package of the DHCP failover components
Installing DHCP failover components on the network node
Removing DHCP failover components
Upgrading DHCP failover components
Parameters and fields
Configuring the open-source metadata service for network nodes
Comparing and synchronizing resource information between the controller and cloud platform
FAQ
The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?
The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

 


Overview

This document describes how to install the OpenStack plug-ins, including the SeerEngine-DC Neutron plug-ins, the Nova patch, the openvswitch-agent patch, and the DHCP failover components, on CentOS.

SeerEngine-DC Neutron plug-ins

Neutron is an OpenStack service that manages all virtual networking infrastructures (VNIs) in an OpenStack environment. It provides virtual network services to the devices managed by the OpenStack compute service.

The SeerEngine-DC Neutron plug-ins are developed for the SeerEngine-DC controller based on the OpenStack framework. They obtain network configuration from OpenStack through REST APIs and synchronize it to the SeerEngine-DC controllers, including settings for the tenants' networks, subnets, routers, ports, firewalls, load balancers, and VPNs. The different SeerEngine-DC Neutron plug-ins provide the following features for tenants:

·     SeerEngine-DC Neutron Core plug-in—Allows tenants to use basic core networking features, including networks, subnets, routers, and ports.

·     SeerEngine-DC Neutron L3_Routing plug-in—Allows tenants to forward traffic to each other at Layer 3.

·     SeerEngine-DC Neutron FWaaS plug-in—Allows tenants to create firewall services.

·     SeerEngine-DC Neutron LBaaS plug-in—Allows tenants to create LB services.

·     SeerEngine-DC Neutron VPNaaS plug-in—Allows tenants to create VPN services.

 


CAUTION:

To avoid service interruptions, after the plug-ins connect to the OpenStack cloud platform, do not modify on the controller the settings issued by the cloud platform, such as the virtual link layer network, vRouter, and vSubnet settings.

 

Nova patch

Nova is the OpenStack compute controller that provides virtualization services for users. These services include creating, starting up, shutting down, and migrating virtual machines, and setting virtual machine configuration such as CPU and memory.

In specific scenarios (such as a vCenter network overlay scenario), you must install the Nova patch to enable virtual machines created by OpenStack to access networks managed by SeerEngine-DC controllers.

Openvswitch-agent patch

The open source openvswitch-agent process on an OpenStack compute node might fail to deploy VLAN flow tables to open source vSwitches when the following conditions exist:

·     The kernel-based virtual machine (KVM) technology is used on the node.

·     The hierarchical port binding feature is configured on the node.

To resolve this issue, you must install the openvswitch-agent patch.

DHCP failover components

DHCP component

In the network-based overlay scenario, only the controller can currently act as a DHCP server to assign addresses to virtual machines or bare metal servers. When the controller is disconnected from the southbound network, the virtual machines or bare metal servers cannot renew or reobtain addresses through DHCP. To resolve this issue, you can install a DHCP component on a network node to provide DHCP failover in the network-based overlay scenario. When the controller loses connection to the southbound network, the virtual machines or bare metal servers can renew and reobtain addresses through the independently deployed DHCP server.

Metadata component

In the DHCP failover scenario, you must install a Metadata component on the network node to provide the Metadata function for the DHCP component.


Preparing for installation

Hardware requirements

Table 1 shows the hardware requirements for installing the SeerEngine-DC Neutron plug-ins, Nova patch, or openvswitch-agent patch on a server or virtual machine.

Table 1 Hardware requirements

CPU: Single-core or multicore CPUs

Memory size: 2 GB or above

Disk space: 5 GB or above

 

Software requirements

You can install the SeerEngine-DC Neutron plug-ins or Nova patch on OpenStack Pike that is deployed on CentOS 7.2.1511 with YUM.
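You can verify the operating system release on each node before you start. This check is optional and uses a standard CentOS command:

[root@localhost ~]# cat /etc/centos-release

The output should indicate CentOS Linux release 7.2.1511.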

 


IMPORTANT:

To install the OpenStack plug-ins, the dnsmasq version must be 2.76. You can use the dnsmasq -v command to display the dnsmasq version number.

 


IMPORTANT:

Before you install the OpenStack plug-ins, make sure the following requirements are met:

·     Your system has a reliable Internet connection.

·     OpenStack has been deployed correctly. Verify that the /etc/hosts file on all nodes contains the host name-to-IP address mappings (see the example below), and that the OpenStack Neutron extension services (Neutron FWaaS, Neutron VPNaaS, or Neutron LBaaS) have been deployed. For the deployment procedure, see the installation guide for the specific OpenStack version on the OpenStack official website.
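The /etc/hosts entries might look like the following. The IP addresses and host names are examples only; use the actual addresses and host names of your nodes:

192.168.1.10 controller

192.168.1.11 compute1

192.168.1.12 network1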

 

 

NOTE:

For the installation of the converged version of the SeerEngine_DC plug-ins (SeerEngine_DC_PLUGIN-version-py2.7.egg or SeerEngine_DC_PLUGIN-version.noarch.rpm), see H3C SeerEngine-DC OpenStack Converged Plug-Ins Installation Guide.

 


Restrictions and guidelines

This document describes interoperability of SeerEngine-DC with one OpenStack platform that contains one controller node. For other scenarios, follow these restrictions and guidelines:

·     SeerEngine-DC interoperates with one OpenStack platform that contains multiple controller nodes.

Configure all controller nodes on the OpenStack platform in the same way a single controller is configured, and make sure the configuration on all controller nodes is the same.

·     SeerEngine-DC interoperates with multiple OpenStack platforms.

Make sure the cloud platform name (cloud_region_name) and VXLAN VNI for each OpenStack platform, and the host name for each node are unique across the OpenStack platforms.

If OpenStack is deployed with YUM on CentOS, the SeerEngine-DC Neutron plug-ins and Nova patch can be installed on OpenStack Pike (CentOS 7.2.1511).


Installing OpenStack plug-ins

Install the SeerEngine-DC Neutron plug-ins on an OpenStack controller node, the Nova patch and openvswitch-agent patch on an OpenStack compute node, and the DHCP failover components on a network node. Before installation, you must install the Python tools on the associated node.

Installing the Python tools

Before installing the plug-ins, you must first download the Python tools online and install them.

To download and install the Python tools:

1.     Update the software source list.

[root@localhost ~]# yum clean all

[root@localhost ~]# yum makecache

2.     Download and install the Python tools.

[root@localhost ~]# yum install -y python-pip python-setuptools

Installing the SeerEngine-DC Neutron plug-ins

Obtaining the SeerEngine-DC Neutron plug-in installation package

The SeerEngine-DC Neutron plug-ins are included in the SeerEngine-DC OpenStack package. Obtain the SeerEngine-DC OpenStack package of the required version and then save the package to the target installation directory on the server or virtual machine.

Alternatively, transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.
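For example, you can use SCP to copy the package from a local workstation to the /root directory on the controller node. The file name and destination below are examples only; replace them with the package version you obtained and the address of your node:

scp SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg root@controller-node-ip:/root/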

Installing the SeerEngine-DC Neutron plug-ins


CAUTION:

The QoS feature will not operate correctly if you configure the database connection in the neutron.conf configuration file as follows:

[database]

connection = mysql://…

This is an open source bug in OpenStack. To prevent this problem, configure the database connection as follows:

[database]

connection = mysql+pymysql://…

The three dots (…) in the command line represent the Neutron database connection information.

 

Some parameters must be configured with the required values as described in "Parameters and fields."

To install the SeerEngine-DC Neutron plug-ins:

1.     Access the directory where the SeerEngine-DC OpenStack package (an .egg or .rpm file) is saved, and install the package on the OpenStack controller node. The name of the SeerEngine-DC OpenStack package is SeerEngine_DC_PLUGIN-version1_pike_2017.10-py2.7.egg or SeerEngine_DC_PLUGIN-version1_pike_2017.10-1.noarch.rpm. version1 represents the version of the package.

In the following example, the SeerEngine-DC OpenStack package is in the /root directory.

¡     .egg file

[root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

¡     .rpm file

[root@localhost ~]# rpm -ivh SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-1.noarch.rpm

2.     Change the user group and permissions of the plug-in file to be consistent with those of the Neutron file.

[root@localhost ~]# cd /usr/lib/python2.7/site-packages

[root@localhost ~]# chown -R --reference=neutron SeerEngine*

[root@localhost ~]# chmod -R --reference=neutron SeerEngine*

[root@localhost ~]# cd /usr/bin

[root@localhost ~]# chown -R --reference=neutron-server h3c*

[root@localhost ~]# chmod -R --reference=neutron-server h3c*

3.     Install the SeerEngine-DC Neutron plug-ins.

[root@localhost ~]# h3c-sdnplugin controller install

 


CAUTION:

Before executing the h3c-sdnplugin controller install command, make sure no neutron.conf file exists in the /root directory. If such a file exists, delete it or move it to another location.
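For example, you can check for and move aside a stray neutron.conf file before running the installation command. The .bak file name is only an example:

[root@localhost ~]# ls /root/neutron.conf

[root@localhost ~]# mv /root/neutron.conf /root/neutron.conf.bak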

 

4.     Modify the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@localhost ~]# vi /etc/neutron/neutron.conf

b.     Press I to switch to insert mode, and modify the configuration file. For information about the parameters, see "neutron.conf."

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_l3_router,firewall,lbaasv2,vpnaas,qos,h3c_vpc_connection

[service_providers]

service_provider=FIREWALL:H3C:networking_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:networking_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

service_provider=VPC_CONNECTION:H3C:networking_h3c.vpc_connection.h3c_vpc_connection_driver_match_plugin.H3CVpcConnectionMatchPluginDriver:default

 


IMPORTANT:

·     When the load balancer supports multiple resource pools of the Context type, you must preprovision a resource pool named dmz or core on the controller, and then change the value of the service_provider parameter to LOADBALANCERV2:DMZ:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDMZDriver:default or LOADBALANCERV2:CORE:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginCOREDriver:default accordingly (see the example after this note).

·     If you set the value for vRouter interconnection to vpc_connection when configuring the service_plugins parameter, you must set the value of the corresponding service_provider parameter to VPC_CONNECTION:H3C:networking_h3c.l3_router.h3c_vpc_connection_driver.H3CVpcConnectionDriver:default.
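For example, if you preprovision a resource pool named dmz on the controller, the load balancer service provider line in neutron.conf might be changed as follows, using the value listed in the note above:

[service_providers]

service_provider=LOADBALANCERV2:DMZ:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDMZDriver:default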

 

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.

5.     Modify the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@localhost ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

b.     Press I to switch to insert mode, and set the parameters in the ml2_conf.ini configuration file. For information about the parameters, see "ml2_conf.ini."

[ml2]

type_drivers = vxlan,vlan

tenant_network_types = vxlan,vlan

mechanism_drivers = ml2_h3c

extension_drivers = ml2_extension_h3c,qos

[ml2_type_vlan]

network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]

vni_ranges = 1:500

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.

6.     Modify the local_settings configuration file.

a.     Use the vi editor to open the local_settings configuration file.

[root@localhost ~]# vi /etc/openstack-dashboard/local_settings

b.     Press I to switch to insert mode. Set the values for the LB, FW, and VPN fields in the OPENSTACK_NEUTRON_NETWORK parameter to enable the associated configuration pages in OpenStack Web. For information about the fields, see "OPENSTACK_NEUTRON_NETWORK."

OPENSTACK_NEUTRON_NETWORK = {

    'enable_lb': True,

    'enable_firewall': True,

    'enable_quotas': True,

    'enable_vpn': True,

    # The profile_support option is used to detect if an external router can be

    # configured via the dashboard. When using specific plugins the

    # profile_support can be turned on if needed.

    'profile_support': None,

    #'profile_support': 'cisco',

}

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the local_settings file.

7.     Modify the ml2_conf_h3c.ini configuration file.

a.     Use the vi editor to open the ml2_conf_h3c.ini configuration file.

[root@localhost ~]# vi /etc/neutron/plugins/ml2/ml2_conf_h3c.ini

b.     Press I to switch to insert mode, and set the following parameters in the ml2_conf_h3c.ini configuration file. For information about the parameters, see "ml2_conf_h3c.ini."

[SDNCONTROLLER]

url = http://127.0.0.1:30000

username = admin

password = admin@123

domain = sdn

timeout = 1800

retry = 10

vif_type = ovs

vhostuser_mode = server

hybrid_vnic = True

ip_mac_binding = True

denyflow_age = 300

white_list = False

auto_create_tenant_to_sdnc = True

router_binding_public_vrf = False

enable_subnet_dhcp = True

dhcp_lease_time = 365

firewall_type = CGSR

fw_share_by_tenant = False

lb_type = CGSR

resource_mode = CORE_GATEWAY

resource_share_count = 1

auto_delete_tenant_to_sdnc = True

auto_create_resource = True

nfv_ha = True

vds_name = VDS1

enable_metadata = False

use_neutron_credential = False

enable_security_group = True

disable_internal_l3flow_offload = False

firewall_force_audit = True

enable_l3_router_rpc_notify = False

output_json_log = False

lb_enable_snat = False

empty_rule_action = deny

enable_l3_vxlan = False

l3_vni_ranges = 10000:10100

vendor_rpc_topic = VENDOR_PLUGIN

vsr_descriptor_name = VSR_IRF

vlb_descriptor_name = VLB_IRF

vfw_descriptor_name = VFW_IRF

hierarchical_port_binding_physicnets = ANY

hierarchical_port_binding_physicnets_prefix = physicnet

network_force_flat = True

directly_external = OFF

directly_external_suffix = DMZ

generate_vrf_based_on_router_name = False

enable_dhcp_hierarchical_port_binding = False

enable_multi_segments = False

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

router_route_type = None

enable_router_nat_without_firewall = False

cgsr_fw_context_limit = 10

force_vip_port_device_owner_none = False

custom_cloud_name = openstack-1

enable_multi_gateways = False

tenant_gateway_name = None

tenant_gw_selection_strategy = match_first

enable_iam_auth = True

enable_firewall_metadata = False

enable_sdnc_rpc = True

sdnc_rpc_url = ws://99.0.82.55:8080

sdnc_rpc_ping_interval = 60

enable_binding_gateway_with_tenant = False

websocket_fragment_size = 102400

lb_member_slow_shutdown = False

enable_network_l3vni = False

neutron_black_list =

force_vlan_port_details_qvo = True

lb_resource_mode = SP

enable_lb_certchain = True

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf_h3c.ini file.

8.     If you have set the white_list parameter to True, add an authentication-free user to the controller by entering the IP address of the host where the Neutron server resides and specifying the role as Admin.

9.     If you have set the use_neutron_credential parameter to True, perform the following steps:

a.     Modify the neutron.conf configuration file.

# Use the vi editor to open the neutron.conf configuration file.

# Press I to switch to insert mode, and add the following configuration. For information about the parameters, see "neutron.conf."

[keystone_authtoken]

admin_user = neutron

admin_password = 123456

# Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.

b.     Add an admin user to the controller.

# Configure the username as neutron.

# Specify the role as Admin.

# Enter the password of the neutron user in OpenStack.

10.     Restart the neutron-server service.

[root@localhost ~]# service neutron-server restart

neutron-server stop/waiting

neutron-server start/running, process 4583

11.     Restart the h3c-agent service.

[root@localhost ~]# service h3c-agent restart

h3c-agent stop/waiting

h3c-agent start/running, process 4678

Verifying the installation

# Verify that the SeerEngine-DC OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

·     .egg file

[root@localhost ~]# pip freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10

·     .rpm file

[root@localhost ~]# rpm -qa | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10.noarch

# Verify that the neutron-server service is enabled. The service is enabled if its state is running.

[root@localhost ~]# service neutron-server status

neutron-server start/running, process 1849

# Verify that the h3c-agent service is enabled. The service is enabled if its state is running.

[root@localhost ~]# service h3c-agent status

h3c-agent start/running, process 4678

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

neutron.conf

core_plugin
Required value: ml2
Description: Used for loading the core plug-in ml2 to OpenStack.

service_plugins
Required value: h3c_sdnplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas
Description: Used for loading the extension plug-ins to OpenStack.

service_provider
Required values:
·     FIREWALL:H3C:h3c_sdnplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default
·     LOADBALANCER:H3C:h3c_sdnplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default
·     VPN:H3C:h3c_sdnplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default
Description: Directory where the extension plug-ins are saved.

notification_drivers
Required value: message_queue,qos_h3c
Description: Name of the QoS notification driver.

admin_user
Required value: N/A
Description: Admin username for Keystone authentication in OpenStack, for example, neutron.

admin_password
Required value: N/A
Description: Admin password for Keystone authentication in OpenStack, for example, 123456.

 

ml2_conf.ini

type_drivers
Required value: vxlan,vlan
Description: Driver type. vxlan must be specified as the first driver type.

tenant_network_types
Required value: vxlan,vlan
Description: Type of the networks to which the tenants belong.
·     In the host overlay scenario and the network overlay with hierarchical port binding scenario, vxlan must be specified as the first network type.
·     In the network overlay without hierarchical port binding scenario, vlan must be specified as the first network type.
For the intranet, only vxlan is available. For the extranet, only vlan is available.

mechanism_drivers
Required value: ml2_h3c
Description: Name of the ml2 driver.
To create SR-IOV instances for VLAN networks, set this parameter to sriovnicswitch, ml2_h3c.
To create hierarchy-supported instances, set this parameter to ml2_h3c,openvswitch.

extension_drivers
Required value: ml2_extension_h3c,qos
Description: Names of the ml2 extension drivers. Available names include ml2_extension_h3c, qos, and port_security. If the QoS feature is not enabled on OpenStack, you do not need to specify the qos value for this parameter. If port security is not enabled on OpenStack, you do not need to specify the port_security value for this parameter.

network_vlan_ranges
Required value: N/A
Description: Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999.

vni_ranges
Required value: N/A
Description: Value range for the VXLAN ID of the intranet, for example, 1:500.

 

OPENSTACK_NEUTRON_NETWORK

Field

Description

enable_lb

Whether to enable or disable the LB configuration page.

·     True—Enable.

·     False—Disable.

enable_firewall

Whether to enable or disable the FW configuration page.

·     True—Enable.

·     False—Disable.

enable_vpn

Whether to enable or disable the VPN configuration page.

·     True—Enable.

·     False—Disable.

 

ml2_conf_h3c.ini

Parameter

Description

url

URL for logging in to Unified Platform, in the format http://ip_address:30000.

username

Username for logging in to Unified Platform, for example, admin. You do not need to configure a username when the use_neutron_credential parameter is set to True.

password

Password for logging in to Unified Platform, for example, admin@123. You do not need to configure a password when the use_neutron_credential parameter is set to True. If the password contains a dollar sign ($), enter a backslash (\) before the dollar sign.

domain

Name of the domain where the controller resides, for example, sdn.

timeout

The amount of time, in seconds, that the Neutron server waits for a response from the controller, for example, 1800 seconds.

As a best practice, set the waiting time to 1800 seconds or more.

retry

Maximum number of connection requests that the Neutron server sends to the controller, for example, 10.

vif_type

Default vNIC type:

·     ovs

·     vhostuser (applied to the OVS DPDK solution)

You can set the vhostuser_mode parameter when the value of this parameter is vhostuser.

vhostuser_mode

Default DPDK vHost-user mode:

·     server

·     client

The default value is server.

This setting takes effect only when the value of the vif_type parameter is vhostuser.

hybrid_vnic

Whether to enable or disable the feature of mapping OpenStack VLAN to SeerEngine-DC VXLAN.

·     True—Enable.

·     False—Disable.

ip_mac_binding

Whether to enable or disable IP-MAC binding.

·     True—Enable.

·     False—Disable.

denyflow_age

Anti-spoofing flow table aging time for the virtual distributed switch (VDS), an integer in the range of 1 to 3600 seconds, for example, 300 seconds.

white_list

Whether to enable or disable the authentication-free user feature on OpenStack.

·     True—Enable.

·     False—Disable.

auto_create_tenant_to_sdnc

Whether to enable or disable the feature of automatically creating tenants on the controller.

·     True—Enable.

·     False—Disable.

router_binding_public_vrf

Whether to use the public network VRF for creating a vRouter.

·     True—Use.

·     False—Do not use.

Do not set the value to True for a weak control network.

enable_subnet_dhcp

Whether to enable or disable DHCP for creating a vSubnet.

·     True—Enable.

·     False—Disable.

dhcp_lease_time

Lease time, in days, for vSubnet IP addresses obtained from the DHCP address pool, for example, 365 days.

firewall_type

Type of the firewalls created on the controller:

·     CGSR—Context-based gateway service type firewall, each using an independent context. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE—Context-based gateway service type firewall, all using the same context even if they belong to different tenants. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     CGSR_SHARE_BY_COUNT—Context-based gateway service type firewall, all using the same context when the number of contexts reaches the threshold set by the cgsr_fw_context_limit parameter. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

·     NFV_CGSR—VNF-based gateway service type firewall, each using an independent VNF. This firewall type is available only when the value of the resource_mode parameter is CORE_GATEWAY.

fw_share_by_tenant

Whether to dedicate a gateway service type firewall context to a single tenant and share that context among the tenant's service resources. This parameter takes effect only when the firewall type is CGSR_SHARE.

lb_type

Type of the load balancers created on the controller.

·     CGSR—Gateway service type load balancer on a context. This type of load balancer is available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, CGSR type load balancers that belong to one tenant use the same context. CGSR type load balancers that belong to different tenants use different contexts. When the value of the lb_resource_mode parameter is MP, CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same context. CGSR type load balancers that belong to different tenants use different contexts.

·     CGSR_SHARE—Gateway service type load balancer on a context. This type of load balancer is available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, all CGSR_SHARE type load balancers use the same context even if they belong to different tenants. When the value of the lb_resource_mode parameter is MP, CGSR_SHARE type load balancers that belong to different tenants and are bound to the same gateway use the same context.

·     NFV_CGSR—Gateway service type load balancer on a VNF. This type of load balancers are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. When the value of the lb_resource_mode parameter is SP, NFV_CGSR type load balancers that belong to one tenant use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs. When the value of the lb_resource_mode parameter is MP, NFV_CGSR type load balancers that belong to one tenant and are bound to the same gateway use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs.

resource_mode

Type of the resources created on the controller.

·     CORE_GATEWAY—Gateway resources.

·     NFV—VNF resources. This value is obsolete.

resource_share_count

Number of resources that can share a resource node. The value is in the range of 1 to 65535. The default value is 1, indicating that a resource node is not shared.

auto_delete_tenant_to_sdnc

Whether to enable or disable the feature of automatically removing tenants from the controller.

·     True—Enable.

·     False—Disable.

auto_create_resource

Whether to enable or disable the feature of automatically creating resources.

·     True—Enable.

·     False—Disable.

nfv_ha

Whether to configure the NFV and NFV_SHARE resources to support stacking.

·     True—Support.

·     False—Do not support.

vds_name

Name of the VDS, for example, VDS1.

After deleting a VDS and recreating a VDS with the same name, you must perform the following tasks on the controller node for the new VDS to take effect:

·     Restart the neutron-server service.

·     Restart the h3c-agent service.

enable_metadata

Whether to enable or disable metadata for OpenStack.

·     True—Enable.

·     False—Disable.

use_neutron_credential

Whether to use the OpenStack Neutron username and password to communicate with the controller.

·     True—Use.

·     False—Do not use.

enable_security_group

Whether to enable or disable the feature of deploying security group rules to the controller.

·     True—Enable.

·     False—Disable.

disable_internal_l3flow_offload

Whether to enable or disable intra-network traffic routing through the gateway.

·     True—Disable.

·     False—Enable.

firewall_force_audit

Whether to audit firewall policies synchronized to the controller by OpenStack. The default value is False.

·     True—Audits firewall policies synchronized to the controller by OpenStack. The auditing state of the synchronized policies on the controller is True (audited).

·     False—Does not audit firewall policies synchronized to the controller by OpenStack. The synchronized policies on the controller retain their previous auditing state.

enable_l3_router_rpc_notify

Whether to enable or disable the feature of sending Layer 3 routing events through RPC.

·     True—Enable.

·     False—Disable.

This parameter does not take effect in the current software version.

output_json_log

Whether to output REST API messages to the OpenStack operating logs in JSON format for communication between the SeerEngine-DC Neutron plug-ins and the controller.

·     True—Enable.

·     False—Disable.

lb_enable_snat

Whether to enable or disable Source Network Address Translation (SNAT) for load balancers on the controller.

·     True—Enable.

·     False—Disable.

As a best practice, set the parameter value to False if you deploy the plug-in on CloudOS.

empty_rule_action

Set the action for security policies that do not contain any ACL rules on the controller.

·     permit

·     deny

enable_l3_vxlan

Whether to enable or disable the feature of using Layer 3 VXLAN IDs (L3VNIs) to mark Layer 3 flows between vRouters on the controller.

·     True—Enable.

·     False—Disable.

By default, this feature is disabled.

l3_vni_ranges

Set the value range for the L3VNI, for example, 10000:10100. If the controller interoperates with multiple OpenStack platforms, make sure the L3VNI value range for each OpenStack platform is unique.

vendor_rpc_topic

RPC topic of the vendor. This parameter is required when the vendor needs to obtain Neutron data from the SeerEngine-DC Neutron plug-ins. The available values are as follows:

·     VENDOR_PLUGIN—Default value, which means that the parameter does not take effect.

·     DP_PLUGIN—RPC topic of DPtech.

The value of this parameter must be negotiated by the vendor and H3C.

vsr_descriptor_name

VNF descriptor name of the VNF virtual gateway resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

vlb_descriptor_name

VNF descriptor name of the virtual load balancing resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the lb_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

vfw_descriptor_name

VNF descriptor name of the virtual firewall resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the firewall_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the controller.

hierarchical_port_binding_physicnets

Policy for OpenStack to select a physical VLAN when performing hierarchical port binding. The default value is ANY.

·     ANY—A VLAN is selected from all physical VLANs for VLAN ID assignment.

·     PREFIX—A VLAN is selected from all physical VLANs matching the specified prefix for VLAN ID assignment.

hierarchical_port_binding_physicnets_prefix

Prefix for matching physical VLANs. The default value is physicnet. This parameter is available only when you set the value of the hierarchical_port_binding_physicnets parameter to PREFIX.

network_force_flat

Whether to enable forcible conversion of an external network to a flat network. The value can only be set to True if the external network is a VXLAN.

directly_external

Whether traffic destined for the external network is directly forwarded by the gateway. The available values are as follows:

·     ANY—Traffic destined for the external network is directly forwarded by the gateway to the external network.

·     OFF—Traffic destined for the external network is forwarded by the gateway to the firewall and then to the external network.

·     SUFFIX—Determine the forwarding method for the traffic destined for the external network by matching the traffic against the vRouter name suffix (set by the directly_external_suffix parameter).

¡     If the traffic destined for the external network matches the suffix, it is directly forwarded by the gateway to the external network.

¡     If the traffic destined for the external network does not match the suffix, it is forwarded by the gateway to the firewall and then to the external network.

The default value is OFF. You can set the value to ANY only when the external network is a VXLAN and the value of network_force_flat is False.

directly_external_suffix

vRouter name suffix (DMZ for example). This parameter is available only when you set the value of the directly_external parameter to SUFFIX. As a best practice, do not change the vRouter name after this parameter is configured.

generate_vrf_based_on_router_name

Whether to use the vRouter names configured on OpenStack as the VRF names on the controller.

·     True—Use the names. Make sure each vRouter name configured on OpenStack is a case-sensitive string of 1 to 31 characters that contain only letters and digits.

·     False—Do not use the names.

By default, the vRouter names configured on OpenStack are not used as the VRF names on the controller.

enable_dhcp_hierarchical_port_binding

Whether to enable DHCP hierarchical port binding. The default value is False.

·     True—Enable.

·     False—Disable.

enable_multi_segments

Whether to enable multiple outbound interfaces, allowing the vRouter to access the external network from multiple outbound interfaces. The default value is False.

To enable multiple outbound interfaces, configure the following settings:

·     Set the value of this parameter to True.

·     Set the value of the network_force_flat parameter to False.

·     Access the /etc/neutron/plugins/ml2/ml2_conf.ini file on the control node and specify the controller's gateway name for the network_vlan_ranges parameter.

enable_https

Whether to enable HTTPS bidirectional authentication. The default value is False.

·     True—Enable.

·     False—Disable.

neutron_plugin_ca_file

Save location for the CA certificate of the controller. As a best practice, save the CA certificate in the /usr/share/neutron directory.

neutron_plugin_cert_file

Save location for the Cert certificate of the controller. As a best practice, save the Cert certificate in the /usr/share/neutron directory.

neutron_plugin_key_file

Save location for the Key certificate of the controller. As a best practice, save the Key certificate in the /usr/share/neutron directory.

router_route_type

Route entry type:

·     None—Standard route.

·     401—Extended route with the IP address of an online vPort as the next hop.

·     402—Extended route with the IP address of an offline vPort as the next hop.

The default value is None.

enable_router_nat_without_firewall

Whether to enable NAT when no firewall is configured for the tenant.

·     True—Enable NAT when no firewall is configured. This setting automatically creates default firewall resources to implement NAT if the vRouter has been bound to an external network.

·     False—Do not enable NAT when no firewall is configured.

The default value is False.

cgsr_fw_context_limit

Context threshold for context-based gateway service type firewalls. The value is an integer. When the threshold is reached, all the context-based gateway service type firewalls use the same context.

This parameter takes effect only when the value of the firewall_type parameter is CGSR_SHARE_BY_COUNT.

force_vip_port_device_owner_none

Whether to support the LB vport device_owner field.

·     False—Support the LB vport device_owner field. This setting is applicable to an LB tight coupling solution.

·     True—Do not support the LB vport device_owner field. This setting is applicable to an LB loose coupling solution.

The default value is False.

enable_multi_gateways

Whether to enable the multi-gateway mode for the tenant.

·     True—Enable the multi-gateway mode for the tenant. In an OpenStack environment without the Segments configuration, this setting enables different vRouters to access the external network over different gateways.

·     False—Do not enable the multi-gateway mode for the tenant.

The default value is False.

tenant_gateway_name

Name of the gateway to which the tenant is bound. The default value is None.

When the value of the tenant_gw_selection_strategy parameter is match_gateway_name, you must specify the name of an existing gateway on the controller side.

tenant_gw_selection_strategy

Gateway selection strategy for the tenant.

·     match_first—Select the first gateway.

·     match_gateway_name—Takes effect together with the tenant_gateway_name parameter.

enable_iam_auth

Whether to enable IAM interface authentication.

·     True—Enable.

·     False—Disable.

When connecting to Unified Platform, you can set the value to True to use the IAM interface for authentication.

The default value is False.

This parameter is obsolete.

enable_firewall_metadata

Whether to allow the CloudOS platform to issue firewall-related fields such as the resource pool name to the controller.

This parameter is used only for communication with the CloudOS platform.

enable_sdnc_rpc

Whether to enable RPC connection between the plug-ins and the controller in the DHCP failover scenario.

The default value is False.

sdnc_rpc_url

RPC interface URL of the controller. Only a WebSocket type interface is supported.

The default value is ws://127.0.0.1:1080.

Configure this parameter based on the URL of the Unified Platform. For example, if the URL of the Unified Platform is http://127.0.0.1:30000, set this parameter to ws://127.0.0.1:30000.

sdnc_rpc_ping_interval

Interval at which an RPC ICMP echo request message is sent to the controller, in seconds.

The default value is 60 seconds.

enable_binding_gateway_with_tenant

Whether to enable automatic binding of tenants to the gateway. The default value is False.

When a network is created for a project on the OpenStack cloud platform for the first time, the corresponding tenant is bound to the gateway automatically if you set the value to True. When a vRouter is created for a project on the OpenStack cloud platform for the first time, the corresponding tenant is bound to the gateway automatically regardless of whether the value of the parameter is True or False.

websocket_fragment_size

Size of a WebSocket fragment sent from the plug-in to the controller in the DHCP failover scenario, in bytes.

The value is an integer equal to or larger than 1024. The default value is 1024. If the value is 1024, the message is not fragmented.

lb_member_slow_shutdown

Whether to enable slow shutdown when creating an LB pool.

The default value is False.

enable_network_l3vni

Whether to issue the L3VNIs when creating an external network. This parameter is valid only when the value of the enable_l3_vxlan parameter is True.

The default value is False.

neutron_black_list

Neutron network denylist function. This parameter takes effect only when the value is flat.

force_vlan_port_details_qvo

Whether to forcibly create a qvo-type vPort on the OVS bridge after a VM in a VLAN network comes online. If the value is true, the system forcibly creates a qvo-type vPort. If the value is false, the system automatically creates a tap-type or qvo-type vPort as configured. As a best practice, set the value to false for interoperability with the cloud platform for the first time.

lb_resource_mode

Resource pool mode of LB service resources.

·     SP—All gateways share the same LB resource pool.

·     MP—Each gateway uses an LB resource pool.

The default value is SP.

enable_lb_certchain

Whether to enable the SSL server end to send the complete certificate chain for SSL negotiation.

·     true—Enable.

·     false—Disable.

The default value is true.

 

Removing the SeerEngine-DC Neutron plug-ins

To remove the SeerEngine-DC Neutron plug-ins, first remove the SeerEngine-DC Neutron plug-ins and then remove the SeerEngine-DC OpenStack package.

To remove the SeerEngine-DC Neutron plug-ins:

1.     Remove the SeerEngine-DC Neutron plug-ins.

[root@localhost ~]# h3c-sdnplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

2.     Remove the SeerEngine-DC OpenStack package.

¡     .egg file

[root@localhost ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-sdnplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

¡     .rpm file

[root@localhost ~]# rpm -e SeerEngine_DC_PLUGIN

Upgrading the SeerEngine-DC Neutron plug-ins


CAUTION:

·     Services might be interrupted during the SeerEngine-DC Neutron plug-ins upgrade procedure.

·     The default parameter settings for SeerEngine-DC Neutron plug-ins might vary by OpenStack version. Modify the default parameter settings for SeerEngine-DC Neutron plug-ins when upgrading the OpenStack version to ensure that the plug-ins have the same configurations before and after the upgrade.

 

To upgrade the SeerEngine-DC Neutron plug-ins, you need to remove the current version first, and install the new version. For information about installing the SeerEngine-DC Neutron plug-ins, see "Installing the SeerEngine-DC Neutron plug-ins." For information about removing the SeerEngine-DC Neutron plug-ins, see "Removing the SeerEngine-DC Neutron plug-ins."

Installing the lldpad service

In the KVM network-based overlay scenario, you are required to install the lldpad service on each compute node.

1.     Install and start the lldpad service on the compute node.

[root@localhost ~]# yum install -y lldpad

[root@localhost ~]# systemctl enable lldpad.service

[root@localhost ~]# systemctl start lldpad.service

2.     Enable the uplink interface to send LLDP messages. eno2 is the uplink interface in this example.

[root@localhost ~]# lldptool set-lldp -i eno2 adminStatus=rxtx;

[root@localhost ~]# lldptool -T -i eno2 -V sysName enableTx=yes;

[root@localhost ~]# lldptool -T -i eno2 -V portDesc enableTx=yes;

[root@localhost ~]# lldptool -T -i eno2 -V sysDesc enableTx=yes;

[root@localhost ~]# lldptool -T -i eno2 -V sysCap enableTx=yes;
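To confirm that LLDP transmission is enabled on the uplink interface, you can query its administrative status. This is an optional check; the output format depends on the lldpad version, but it should show adminStatus=rxtx:

[root@localhost ~]# lldptool get-lldp -i eno2 adminStatus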

Installing the Nova patch

You must install the Nova patch only in the following scenarios:

·     In KVM host overlay or network overlay scenario, virtual machines are load balancer members, and the load balancer must be aware of the member status.

·     vCenter network overlay scenario.

Prerequisites

The Nova patch is included in the SeerEngine-DC OpenStack package. Obtain and copy the SeerEngine-DC OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the SeerEngine-DC OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

Based on your network environment, perform either step 3 or step 4.

To install the Nova patch on the OpenStack compute node:

1.     Access the directory where the SeerEngine-DC OpenStack package (an .egg or .rpm file) is saved, and install the package on the OpenStack compute node. The name of the SeerEngine-DC OpenStack package is SeerEngine_DC_PLUGIN-version1_pike_2017.10-py2.7.egg or SeerEngine_DC_PLUGIN-version1_pike_2017.10-1.noarch.rpm. version1 represents the version of the package.

In this example, the SeerEngine-DC OpenStack package is saved to the /root directory.

¡     .egg file

[root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

¡     .rpm file

[root@localhost ~]# rpm -ivh SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-1.noarch.rpm

2.     Install the Nova patch.

[root@localhost ~]# h3c-sdnplugin compute install

Install the nova patch

 

modifying:

/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py

modify success, backuped at: /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py.h3c_bak

 

 

NOTE:

The contents below the modifying: line indicate the modified open source Nova file and the backup path of the file before modification.

 

3.     (Optional.) If the networking type of the compute node is host-based overlay, perform the following steps:

a.     Stop the neutron-openvswitch-agent service on the compute node and disable the system from starting the service at startup.

[root@localhost ~]# service neutron-openvswitch-agent stop

[root@localhost ~]# systemctl disable neutron-openvswitch-agent.service

b.     Execute the neutron agent-list command on the controller node to identify whether the agent of the compute node exists in the database.

-     If the agent of the compute node does not exist in the database, go to the next step.

-     If the agent of the compute node exists in the database, execute the neutron agent-delete id command to delete the agent. The id argument represents the agent ID.

[root@localhost ~]# neutron agent-list

| id                                   | agent_type         | host     |

| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3

[root@localhost ~]# neutron agent-delete 25c3d3ac-5158-4123-b505-ed619b741a52

Deleted agent: 25c3d3ac-5158-4123-b505-ed619b741a52

c.     Use the vi editor on the compute node to open the nova.conf configuration file.

[root@localhost ~]# vi /etc/nova/nova.conf

d.     Press I to switch to insert mode, and set the parameters in the nova.conf configuration file as follows. For descriptions of the parameters, see Table 2.

If the hypervisor type of the compute node is KVM, modify the nova.conf configuration file as follows:

[s1020v]

s1020v = False

member_status = True

[neutron]

ovs_bridge = vds1-br

If the hypervisor type of the compute node is VMware vCenter, modify the nova.conf configuration file as follows:

[DEFAULT]

compute_driver = vmwareapi.VMwareVCDriver

[vmware]

host_ip = 127.0.0.1

host_username = sdn

host_password = skyline123

cluster_name = vcenter

insecure = True

[s1020v]

s1020v = False

vds = VDS2

e.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the nova.conf file.

4.     (Optional.) If the networking type of the compute node is network-based overlay, perform the following steps:

If the hypervisor type of the compute node is KVM, you do not need to install the Nova patch.

If the hypervisor type of the compute node is VMware vCenter, perform the following steps:

a.     Stop the neutron-openvswitch-agent service and disable the system from starting the service at startup.

[root@localhost ~]# service neutron-openvswitch-agent stop

[root@localhost ~]# systemctl disable neutron-openvswitch-agent.service

b.     Select Automation > Data Center Networks > Fabrics > Domain from the top navigation tree of the controller webpage to identify whether the compute node is online. If the compute node is online, delete the compute node.

c.     Execute the neutron agent-list command on the controller node to identify whether the agent of the compute node exists in the database.

-     If the agent of the compute node does not exist in the database, go to the next step.

-     If the agent of the compute node exists in the database, execute the neutron agent-delete id command to delete the agent. The id argument represents the agent ID.

[root@localhost ~]# neutron agent-list

| id                                   | agent_type         | host        |

| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3

 

[root@localhost ~]# neutron agent-delete 25c3d3ac-5158-4123-b505-ed619b741a52

Deleted agent: 25c3d3ac-5158-4123-b505-ed619b741a52

d.     Use the vi editor to open the nova.conf configuration file.

[root@localhost ~]# vi /etc/nova/nova.conf

e.     Press I to switch to insert mode, and set the parameters in the nova.conf configuration file as follows. For descriptions of the parameters, see Table 2.

[DEFAULT]

compute_driver = vmwareapi.VMwareVCDriver

[vmware]

host_ip = 127.0.0.1

host_username = sdn

host_password = skyline123

cluster_name = vcenter

insecure = True

[s1020v]

s1020v = False

vds = VDS2

uplink_teaming_policy = loadbalance_srcid

f.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the nova.conf file.

Table 2 Parameters in the configuration file

Parameter

Description

s1020v

Whether to use the H3C S1020V vSwitch to forward the traffic between vSwitches and the traffic between the vSwitches and the external network.

·     True—Use the H3C S1020V vSwitch.

·     False—Do not use the H3C S1020V vSwitch.

This parameter is obsolete.

member_status

Whether to enable or disable the feature of modifying the status of members on OpenStack load balancers.

·     True—Enable.

·     False—Disable.

vds

VDS to which the host in the vCenter belongs. In this example, the host belongs to VDS2. In the host overlay networking, you can only specify the VDS that the controller synchronizes to the vCenter. In the network overlay networking, you can specify an existing VDS on demand.

ovs_bridge

Name of the bridge for the H3C S1020V vSwitch. Make sure the bridges created on all H3C S1020V vSwitches use the same name.

compute_driver

Name of the driver used by the compute node for virtualization.

host_ip

IP address used to log in to the vCenter, for example, 127.0.0.1.

host_username

Username for logging in to the vCenter, for example, sdn.

host_password

Password for logging in to the vCenter, for example, skyline123. If the password contains a dollar sign ($), enter a backslash (\) before the dollar sign.

cluster_name

Name of the cluster in the vCenter environment, for example, vcenter.

insecure

Whether to enable or disable security check.

·     True—Do not perform security check.

·     False—Perform security check. This value is not supported in the current software version.

uplink_teaming_policy

Uplink routing policy.

·     loadbalance_srcid—Source vPort-based routing.

·     loadbalance_ip—IP hash-based routing.

·     loadbalance_srcmac—Source MAC hash-based routing.

·     loadbalance_loadbased—Physical NIC load-based routing.

·     failover_explicit—Explicit failover order-based routing.

 

5.     Restart the openstack-nova-compute service.

[root@localhost ~]# service openstack-nova-compute restart

Verifying the installation

# Verify that the SeerEngine-DC OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

·     .egg file

[root@localhost ~]# pip freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10

·     .rpm file

[root@localhost ~]# rpm -qa | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10.noarch

# Verify that the openstack-nova-compute service is enabled. The service is enabled if its state is running.

[root@localhost ~]# service openstack-nova-compute status

nova-compute start/running, process 184

Removing the Nova patch

You must remove the Nova patch before removing the SeerEngine-DC OpenStack package.

To remove the Nova patch:

1.     Remove the Nova patch.

[root@localhost ~]# h3c-sdnplugin compute uninstall

Uninstall the nova patch

2.     Remove the SeerEngine-DC OpenStack package.

¡     .egg file

[root@localhost ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-sdnplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

¡     .rpm file

[root@localhost ~]# rpm -e SeerEngine_DC_PLUGIN

Upgrading the Nova patch


CAUTION:

Services might be interrupted during the Nova patch upgrade procedure.

 

To upgrade the Nova patch, you must remove the current version first, and then install the new version. For information about installing the Nova patch, see "Installing the Nova patch." For information about removing the Nova patch, see "Removing the Nova patch."

Installing the openvswitch-agent patch

Prerequisites

The openvswitch-agent patch is included in the SeerEngine-DC OpenStack package. Obtain and copy the SeerEngine-DC OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the SeerEngine-DC OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

To install the openvswitch-agent patch:

1.     Access the directory where the SeerEngine-DC OpenStack package (an .egg or .rpm file) is saved, and install the package on the OpenStack compute node. The name of the SeerEngine-DC OpenStack package is SeerEngine_DC_PLUGIN-version1_pike_2017.10-py2.7.egg or SeerEngine_DC_PLUGIN-version1_pike_2017.10-1.noarch.rpm. version1 represents the version of the package.

¡     .egg file

[root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

¡     .rpm file

[root@localhost ~]# rpm -ivh SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-1.noarch.rpm

2.     Install the openvswitch-agent patch.

[root@localhost ~]# h3c-sdnplugin openvswitch install

3.     Restart the openvswitch-agent service.

[root@localhost ~]# service neutron-openvswitch-agent restart

Verifying the installation

# Verify that the SeerEngine-DC OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

·     .egg file

[root@localhost ~]# pip freeze | grep PLUGIN

SeerEngine-DC-PLUGIN===E3603P01-pike-2017.10

·     .rpm file

[root@localhost ~]# rpm -qa | grep PLUGIN

SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-1.noarch

# Verify that the openvswitch-agent service is enabled. The service is enabled if its state is running.

[root@localhost ~]# service neutron-openvswitch-agent status

Redirecting to /bin/systemctl status  neutron-openvswitch-agent.service

neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent

   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)

   Active: active (running) since Mon 2016-12-05 16:58:18 CST; 18h ago

Main PID: 807 (neutron-openvsw)

Removing the openvswitch-agent patch

You must remove the openvswitch-agent patch before removing the SeerEngine-DC OpenStack package.

To remove the openvswitch-agent patch:

1.     Remove the openvswitch-agent patch.

[root@localhost ~]# h3c-sdnplugin openvswitch uninstall

2.     Remove the SeerEngine-DC OpenStack package.

¡     .egg file

[root@localhost ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-sdnplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

¡     .rpm file

[root@localhost ~]# rpm -e SeerEngine_DC_PLUGIN

Upgrading the openvswitch-agent patch

CAUTION:

Services might be interrupted during the openvswitch-agent patch upgrade procedure.

 

To upgrade the openvswitch-agent patch, you must remove the current version first, and install a new version. For information about installing the openvswitch-agent patch, see "Installing the openvswitch-agent patch." For information about removing the openvswitch-agent patch, see "Removing the openvswitch-agent patch."
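The following is a minimal upgrade sketch that chains the removal and installation commands documented in the sections above; the .egg file name is a placeholder for the new package version.

[root@localhost ~]# h3c-sdnplugin openvswitch uninstall

[root@localhost ~]# pip uninstall seerengine-dc-plugin

[root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-<new version>_pike_2017.10-py2.7.egg

[root@localhost ~]# h3c-sdnplugin openvswitch install

[root@localhost ~]# service neutron-openvswitch-agent restart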

Installing/removing/upgrading DHCP failover components

To provide DHCP failover in the network-based overlay scenario, you must install DHCP failover components.

 

IMPORTANT:

The DHCP failover components can operate only on the CentOS 7.2.1511 operating system with kernel version 3.10.0-327.el7.x86_64. If the kernel version does not match the version required by the S1020V, install the kernel patch first.

 

Installing basic components
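Before installing the components, you can verify that the kernel version meets the requirement stated in the note above, for example:

[root@localhost ~]# uname -r

3.10.0-327.el7.x86_64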

1.     Install WebSocket Client on the controller and network node.

 

IMPORTANT:

Make sure the WebSocket Client version is 0.56.

 

[root@localhost ~]# yum install -y python-websocket-client
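To verify that the required version is installed, you can query the installed package (a sketch assuming the yum package name python-websocket-client used above); the reported version should be 0.56.

[root@localhost ~]# rpm -q python-websocket-client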

2.     Install an S1020V vSwitch on the network node and configure bridge and controller settings. For the installation and configuration procedures, see H3C S1020V Installation Guide.

[root@localhost ~]# rpm -ivh --force s1020v-centos71-3.10.0-229.el7.x86_64-x86_64.rpm

3.     Stop the open-source DHCP and Metadata services on OpenStack.

[root@localhost ~]# systemctl stop neutron-dhcp-agent neutron-metadata-agent

[root@localhost ~]# systemctl disable neutron-dhcp-agent neutron-metadata-agent

Obtaining the installation package of the DHCP failover components

Two SeerEngine-DC OpenStack packages are available: one contains the DHCP failover components and one does not. The SeerEngine-DC OpenStack package that contains the DHCP failover components is named in the SeerEngine_DC_PLUGIN-DHCP_version1_pike_2017.10-py2.7.egg format. version1 represents the software package version number.

Obtain the required version of the SeerEngine-DC OpenStack package and then save the package to the target installation directory on the server or virtual machine. You can also transfer the installation package to the target installation directory through a file transfer protocol such as FTP, TFTP, or SCP. Use the binary transfer mode to prevent the software package from being corrupted during transit.
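For example, to copy the package to the network node over SCP (the host name and destination directory here are placeholders):

[root@localhost ~]# scp SeerEngine_DC_PLUGIN-DHCP_E3607_pike_2017.10-py2.7.egg root@network-node:/root/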

Installing DHCP failover components on the network node

Installing the DHCP component

1.     Access the directory where the SeerEngine-DC OpenStack package (an .egg file) is saved and then install the package.

In the following example, the SeerEngine-DC OpenStack package is in the /root directory.

[root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-DHCP_E3607_pike_2017.10-py2.7.egg

2.     Install the DHCP component.

[root@localhost ~]# h3c-sdnplugin dhcp install

Install Environment dependent packages

Preparing…                         ########## [100%]

Updating / installing…

1.     python2-six-1.10.0-9.el7     ########## [  1%]

2.     ………

Install config files

Install services

Installation complete

Please do not remove the *.h3c_bak files.

3.     Edit the DHCP component configuration file.

a.     Use the vi editor to open the h3c_dhcp_agent.ini file on the network node.

[root@localhost ~]# vi /etc/neutron/h3c_dhcp_agent.ini

b.     Press I to switch to insert mode and edit the configuration file as follows:

[DEFAULT]

interface_driver = openvswitch

dhcp_driver = networking_h3c.agent.dhcp.driver.dhcp.Dnsmasq

enable_isolated_metadata = true

force_metadata = true

ovs_integration_bridge = br0

[h3c]

transport_url = ws://127.0.0.1:8080

websocket_fragment_size = 102400

[ovs]

ovsdb_interface = vsctl

c.     To enable certificate authentication, add the following configurations:

[h3c]

ca_file = /etc/neutron/ca.crt

cert_file = /etc/neutron/sna.pem

key_file = /etc/neutron/sna.key

key_password = 123456

insecure = true

d.     To use the northbound API of the controller for connection, add the following configurations:

[SDNCONTROLLER]

url = https://127.0.0.1:8443

username = sdn

password = skyline

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

e.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the h3c_dhcp_agent.ini file.

4.     Start the DHCP component.

[root@localhost ~]# systemctl enable h3c-dhcp-agent.service

[root@localhost ~]# systemctl start h3c-dhcp-agent.service

Installing the Metadata component

1.     Access the directory where the SeerEngine-DC OpenStack package (an .egg or .rpm file) is saved and then install the package. The name of the SeerEngine-DC OpenStack package is SeerEngine_DC_PLUGIN-version1_pike_2017.10-py2.7.egg or SeerEngine_DC_PLUGIN-version1_pike_2017.10-1.noarch.rpm. version1 represents the version of the package.

In the following example, the SeerEngine-DC OpenStack package is in the /root directory.

¡     .egg file

[root@localhost ~]# easy_install SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

¡     .rpm file

[root@localhost ~]# rpm -ivh SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-1.noarch.rpm

2.     Install the Metadata component.

[root@localhost ~]# h3c-sdnplugin metadata install

Install config files

Install services

Installation complete

Please do not remove the *.h3c_bak files.

3.     Edit the Metadata component configuration file.

a.     Use the vi editor to open the h3c_metadata_agent.ini configuration file on the network node.

[root@localhost ~]# vi /etc/neutron/h3c_metadata_agent.ini

b.     Press I to switch to insert mode and edit the configuration file as follows:

[DEFAULT]

nova_metadata_host = controller

nova_metadata_port = 8775

nova_proxy_shared_secret = METADATA_SECRET

enable_keystone_authtoken = True

[cache]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASSWORD

[SDNCONTROLLER]

url = https://127.0.0.1:8443

username = sdn

password = skyline

enable_https = False

neutron_plugin_ca_file =

neutron_plugin_cert_file =

neutron_plugin_key_file =

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the h3c_metadata_agent.ini file.

4.     Start the Metadata component.

[root@localhost ~]# systemctl enable h3c-metadata-agent.service

[root@localhost ~]# systemctl start h3c-metadata-agent.service
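To verify that the DHCP and Metadata components are running, you can check the service states; both services should be active (running). This is a quick check sketch using the service names installed in the preceding steps.

[root@localhost ~]# systemctl status h3c-dhcp-agent.service h3c-metadata-agent.service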

Removing DHCP failover components

Remove the SeerEngine-DC OpenStack package after removing the DHCP and Metadata components.

To remove the DHCP failover components:

1.     Remove the DHCP component.

[root@localhost ~]# h3c-sdnplugin dhcp uninstall

Remove services

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-dhcp-agent.service.

Backup config files

Uninstallation complete

2.     Remove the Metadata component.

[root@localhost ~]# h3c-sdnplugin metadata uninstall

Remove services

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-metadata-agent.service.

Backup config files

Uninstallation complete

3.     Remove the SeerEngine-DC OpenStack package.

¡     .egg file

[root@localhost ~]# pip uninstall seerengine-dc-plugin

Uninstalling SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-sdnplugin

  /usr/lib/python2.7/site-packages/SeerEngine_DC_PLUGIN-E3603P01_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled SeerEngine-DC-PLUGIN-E3603P01-pike-2017.10

¡     .rpm file

[root@localhost ~]# rpm -e SeerEngine_DC_PLUGIN

Upgrading DHCP failover components

To upgrade DHCP failover components, first remove the old version and then install the new version.

 

CAUTION:

Services might be interrupted during the upgrade. Before performing an upgrade, make sure you fully understand its impact on services.
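As a minimal upgrade sketch, chain the removal and installation commands documented in the sections above; replace the SeerEngine-DC OpenStack package with the new version between the two steps as described in "Removing DHCP failover components" and "Installing DHCP failover components on the network node."

[root@localhost ~]# h3c-sdnplugin dhcp uninstall

[root@localhost ~]# h3c-sdnplugin metadata uninstall

[root@localhost ~]# h3c-sdnplugin dhcp install

[root@localhost ~]# h3c-sdnplugin metadata install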

 

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

DHCP component configuration file

Parameter

Description

ovs_integration_bridge

vSwitch bridge where the DHCP port resides.

websocket_fragment_size

Size of a websocket message fragment sent to the controller, in bytes.

The value is an integer equal to or larger than 1024. The default value is 102400. When the value is 1024, the websocket messages are not fragmented.

insecure

Whether to enable WebSocket certificate authentication.

The default value is False.

 

Metadata component configuration file

Parameter

Description

enable_keystone_authtoken

Whether to enable Neutron API. When the value is True, you must configure the [keystone_authtoken] section. When the value is False, you must configure the [SDNCONTROLLER] section.

 

Configuring the open-source metadata service for network nodes

OpenStack supports obtaining metadata for VMs from network nodes through DHCP or an L3 gateway. H3C supports only the DHCP method. To configure the metadata service for network nodes:

1.     Download the OpenStack installation guide from the OpenStack official website and follow the installation guide to configure the metadata service for the network nodes.

2.     Configure the network nodes to provide metadata service through DHCP.

a.     Use the vi editor to open configuration file dhcp_agent.ini.

[root@network ~]# vi /etc/neutron/dhcp_agent.ini

b.     Press I to switch to insert mode, and modify configuration file dhcp_agent.ini as follows:

force_metadata = True

Set the value to True for the force_metadata parameter to force the network nodes to provide metadata service through DHCP.

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the dhcp_agent.ini configuration file.
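For the force_metadata setting to take effect, restart the open-source DHCP agent on the network node (a sketch assuming the standard neutron-dhcp-agent service name):

[root@network ~]# systemctl restart neutron-dhcp-agent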

Comparing and synchronizing resource information between the controller and cloud platform

 

NOTE:

This function is not supported in the following scenarios:

·     Network-based overlay VXLAN environments where the compute nodes run the neutron-cas-ovs-agent, neutron-cas-sriov-agent, neutron-vmware-ovs-agent, or f5-oslbaasv2-agent.

·     CloudOS and third-party cloud platforms except for Ericsson.

·     VPC connections except for those provided by H3C-proprietary plug-ins.

 

1.     Execute the h3c-sdnplugin-extension compare --file [absolute path]filename.csv command to compare resource information between the controller and the cloud platform.

¡     If you do not specify --file [absolute path]filename.csv, the comparison result is saved to the /var/log/neutron/compare_data-time.csv file, where time indicates the comparison start time.

¡     If you specify --file [absolute path]filename.csv, the comparison result is saved to the specified file. If you do not specify an absolute path, the result is saved to /var/log/neutron/filename.csv.

The comparison result file contains the following fields:

¡     Resource—Resource type.

¡     Name—Resource name.

¡     Id—Resource ID.

¡     Tenant_id—Tenant ID of the resource.

¡     Tenant_name—Tenant name of the resource.

¡     Status—Comparison result.

-     lost—The controller has fewer resources than the cloud platform. You must add the missing resources to the controller.

-     different—The resources on the controller differ from those on the cloud platform. You must update the resources on the controller.

-     surplus—The controller has more resources than the cloud platform. You must remove the excess resources from the controller.

2.     Execute the h3c-sdnplugin-extension sync --file filename.csv command, where filename.csv is the comparison result file. If the comparison result file is in the /var/log/neutron/ directory, enter the file name directly. If the file is in another directory, enter the absolute file path.

After the command is executed, the system displays resource statistics and prompts for your confirmation to start the synchronization. The system starts the synchronization only after you confirm twice.

After the synchronization is complete, a synchronization result file /var/log/neutron/sync_all-time.csv is generated, where time indicates the synchronization start time.
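The following is a usage sketch with a hypothetical result file name; the sync command reuses the file produced by the compare command.

[root@localhost ~]# h3c-sdnplugin-extension compare --file /var/log/neutron/compare_result.csv

[root@localhost ~]# h3c-sdnplugin-extension sync --file compare_result.csv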

 

CAUTION:

·     Do not add or edit information in the synchronization result file.

·     To avoid anomalies caused by misoperations, carefully examine and compare the result file and resource statistics before synchronizing.

 


FAQ

The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?

Configure HTTP proxy by performing the following steps:

1.     Make sure the server or the virtual machine can access the HTTP proxy server.

2.     At the CLI of the CentOS system, use the vi editor to open the yum.conf configuration file. If the yum.conf configuration file does not exist, this step creates the file.

[root@localhost ~]# vi /etc/yum.conf

3.     Press I to switch to insert mode, and provide HTTP proxy information as follows:

¡     If the server does not require authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport

¡     If the server requires authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport
proxy_username=username
proxy_password=password

Table 3 describes the arguments in HTTP proxy information.

Table 3 Arguments in HTTP proxy information

Field

Description

username

Username for logging in to the proxy server, for example, sdn.

password

Password for logging in to the proxy server, for example, 123456.

yourproxyaddress

IP address of the proxy server, for example, 172.25.1.1.

proxyport

Port number of the proxy server, for example, 8080.

 

For example:

proxy = http://172.25.1.1:8080

proxy_username = sdn

proxy_password = 123456

4.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the yum.conf file.

The Intel X700 Ethernet network adapter series fails to receive LLDP messages. What should I do?

Use the following procedure to resolve the issue. An enp61s0f3 Ethernet network adapter is used as an example.

1.     View and record system kernel information.

[root@localhost ~]# uname -r

3.10.0-957.1.3.el7.x86_64

2.     View detailed information about the Ethernet network adapter and record the values for the firmware-version and bus-info fields.

[root@localhost ~]# ethtool -i enp61s0f3

driver: i40e

version: 2.8.20-k

firmware-version: 3.33 0x80000f0c 1.1767.0

expansion-rom-version:

bus-info: 0000:3d:00.3

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

3.     Use one of the following solutions, depending on the kernel version and network adapter firmware version:

¡     The kernel version is higher than kernel-3.10.0-957.el7 and the network adapter firmware version is 4 or higher.

# Execute the following command:

[root@localhost ~]# ethtool --set-priv-flags enp61s0f3 disable-fw-lldp on

# Identify whether the value for the disable-fw-lldp field is on.

[root@localhost ~]# ethtool --show-priv-flags enp61s0f3  | grep lldp

disable-fw-lldp       : on

If the value is on, the network adapter can receive LLDP messages. For this command to remain effective after a system restart, you must write the command into the user-defined startup program file.

# Open the self-defined startup program file.

[root@localhost ~]# vi /etc/rc.d/rc.local

# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

ethtool --set-priv-flags enp61s0f3  disable-fw-lldp on

# Configure the file to be executable.

[root@localhost ~]# chmod 755 /etc/rc.d/rc.local

¡     The kernel version is lower than kernel-3.10.0-957.el7, or the network adapter firmware version is lower than 4.

# Execute the echo "lldp stop" > /sys/kernel/debug/i40e/bus-info/command command. Replace bus-info with the recorded bus-info value of the network adapter, and add a backslash (\) before each colon (:).

[root@localhost ~]# echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

The network adapter can receive LLDP messages after this command is executed. For this command to remain effective after a system restart, you must write this command into the user-defined startup program file.

# Open the self-defined startup program file.

[root@localhost ~]# vi /etc/rc.d/rc.local

# Press I to switch to insert mode, and add this command to the file. Then press Esc to quit insert mode, and enter :wq to exit the vi editor and save the file.

echo "lldp stop" > /sys/kernel/debug/i40e/0000\:3d\:00.3/command

# Configure the file to be executable.

[root@localhost ~]# chmod 755 /etc/rc.d/rc.local
