H3C VCFC-DC Controller OpenStack Plug-Ins Installation Guide for CentOS-E31xx-5W100


Overview

This document describes how to install the virtual converged framework (VCF) Neutron plug-ins, Nova patch, and openvswitch-agent patch that are compatible with OpenStack on CentOS.

VCF Neutron plug-ins

Neutron is an OpenStack service that manages all virtual networking infrastructures (VNIs) in an OpenStack environment. It provides virtual network services to the devices managed by the OpenStack compute services.

VCF Neutron plug-ins are developed for the VCFC-DC controller based on the OpenStack framework. The plug-ins obtain network configuration from OpenStack through REST APIs and synchronize the configuration to the VCFC-DC controllers. They can obtain settings for the tenants' networks, subnets, routers, ports, firewalls, load balancers, and VPNs. The different types of VCF Neutron plug-ins provide the following features for tenants:

·     VCF Neutron Core plug-in—Allows tenants to use core networking functions, including networks, subnets, routers, and ports.

·     VCF Neutron L3_Routing plug-in—Allows tenants to forward traffic to each other at Layer 3.

·     VCF Neutron FWaaS plug-in—Allows tenants to create firewall services.

·     VCF Neutron LBaaS plug-in—Allows tenants to create load balancing services.

·     VCF Neutron VPNaaS plug-in—Allows tenants to create VPN services.

Nova patch

Nova is an OpenStack computing controller that provides virtual services for users. The virtual services include creating, starting up, shutting down, and migrating virtual machines and setting configuration information for the virtual machines, such as CPU and memory information.

The Nova patch enables virtual machines created by OpenStack to access networks managed by VCFC-DC controllers.

Openvswitch-agent patch

The open source openvswitch-agent process on an OpenStack compute node might fail to deploy VLAN flow tables to open source vSwitches when the following conditions exist:

·     The kernel-based virtual machine (KVM) technology is used on the node.

·     The hierarchical port binding feature is configured on the node.

To resolve this problem, you must install the openvswitch-agent patch.


Preparing for installation

Hardware requirements

Table 1 shows the hardware requirements for installing the VCF Neutron plug-ins, Nova patch, or openvswitch-agent patch on a server or virtual machine.

Table 1 Hardware requirements

CPU: single-core or multicore CPUs

Memory size: 2 GB or more

Disk space: 5 GB or more

Software requirements

Table 2 shows the software requirements for installing the VCF Neutron plug-ins, Nova patch, or openvswitch-agent patch.

Table 2 Software requirements

Item: OpenStack

Supported versions:

·     OpenStack Kilo 2015.1 on CentOS 7.1.1503

·     OpenStack Liberty on CentOS 7.1.1503

·     OpenStack Mitaka on CentOS 7.1.1503

·     OpenStack Ocata on CentOS 7.2.1511

·     OpenStack Pike on CentOS 7.2.1511

IMPORTANT:

Before you install the OpenStack plug-ins, make sure the following requirements are met:

·     The system has Internet access, because it must download packages from the Internet to set up the installation environment.

·     The OpenStack environment is deployed correctly. For example, the /etc/hosts file on all nodes contains the host name-to-IP address mappings. For information about OpenStack environment deployment, see the installation guide for the specific OpenStack version on the official OpenStack website.

 
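For reference, the /etc/hosts mappings could look like the following. The host names and addresses below are examples only; substitute the names and IP addresses of your own nodes:

```
# /etc/hosts on every OpenStack node (example addresses)
192.0.2.10   controller
192.0.2.21   compute1
192.0.2.22   compute2
```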


Installing OpenStack plug-ins

The VCF Neutron plug-ins, Nova patch, and openvswitch-agent patch can be installed on different OpenStack versions. The installation package varies by OpenStack version. However, you can use the same procedure to install the Neutron plug-ins, Nova patch, or openvswitch-agent patch on different OpenStack versions. This document uses OpenStack Pike as an example.

Install the VCF Neutron plug-ins on an OpenStack controller node and install the Nova patch and openvswitch-agent patch on an OpenStack compute node. Before installation, you need to install the Python tools on the associated node.

Installing the Python tools

Before installing the plug-ins, download the Python tools online and install them.

To download and install the Python tools:

1.     Update the software source list.

[root@localhost ~]# yum clean all

[root@localhost ~]# yum makecache

2.     Download and install the Python tools.

[root@localhost ~]# yum install -y python-pip python-setuptools

Installing the VCF Neutron plug-ins

Prerequisites

The VCF Neutron plug-ins are included in the VCF OpenStack package. Perform the following steps to download the VCF OpenStack package from the H3C website:

1.     In the Web browser address bar, enter http://www.h3c.com/cn/Software_Download. Select SDN > H3C Virtual Converged Framework Controller, and download the VCF OpenStack package of the required version.

2.     Copy the VCF OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the VCF OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

CAUTION:

The QoS feature will not operate correctly if you configure the database connection in configuration file neutron.conf as follows:

[database]

connection = mysql://…

This is an open source bug in OpenStack. To prevent this problem, configure the database connection as follows:

[database]

connection = mysql+pymysql://…

The three dots (…) in the connection line represent the Neutron database link information.
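For reference, a complete connection line typically follows this pattern. The neutron user, the NEUTRON_DBPASS placeholder, and the controller host name are illustrative only; substitute the values from your own deployment:

```
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
```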

 

Some parameters must be configured with the required values as described in "Parameters and fields."

To install the VCF Neutron plug-ins:

1.     Change the working directory to where the VCF OpenStack package (an .egg file) is saved, and install the package on the OpenStack controller node. The name of the VCF OpenStack package is VCFC_DC_PLUGIN-version1_version2-py2.7.egg. version1 represents the version of the package. version2 represents the version of OpenStack.

In the following example, the VCF OpenStack package is saved to the path /root.

[root@localhost ~]# easy_install VCFC_DC_PLUGIN-E3103_pike_2017.10-py2.7.egg

2.     Install the VCF Neutron plug-ins.

[root@localhost ~]# h3c-vcfplugin controller install

3.     Modify the neutron.conf configuration file.

a.     Use the vi editor to open the neutron.conf configuration file.

[root@localhost ~]# vi /etc/neutron/neutron.conf

b.     Press I to switch to the insert mode, and modify the configuration file. For information about the parameters, see "neutron.conf."

For Pike plugins:

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_l3_router,firewall,lbaasv2,vpnaas,qos

[service_providers]

service_provider=FIREWALL:H3C:networking_h3c.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:networking_h3c.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:networking_h3c.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

For Kilo 2015.1, Liberty, and Mitaka plugins (Load balancing service V1 is configured in OpenStack):

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas

[service_providers]

service_provider=FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCER:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default

service_provider=VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

For Kilo 2015.1, Liberty, Mitaka, and Ocata plugins (Load balancing service V2 is configured in OpenStack):

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,vpnaas

[service_providers]

service_provider=FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

For Ocata plugins (QoS service and Load balancing service V2 are configured in OpenStack) and for Liberty and Mitaka plugins (QoS service is configured in OpenStack):

[DEFAULT]

core_plugin = ml2

service_plugins = h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,vpnaas,qos

[service_providers]

service_provider=FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

service_provider=LOADBALANCERV2:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver_v2.H3CLbaasv2PluginDriver:default

service_provider=VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

[qos]

notification_drivers = message_queue,qos_h3c

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the neutron.conf file.
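To confirm the edits took effect, you can grep for the keys you changed. The sketch below runs against a scratch copy so it is safe to try anywhere; on a controller node, point conf at /etc/neutron/neutron.conf instead. The service_plugins value shown assumes the Pike plug-ins:

```shell
# Write the expected Pike [DEFAULT] settings to a scratch file, then
# read them back. On a real node, set conf=/etc/neutron/neutron.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
core_plugin = ml2
service_plugins = h3c_l3_router,firewall,lbaasv2,vpnaas,qos
EOF
grep -E '^(core_plugin|service_plugins)' "$conf"
rm -f "$conf"
```

If either key is missing from the output, reopen the file and repeat step 3.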

4.     Modify the ml2_conf.ini configuration file.

a.     Use the vi editor to open the ml2_conf.ini configuration file.

[root@localhost ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

b.     Press I to switch to the insert mode, and set the parameters in the ml2_conf.ini configuration file. For information about the parameters, see "ml2_conf.ini."

[ml2]

type_drivers = vxlan,vlan

tenant_network_types = vxlan,vlan

mechanism_drivers = ml2_h3c

extension_drivers = ml2_extension_h3c,qos

[ml2_type_vlan]

network_vlan_ranges = physicnet1:1000:2999

[ml2_type_vxlan]

vni_ranges = 1:500

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf.ini file.

5.     Modify the local_settings configuration file.

a.     Use the vi editor to open the local_settings configuration file.

[root@localhost ~]# vi /etc/openstack-dashboard/local_settings

b.     Press I to switch to the insert mode. Set the values for the LB, FW, and VPN fields in the OPENSTACK_NEUTRON_NETWORK parameter to enable the associated configuration pages in OpenStack Web. For information about the fields, see "OPENSTACK_NEUTRON_NETWORK."

OPENSTACK_NEUTRON_NETWORK = {

    'enable_lb': True,

    'enable_firewall': True,

    'enable_quotas': True,

    'enable_vpn': True,

    # The profile_support option is used to detect if an external router can be

    # configured via the dashboard. When using specific plugins the

    # profile_support can be turned on if needed.

    'profile_support': None,

    #'profile_support': 'cisco',

}

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the local_settings file.

6.     Modify the ml2_conf_h3c.ini configuration file.

a.     Use the vi editor to open the ml2_conf_h3c.ini configuration file.

[root@localhost ~]# vi /etc/neutron/plugins/ml2/ml2_conf_h3c.ini

b.     Press I to switch to the insert mode, and set the following parameters in the ml2_conf_h3c.ini configuration file. For information about the parameters, see "ml2_conf_h3c.ini."

[VCFCONTROLLER]

url = https://127.0.0.1:8443

username = sdn

password = skyline123

domain = sdn

timeout = 1800

retry = 10

vnic_type=ovs

hybrid_vnic = True

ip_mac_binding = True

denyflow_age = 300

white_list = False

binddefaultrouter = False

auto_create_tenant_to_vcfc = True

router_binding_public_vrf = False

enable_subnet_dhcp = True

dhcp_lease_time = 365

firewall_type = GATEWAY

lb_type = GATEWAY

resource_mode = NFV

auto_delete_tenant_to_vcfc = True

auto_create_resource = True

nfv_ha = True

vds_name = VDS1

enable_metadata = False

use_neutron_credential = False

enable_security_group = True

disable_internal_l3flow_offload = True

firewall_force_audit = True

enable_l3_router_rpc_notify = False

output_json_log = False

lb_enable_snat = False

empty_rule_action = deny

enable_l3_vxlan = False

l3_vni_ranges = 10000:10100

vendor_rpc_topic = VENDOR_PLUGIN

vsr_descriptor_name = VSR_IRF

vlb_descriptor_name = VLB_IRF

vfw_descriptor_name = VFW_IRF

hierarchical_port_binding_physicnets = ANY

hierarchical_port_binding_physicnets_prefix = physicnet

network_force_flat = True

directly_external = OFF

directly_external_suffix = DMZ

generate_vrf_based_on_router_name = False

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the ml2_conf_h3c.ini file.
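After saving the file, a quick way to read back a single value is an awk one-liner. The sketch below uses a scratch copy; on a real node, point the command at /etc/neutron/plugins/ml2/ml2_conf_h3c.ini:

```shell
# Extract the controller URL from an ini-style file. Scratch copy shown;
# replace "$conf" with /etc/neutron/plugins/ml2/ml2_conf_h3c.ini on a node.
conf=$(mktemp)
printf '[VCFCONTROLLER]\nurl = https://127.0.0.1:8443\ntimeout = 1800\n' > "$conf"
awk -F' *= *' '$1 == "url" { print $2 }' "$conf"
rm -f "$conf"
```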

7.     If you have set the white_list parameter to True, perform the following tasks:

¡     Delete the username, password, and domain parameters in the ml2_conf_h3c.ini configuration file.

¡     Add an authentication-free user to the controller:

On the top navigation bar of the controller Web interface, click the  icon. Then select Users > Authentication from the left navigation pane.

Click Add.

Enter the IP address of the Neutron server, and specify the role as Admin.

Click OK.

8.     If you have set the binddefaultrouter parameter to True, perform the following steps to configure the default virtual router on the VCFC-DC controller.

a.     On the top navigation bar, click Tenants.

b.     From the navigation pane, select All Tenants.

c.     On the tenant list page, select the tenant named default.

d.     From the navigation pane, select Your Network > Virtual Router.

e.     Click Add.

f.     On the page that opens, enter defaultRouter as the name of the virtual router.

g.     On the Advanced Configuration tab, select Share public network VRF and then click Apply.

9.     If you have set the use_neutron_credential parameter to True, perform the following steps:

a.     On the top navigation bar of the controller Web interface, click the  icon. Then select Users from the left navigation pane.

b.     Click Add.

Configure the username as neutron and the role as Admin, and set the password to the one that is used with username neutron in OpenStack.

Click OK.

10.     Restart the neutron-server service.

[root@localhost ~]# service neutron-server restart

neutron-server stop/waiting

neutron-server start/running, process 4583

11.     Restart the h3c-agent service.

[root@localhost ~]# service h3c-agent restart

h3c-agent stop/waiting

h3c-agent start/running, process 4678

Verifying the installation

# Verify that the VCF OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

[root@localhost ~]# pip freeze | grep VCF

VCFC-DC-PLUGIN===E3103-pike-2017.10

# Verify that the neutron-server service is enabled. The service is enabled if its state is running.

[root@localhost ~]# service neutron-server status

neutron-server start/running, process 1849

# Verify that the h3c-agent service is enabled. The service is enabled if its state is running.

[root@localhost ~]# service h3c-agent status

h3c-agent start/running, process 4678

Parameters and fields

This section describes parameters in configuration files and fields included in parameters.

neutron.conf

Parameter

Required value

Description

core_plugin

ml2

Used for loading the core plug-in ml2 to OpenStack.

service_plugins

h3c_vcfplugin.l3_router.h3c_l3_router_plugin.H3CL3RouterPlugin,firewall,lbaas,vpnaas

Used for loading the extension plug-ins to OpenStack.

service_provider

·     FIREWALL:H3C:h3c_vcfplugin.fw.h3c_fwplugin_driver.H3CFwaasDriver:default

·     LOADBALANCER:H3C:h3c_vcfplugin.lb.h3c_lbplugin_driver.H3CLbaasPluginDriver:default

·     VPN:H3C:h3c_vcfplugin.vpn.h3c_vpnplugin_driver.H3CVpnPluginDriver:default

Directory where the extension plug-ins are saved.

notification_drivers

message_queue,qos_h3c

Name of the QoS notification driver.

 

ml2_conf.ini

Parameter

Required value

Description

type_drivers

vxlan,vlan

Driver type.

vxlan must be specified as the first driver type.

tenant_network_types

vxlan,vlan

Type of the networks to which the tenants belong.

vxlan must be specified as the first network type.

For intranets, only vxlan is available.

For extranets, only vlan is available.

mechanism_drivers

ml2_h3c

Name of the ml2 driver.

extension_drivers

ml2_extension_h3c,qos

Name of the ml2 extension driver. Available names include ml2_extension_h3c and qos. If the QoS feature is not enabled on OpenStack, you do not need to specify the value qos for this field.

Kilo 2015.1 plugins do not support the QoS driver.

network_vlan_ranges

N/A

Value range for the VLAN ID of the extranet, for example, physicnet1:1000:2999.

vni_ranges

N/A

Value range for the VXLAN ID of the intranet, for example, 1:500.

 

OPENSTACK_NEUTRON_NETWORK

Field

Description

enable_lb

Whether to enable or disable the LB configuration page.

·     True—Enable.

·     False—Disable.

enable_firewall

Whether to enable or disable the FW configuration page.

·     True—Enable.

·     False—Disable.

enable_vpn

Whether to enable or disable the VPN configuration page.

·     True—Enable.

·     False—Disable.

 

ml2_conf_h3c.ini

Parameter

Description

url

HTTPS URL address of the controller, for example, https://127.0.0.1:8443.

username

Username for logging in to the controller, for example, sdn. You do not need to configure a username when the use_neutron_credential parameter is set to True.

password

Password for logging in to the controller, for example, skyline123. You do not need to configure a password when the use_neutron_credential parameter is set to True.

domain

Name of the domain where the controller resides, for example, sdn.

timeout

Amount of time (in seconds) that the Neutron server waits for a response from the controller, for example, 1800.

As a best practice, set the timeout to 1800 seconds or more.

retry

Maximum number of connection requests that the Neutron server sends to the controller, for example, 10.

vnic_type

Default virtual NIC type:

·     ovs

·     phy

Only the Kilo 2015.1 plugins support this parameter.

hybrid_vnic

Whether to enable or disable the feature of mapping OpenStack VLAN to controller VXLAN.

·     True—Enable.

·     False—Disable.

ip_mac_binding

Whether to enable or disable IP-MAC binding.

·     True—Enable.

·     False—Disable.

denyflow_age

Anti-spoofing flow table aging time for the virtual distributed switch (VDS), an integer in the range of 1 to 3600 seconds, for example, 300 seconds.

white_list

Whether to enable or disable the authentication-free user feature on OpenStack.

·     True—Enable.

·     False—Disable.

binddefaultrouter

Whether to enable or disable networking binding to the default virtual router on the VCFC-DC controller.

·     True—Enable.

·     False—Disable.

The Ocata and Pike plugins do not support this parameter.

auto_create_tenant_to_vcfc

Whether to enable or disable the feature of automatically creating tenants on the controller.

·     True—Enable.

·     False—Disable.

router_binding_public_vrf

Whether to use the public network VRF for creating a vRouter.

·     True—Use.

·     False—Do not use.

enable_subnet_dhcp

Whether to enable or disable DHCP for creating a vSubnet.

·     True—Enable.

·     False—Disable.

dhcp_lease_time

Valid time for vSubnet IP addresses obtained from the DHCP address pool in days, for example, 365 days.

firewall_type

Mode of the firewall created on the controller.

·     GATEWAY—Gateway type firewall, which is available only when the value of the resource_mode parameter is set to NFV or NFV_SHARE.

·     CGSR—Gateway service type firewall on a context. Firewalls of this type are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. Each CGSR type firewall uses an independent context.

·     CGSR_SHARE—Gateway service type firewall on a context. Firewalls of this type are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. All CGSR_SHARE type firewalls use the same context even if they belong to different tenants.

·     NFV_CGSR—Gateway service type firewall on a VNF. Firewalls of this type are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. Each NFV_CGSR type firewall uses an independent VNF.

lb_type

Mode of the load balancer created on the controller.

·     SERVICE_CHAIN—Service chain type load balancer. This type is available only when the value of the resource_mode parameter is set to NFV or NFV_SHARE. SERVICE_CHAIN load balancers that belong to one tenant use the same VNF. SERVICE_CHAIN load balancers that belong to different tenants use different VNFs.

·     SERVICE_CHAIN_SHARE—Service chain type load balancer, which is available only when the value of the resource_mode parameter is set to NFV or NFV_SHARE. All SERVICE_CHAIN_SHARE type load balancers share the same VNF even if they belong to different tenants.

·     GATEWAY—Gateway type load balancer. This type is available only when the value of the resource_mode parameter is set to NFV or NFV_SHARE.

·     CGSR—Gateway service type load balancer on a context. Load balancers of this type are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. CGSR type load balancers that belong to one tenant use the same context. CGSR type load balancers that belong to different tenants use different contexts.

·     CGSR_SHARE—Gateway service type load balancer on a context. Load balancers of this type are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. All CGSR_SHARE type load balancers use the same context even if they belong to different tenants.

·     NFV_CGSR—Gateway service type load balancer on a VNF. Load balancers of this type are available only when the value of the resource_mode parameter is set to CORE_GATEWAY. NFV_CGSR type load balancers that belong to one tenant use the same VNF. NFV_CGSR type load balancers that belong to different tenants use different VNFs.

resource_mode

Type of the resource created on the controller. The available values are as follows:

·     SELF_GATEWAY—Independent gateway resource.

·     NFV—VNF resource.

·     NFV_SHARE—VNF resource, which can be shared by multiple tenants.

·     CORE_GATEWAY—Gateway service resource.

auto_delete_tenant_to_vcfc

Whether to enable or disable the feature of automatically removing tenants from the controller.

·     True—Enable.

·     False—Disable.

auto_create_resource

Whether to enable or disable the feature of automatically creating resources.

·     True—Enable.

·     False—Disable.

nfv_ha

Whether to configure the NFV and NFV_SHARE resources to support stacking.

·     True—Support.

·     False—Do not support.

vds_name

Name of the VDS, for example, VDS1.

After deleting a VDS and recreating a VDS with the same name, you must perform the following tasks on the controller node for the new VDS to take effect:

·     Restart the neutron-server service.

·     Restart the h3c-agent service.

enable_metadata

Whether to enable or disable metadata for OpenStack.

·     True—Enable.

·     False—Disable.

If you enable this feature, you must set the enable_l3_router_rpc_notify parameter to True.

use_neutron_credential

Whether to use the OpenStack Neutron username and password to communicate with the controller.

·     True—Use.

·     False—Do not use.

enable_security_group

Whether to enable or disable the feature of deploying security group rules to the controller.

·     True—Enable.

·     False—Disable.

disable_internal_l3flow_offload

Whether to enable or disable intra-network traffic routing through the gateway.

·     True—Disable.

·     False—Enable.

firewall_force_audit

Whether to audit firewall policies synchronized to the controller by OpenStack. The default value is False for the Kilo 2015.1 plugins and True for plugins of other versions.

·     True—Audits firewall policies synchronized to the controller by OpenStack. The auditing state of the synchronized policies on the controller is True (audited).

·     False—Does not audit firewall policies synchronized to the controller by OpenStack. The synchronized policies on the controller retain their previous auditing state.

enable_l3_router_rpc_notify

Whether to enable or disable the feature of sending Layer 3 routing events through RPC.

·     True—Enable.

·     False—Disable.

output_json_log

Whether to output, in JSON format, the REST API messages exchanged between the VCF Neutron plug-ins and the controller to the OpenStack operating logs.

·     True—Enable.

·     False—Disable.

lb_enable_snat

Whether to enable or disable Source Network Address Translation (SNAT) for load balancers on the controller.

·     True—Enable.

·     False—Disable.

empty_rule_action

Set the action for security policies that do not contain any ACL rules on the controller.

·     permit

·     deny

enable_l3_vxlan

Whether to enable or disable the feature of using Layer 3 VXLAN IDs (L3VNIs) to mark Layer 3 flows between vRouters on the controller.

·     True—Enable.

·     False—Disable.

By default, this feature is disabled.

l3_vni_ranges

Set the value range for the L3VNI, for example, 10000:10100.

vendor_rpc_topic

RPC topic of the vendor. This parameter is required when the vendor needs to obtain Neutron data from the VCF Neutron plug-ins. The available values are as follows:

·     VENDOR_PLUGIN—Default value, which means that the parameter does not take effect.

·     DP_PLUGIN—RPC topic of DPtech.

The value of this parameter must be negotiated by the vendor and H3C.

vsr_descriptor_name

VNF descriptor name of the VNF virtual gateway resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the VCF controller.

vlb_descriptor_name

VNF descriptor name of the virtual load balancing resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the lb_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the VCF controller.

vfw_descriptor_name

VNF descriptor name of the virtual firewall resource created on VNF manager 3.0. This parameter is available only when the value of the resource_mode parameter is set to NFV or the value of the firewall_type parameter is set to NFV_CGSR. When you configure this parameter, make sure its value is the same as the VNF descriptor name specified on the VNF manager of the VCF controller.

hierarchical_port_binding_physicnets

Policy for OpenStack to select a physical VLAN when performing hierarchical port binding. The available values are as follows:

·     ANY—Default value, which means any VLAN is selected from all physical VLANs for VLAN ID assignment.

·     PREFIX—A VLAN is selected from all physical VLANs matching the specified prefix for VLAN ID assignment.

Only the Pike plug-ins support this parameter.

hierarchical_port_binding_physicnets_prefix

Prefix for matching physical VLANs. The default value is physicnet. This parameter is available only when you set the value of the hierarchical_port_binding_physicnets parameter to PREFIX.

Only the Pike plug-ins support this parameter.

network_force_flat

Whether to enable forcible conversion of an external network to a flat network. The value can only be set to True if the external network is a VXLAN.

directly_external

Whether traffic to the external network is directly forwarded by the gateway. The available values are as follows:

·     ANY—Traffic to the external network is directly forwarded by the gateway to the external network.

·     OFF—Traffic to the external network is forwarded by the gateway to the firewall and then to the external network.

·     SUFFIX—Traffic that matches the vRouter name suffix is forwarded by the gateway to the firewall and then to the external network.

directly_external_suffix

vRouter name suffix (DMZ for example). This parameter is available only when you set the value of the directly_external parameter to SUFFIX.

generate_vrf_based_on_router_name

Whether to use the vRouter names configured on OpenStack as the VRF names on the controller.

·     True—Use the names. Make sure each vRouter name configured on OpenStack is a case-sensitive string of 1 to 31 characters that contain only letters and digits.

·     False—Do not use the names.

By default, the vRouter names configured on OpenStack are not used as the VRF names on the controller.

 

Removing the VCF Neutron plug-ins

To remove the VCF Neutron plug-ins, first uninstall the plug-ins and then remove the VCF OpenStack package.

To remove the VCF Neutron plug-ins:

1.     Remove the VCF Neutron plug-ins.

[root@localhost ~]# h3c-vcfplugin controller uninstall

Remove service

Removed symlink /etc/systemd/system/multi-user.target.wants/h3c-agent.service.

Restore config files

Uninstallation complete.

For the VCF Neutron plug-ins of Kilo 2015.1, Liberty, Mitaka, or Ocata version, you are prompted whether to remove the database when removing the plug-ins.

¡     To remove the database, enter y. Removing the plug-ins also removes the connected database. Before removing the plug-ins, delete SERVICE_CHAIN type firewalls and GATEWAY or SERVICE_CHAIN type load balancers (if any) from OpenStack.

¡     To keep the database, enter n. When you install the VCF Neutron plug-ins of a new version, the plug-ins restore the configuration from the configuration file in the original database.

2.     Remove the VCF OpenStack package.

[root@localhost ~]# pip uninstall vcfc-dc-plugin

Uninstalling VCFC-DC-PLUGIN-E3103-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-vcfplugin

  /usr/lib/python2.7/site-packages/VCFC_DC_PLUGIN-E3103_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled VCFC-DC-PLUGIN-E3103-pike-2017.10

Upgrading the VCF Neutron plug-ins

CAUTION:

·     Services might be interrupted during the VCF Neutron plug-ins upgrade procedure.

·     The default parameter settings for VCF Neutron plug-ins might vary by OpenStack version (Kilo 2015.1, Liberty, Mitaka, and Ocata). Modify the default parameter settings for VCF Neutron plug-ins when upgrading the OpenStack version to ensure that the plug-ins have the same configurations before and after the upgrade.

 

To upgrade the VCF Neutron plug-ins, remove the current version first, and then install the new version. For information about installing the VCF Neutron plug-ins, see "Installing the VCF Neutron plug-ins." For information about removing the VCF Neutron plug-ins, see "Removing the VCF Neutron plug-ins."

Installing the Nova patch

Prerequisites

The Nova patch is included in the VCF OpenStack package. Perform the following steps to download the VCF OpenStack package from the H3C website:

1.     In the Web browser address bar, enter http://www.h3c.com/cn/Software_Download. Select SDN > H3C Virtual Converged Framework Controller, and download the VCF OpenStack package of the required version.

2.     Copy the VCF OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the VCF OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

Based on your network environment, perform either step 3 or step 4.

To install the Nova patch on the OpenStack compute node:

1.     Change the working directory to where the VCF OpenStack package (an .egg file) is saved, and install the package on the OpenStack compute node. The name of the VCF OpenStack package is VCFC_DC_PLUGIN-version1_version2-py2.7.egg. version1 represents the version of the package. version2 represents the version of OpenStack.

In this example, the VCF OpenStack package is saved to the path /root.

[root@localhost ~]# easy_install VCFC_DC_PLUGIN-E3102_pike_2017.10-py2.7.egg

2.     Install the Nova patch.

[root@localhost ~]# h3c-vcfplugin compute install

Install the nova patch

 

modifying:

/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py

modify success, backuped at: /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py.h3c_bak

 

 

NOTE:

The contents below the modifying: line indicate the modified open source Nova file and the backup path of the file before modification.

 

3.     (Optional.) If the networking type of the compute node is host-based overlay, perform the following steps:

a.     Stop the neutron-openvswitch-agent service on the compute node and disable the system from starting the service at startup.

[root@localhost ~]# service neutron-openvswitch-agent stop

[root@localhost ~]# systemctl disable neutron-openvswitch-agent.service

b.     Execute the neutron agent-list command on the controller node to identify whether the agent of the compute node exists in the database.

-     If the agent of the compute node does not exist in the database, go to the next step.

-     If the agent of the compute node exists in the database, execute the neutron agent-delete id command to delete the agent. The id argument represents the agent ID.

[root@localhost ~]# neutron agent-list

| id                                   | agent_type         | host     |

| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3 |

[root@localhost ~]# neutron agent-delete 25c3d3ac-5158-4123-b505-ed619b741a52

Deleted agent: 25c3d3ac-5158-4123-b505-ed619b741a52

c.     Use the vi editor on the compute node to open the nova.conf configuration file.

[root@localhost ~]# vi /etc/nova/nova.conf

d.     Press I to switch to the insert mode, and set the parameters in the nova.conf configuration file as follows. For descriptions of the parameters, see Table 3.

If the hypervisor type of the compute node is KVM, modify the nova.conf configuration file as follows:

[s1020v]

s1020v = True

member_status = True

[neutron]

ovs_bridge = vds1-br

If the hypervisor type of the compute node is VMware vCenter, modify the nova.conf configuration file as follows:

[DEFAULT]

compute_driver = vmwareapi.VMwareVCDriver

[vmware]

host_ip = 127.0.0.1

host_username = sdn

host_password = skyline123

cluster_name = vcenter

insecure = true

[s1020v]

s1020v = True

vds = VDS2

e.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the nova.conf file.
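If you prefer not to edit nova.conf interactively, the KVM fragment from step d can be appended with a heredoc. This sketch writes to a scratch file; on a real compute node you would target /etc/nova/nova.conf, and this simple append assumes the [s1020v] and [neutron] sections do not already exist in the file.

```shell
# Scratch copy for illustration; use /etc/nova/nova.conf on the compute node.
CONF=$(mktemp)

# Append the KVM host-based overlay settings from step d.
# Assumes [s1020v] and [neutron] are not already present in the file.
cat >> "$CONF" <<'EOF'
[s1020v]
s1020v = True
member_status = True

[neutron]
ovs_bridge = vds1-br
EOF
```

After changing the real file, restart the openstack-nova-compute service as described in step 5.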

4.     (Optional.) If the networking type of the compute node is network-based overlay, perform the following steps:

If the hypervisor type of the compute node is KVM, you do not need to install the Nova patch.

If the hypervisor type of the compute node is VMware vCenter, perform the following steps:

a.     Stop the neutron-openvswitch-agent service and disable the system from starting the service at startup.

[root@localhost ~]# service neutron-openvswitch-agent stop

[root@localhost ~]# systemctl disable neutron-openvswitch-agent.service

b.     Select Provision > Network Design > Domain from the top navigation tree of the controller webpage to identify whether the compute node is online. If the compute node is online, delete the compute node.

c.     Execute the neutron agent-list command on the controller node to identify whether the agent of the compute node exists in the database.

-     If the agent of the compute node does not exist in the database, go to the next step.

-     If the agent of the compute node exists in the database, execute the neutron agent-delete id command to delete the agent. The id argument represents the agent ID.

[root@localhost ~]# neutron agent-list

| id                                   | agent_type         | host        |

| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3    |

 

[root@localhost ~]# neutron agent-delete 25c3d3ac-5158-4123-b505-ed619b741a52

Deleted agent: 25c3d3ac-5158-4123-b505-ed619b741a52

d.     Use the vi editor to open the nova.conf configuration file.

[root@localhost ~]# vi /etc/nova/nova.conf

e.     Press I to switch to the insert mode, and set the parameters in the nova.conf configuration file as follows. For descriptions of the parameters, see Table 3.

[DEFAULT]

compute_driver = vmwareapi.VMwareVCDriver

[vmware]

host_ip = 127.0.0.1

host_username = sdn

host_password = skyline123

cluster_name = vcenter

insecure = True

[s1020v]

s1020v = False

vds = VDS2

uplink_teaming_policy = loadbalance_srcid

f.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the nova.conf file.
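In the agent-deletion substeps above, the agent ID must be copied from the neutron agent-list output by hand. A small awk filter can extract it instead. Here the heredoc replays the captured sample output from this guide; on the controller node you would pipe neutron agent-list directly into awk.

```shell
HOST=compute3

# Extract the Open vSwitch agent ID for $HOST from agent-list output.
# The heredoc replays the sample output; on the controller node use:
#   neutron agent-list | awk ...
AGENT_ID=$(awk -F'|' -v host="$HOST" '
    $3 ~ /Open vSwitch agent/ && $4 ~ host { gsub(/ /, "", $2); print $2 }
' <<'EOF'
| id                                   | agent_type         | host        |
| 25c3d3ac-5158-4123-b505-ed619b741a52 | Open vSwitch agent | compute3    |
EOF
)
echo "$AGENT_ID"
```

If $AGENT_ID is non-empty, pass it to neutron agent-delete "$AGENT_ID"; if it is empty, the agent does not exist in the database and no deletion is needed.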

Table 3 Parameters in the configuration file

Parameter

Description

s1020v

Whether to use the H3C S1020V vSwitch to forward traffic between vSwitches and traffic between the vSwitches and the external network.

·     True—Use the H3C S1020V vSwitch.

·     False—Do not use the H3C S1020V vSwitch.

member_status

Whether to enable modification of the status of members on OpenStack load balancers.

·     True—Enable.

·     False—Disable.

vds

VDS to which the host in the vCenter belongs. In this example, the host belongs to VDS2. For host-based overlay networking, you can specify only the VDS that the controller synchronizes to the vCenter. For network-based overlay networking, you can specify any existing VDS as needed.

ovs_bridge

Name of the bridge for the H3C S1020V vSwitch. Make sure the bridges created on all H3C S1020V vSwitches use the same name.

compute_driver

Name of the driver used by the compute node for virtualization.

host_ip

IP address used to log in to the vCenter, for example, 127.0.0.1.

host_username

Username for logging in to the vCenter, for example, sdn.

host_password

Password for logging in to the vCenter, for example, skyline123.

cluster_name

Name of the cluster in the vCenter environment, for example, vcenter.

insecure

Whether to enable or disable security check.

·     True—Do not perform security check.

·     False—Perform security check. This value is not supported in the current software version.

uplink_teaming_policy

Uplink teaming policy.

·     loadbalance_srcid—Source vPort-based routing.

·     loadbalance_ip—IP hash-based routing.

·     loadbalance_srcmac—Source MAC hash-based routing.

·     loadbalance_loadbased—Physical NIC load-based routing.

·     failover_explicit—Explicit failover order-based routing.

 

5.     Restart the openstack-nova-compute service.

[root@localhost ~]# service openstack-nova-compute restart

Verifying the installation

# Verify that the VCF OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

[root@localhost ~]# pip freeze | grep VCF

VCFC-DC-PLUGIN===E3103-pike-2017.10

# Verify that the openstack-nova-compute service is running. The service is operating correctly if its state is running.

[root@localhost ~]# service openstack-nova-compute status

nova-compute start/running, process 184
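The two checks above can be combined into one pass. The strings below are the sample outputs from this guide; on the compute node you would capture live values instead, for example with PKG=$(pip freeze | grep VCF) and SVC=$(service openstack-nova-compute status).

```shell
# Sample outputs from this guide; capture live values on the compute node.
PKG='VCFC-DC-PLUGIN===E3103-pike-2017.10'
SVC='nova-compute start/running, process 184'

ok=yes
case "$PKG" in *VCFC-DC-PLUGIN*) ;; *) ok=no ;; esac   # package installed?
case "$SVC" in *running*) ;; *) ok=no ;; esac          # service running?
echo "verification passed: $ok"
```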

Removing the Nova patch

You must remove the Nova patch before removing the VCF OpenStack package.

To remove the Nova patch:

1.     Remove the Nova patch.

[root@localhost ~]# h3c-vcfplugin compute uninstall

Uninstall the nova patch

2.     Remove the VCF OpenStack package.

[root@localhost ~]# pip uninstall vcfc-dc-plugin

Uninstalling VCFC-DC-PLUGIN-E3103-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-vcfplugin

  /usr/lib/python2.7/site-packages/VCFC_DC_PLUGIN-E3103_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled VCFC-DC-PLUGIN-E3103-pike-2017.10

Upgrading the Nova patch

CAUTION:

Services might be interrupted during the Nova patch upgrade procedure.

 

To upgrade the Nova patch, first remove the current version, and then install the new version. For information about installing the Nova patch, see "Installing the Nova patch." For information about removing the Nova patch, see "Removing the Nova patch."

Installing the openvswitch-agent patch

Prerequisites

The openvswitch-agent patch is included in the VCF OpenStack package. Perform the following steps to download the VCF OpenStack package from the H3C website:

1.     In the Web browser address bar, enter http://www.h3c.com/cn/Software_Download. Select SDN > H3C Virtual Converged Framework Controller, and download the VCF OpenStack package of the required version.

2.     Copy the VCF OpenStack package to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SCP.

 

 

NOTE:

If you decide to upload the VCF OpenStack package through FTP or TFTP, use the binary mode to avoid damage to the package.

 

Installation procedure

To install the openvswitch-agent patch:

1.     Change the working directory to where the VCF OpenStack package (an .egg file) is saved, and install the package on the OpenStack compute node. The name of the VCF OpenStack package is VCFC_DC_PLUGIN-version1_version2-py2.7.egg. version1 represents the version of the package. version2 represents the version of OpenStack.

[root@localhost ~]# easy_install VCFC_DC_PLUGIN-E3102_pike_2017.10-py2.7.egg

2.     Install the openvswitch-agent patch.

[root@localhost ~]# h3c-vcfplugin openvswitch install

3.     Restart the openvswitch-agent service.

[root@localhost ~]# service neutron-openvswitch-agent restart

Verifying the installation

# Verify that the VCF OpenStack package is correctly installed. If the correct software and OpenStack versions are displayed, the package is successfully installed.

[root@localhost ~]# pip freeze | grep VCF

VCFC-DC-PLUGIN===E3103-pike-2017.10

# Verify that the neutron-openvswitch-agent service is running. The service is operating correctly if its state is active (running).

[root@localhost ~]# service neutron-openvswitch-agent status

Redirecting to /bin/systemctl status  neutron-openvswitch-agent.service

neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent

   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)

   Active: active (running) since Mon 2016-12-05 16:58:18 CST; 18h ago

Main PID: 807 (neutron-openvsw)

Removing the openvswitch-agent patch

You must remove the openvswitch-agent patch before removing the VCF OpenStack package.

To remove the openvswitch-agent patch:

1.     Remove the openvswitch-agent patch.

[root@localhost ~]# h3c-vcfplugin openvswitch uninstall

2.     Remove the VCF OpenStack package.

[root@localhost ~]# pip uninstall vcfc-dc-plugin

Uninstalling VCFC-DC-PLUGIN-E3103-pike-2017.10:

  /usr/bin/h3c-agent

  /usr/bin/h3c-vcfplugin

  /usr/lib/python2.7/site-packages/VCFC_DC_PLUGIN-E3103_pike_2017.10-py2.7.egg

Proceed (y/n)? y

  Successfully uninstalled VCFC-DC-PLUGIN-E3103-pike-2017.10

Upgrading the openvswitch-agent patch

CAUTION:

Services might be interrupted during the openvswitch-agent patch upgrade procedure.

 

To upgrade the openvswitch-agent patch, first remove the current version, and then install the new version. For information about installing the openvswitch-agent patch, see "Installing the openvswitch-agent patch." For information about removing the openvswitch-agent patch, see "Removing the openvswitch-agent patch."

Configuring the metadata service for network nodes

OpenStack allows VMs to obtain metadata from network nodes through DHCP or the L3 gateway. H3C supports only the DHCP method. To configure the metadata service for network nodes:

1.     Download the OpenStack installation guide from the OpenStack official website and follow the installation guide to configure the metadata service for the network nodes.

2.     Configure the network nodes to provide metadata service through DHCP.

a.     Use the vi editor to open configuration file dhcp_agent.ini.

[root@network ~]# vi /etc/neutron/dhcp_agent.ini

b.     Press I to switch to the insert mode, and modify configuration file dhcp_agent.ini as follows:

force_metadata = True

Set the value to True for the force_metadata parameter to force the network nodes to provide metadata service through DHCP.

c.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the dhcp_agent.ini configuration file.
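The edit in substeps a through c can also be made without opening vi. This sketch sets force_metadata idempotently on a scratch copy; on a real network node, point CONF at /etc/neutron/dhcp_agent.ini and then restart the DHCP agent (the neutron-dhcp-agent service name is an assumption for a typical CentOS deployment).

```shell
# Scratch copy for illustration; use /etc/neutron/dhcp_agent.ini for real.
CONF=$(mktemp)
printf '[DEFAULT]\n' > "$CONF"

# Rewrite force_metadata if it is already present, otherwise append it,
# so rerunning the script never creates duplicate entries.
if grep -q '^force_metadata' "$CONF"; then
    sed -i 's/^force_metadata.*/force_metadata = True/' "$CONF"
else
    printf 'force_metadata = True\n' >> "$CONF"
fi
```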

 


FAQ

The Python tools cannot be installed using the yum command when a proxy server is used for Internet access. What should I do?

Configure HTTP proxy by performing the following steps:

1.     Make sure the server or the virtual machine can access the HTTP proxy server.

2.     At the CLI of the CentOS system, use the vi editor to open the yum.conf configuration file. If the yum.conf configuration file does not exist, this step creates the file.

[root@localhost ~]# vi /etc/yum.conf

3.     Press I to switch to the insert mode, and provide HTTP proxy information as follows:

¡     If the server does not require authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport

¡     If the server requires authentication, enter HTTP proxy information in the following format:
proxy = http://yourproxyaddress:proxyport
proxy_username=username
proxy_password=password

Table 4 describes the arguments in HTTP proxy information.

Table 4 Arguments in HTTP proxy information

Field

Description

username

Username for logging in to the proxy server, for example, sdn.

password

Password for logging in to the proxy server, for example, 123456.

yourproxyaddress

IP address of the proxy server, for example, 172.25.1.1.

proxyport

Port number of the proxy server, for example, 8080.

 

For example:

proxy = http://172.25.1.1:8080

proxy_username = sdn

proxy_password = 123456

4.     Press Esc to quit insert mode, and enter :wq to exit the vi editor and save the yum.conf file.
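The proxy settings above can be appended in one step. This sketch writes to a scratch copy and only adds the lines if no proxy entry exists yet; on a real system, point YUMCONF at /etc/yum.conf. The address, port, and credentials are the example values from Table 4.

```shell
# Scratch copy for illustration; use /etc/yum.conf on a real system.
YUMCONF=$(mktemp)
printf '[main]\ncachedir=/var/cache/yum\n' > "$YUMCONF"

# Append proxy settings only if none are configured yet.
if ! grep -q '^proxy' "$YUMCONF"; then
    cat >> "$YUMCONF" <<'EOF'
proxy = http://172.25.1.1:8080
proxy_username = sdn
proxy_password = 123456
EOF
fi
```

Omit the proxy_username and proxy_password lines if the proxy server does not require authentication.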