H3C VCFC-DC Controller Kubernetes Plug-In Installation Guide-E31xx-5W100


Overview

Kubernetes is an open-source container orchestration platform for automated deployment, scaling, and management of containerized applications.

Pods are the smallest deployable units of computing in Kubernetes. A Pod is a group of one or more tightly coupled containers that share networking and storage resources and a specification for how to run the containers.

Installation of the VCF Kubernetes plug-in allows interconnection between the Kubernetes platform and the VCFC-DC controller. The VCFC-DC controller can monitor traffic, deploy security policies, and provide networking services for Pods in Kubernetes.


Preparing for installation

Hardware requirements

Table 1 shows the minimum hardware requirements for installing the VCF Kubernetes plug-in on a physical server or virtual machine.

Table 1 Minimum hardware requirements

CPU: Quad-core

Memory size: 8 GB

Disk size: 50 GB

 

Software requirements

Table 2 shows the software requirements for installing the VCF Kubernetes plug-in.

Table 2 Software requirements

Kubernetes: Kubernetes 1.9/1.10

vSwitch:

·     Host-based overlay—For the vSwitch version information, see the release notes for the VCFC-DC controller.

·     Network-based overlay—Open vSwitch 2.9 and later.

 


Configuring the Kubernetes nodes

You must configure basic settings for Kubernetes nodes before installing the VCF Kubernetes plug-in.

Configuring a node in host-based overlay model

1.     Install the S1020V vSwitch. For the installation procedure, see H3C S1020V Installation Guide.

2.     Configure a VDS on the VCFC-DC controller and add the VDS configuration to the node.

In the following example, vds1-br is the vSwitch name, eth1 is the uplink interface of the vSwitch, vxlan_vds1-br is the VXLAN tunnel name, and 100.0.100.100 is the VTEP IP address.

$ ovs-vsctl add-br vds1-br

$ ovs-vsctl add-port vds1-br eth1

$ ovs-vsctl add-port vds1-br vxlan_vds1-br -- set interface vxlan_vds1-br type=vxlan options:remote_ip=flow options:local_ip=100.0.100.100 options:key=flow

3.     Configure a KVM-type compute domain on the controller and associate the domain with the VDS.

4.     Add the node to the hosts in the compute domain.

Configuring a node in network-based overlay model

1.     Install Open vSwitch.

$ yum install -y openvswitch

$ systemctl enable openvswitch.service

$ systemctl start openvswitch.service

2.     Install and start lldpad.

$ yum install -y lldpad

$ systemctl enable lldpad.service

$ systemctl start lldpad.service

3.     Enable LLDP message sending on the uplink interface (eno2 for example).

$ lldptool set-lldp -i eno2 adminStatus=rxtx

$ lldptool -T -i eno2 -V sysName enableTx=yes

$ lldptool -T -i eno2 -V portDesc enableTx=yes

$ lldptool -T -i eno2 -V sysDesc enableTx=yes

$ lldptool -T -i eno2 -V sysCap enableTx=yes

4.     Add an Open vSwitch bridge (br-eno2 for example) on the node, specify the OpenFlow version, and set the fail mode to secure.

$ ovs-vsctl add-br br-eno2

$ ovs-vsctl set bridge br-eno2 protocols=OpenFlow13

$ ovs-vsctl set-fail-mode br-eno2 secure

5.     Add an uplink interface (eno2 for example) to the Open vSwitch bridge and configure the interface settings.

$ ovs-vsctl add-port br-eno2 eno2

$ ovs-vsctl br-set-external-id br-eno2 uplinkInterface eno2
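The commands in steps 2 through 5 can be bundled into one setup script. The following is a minimal sketch, not part of the product: it assumes the uplink interface defaults to eno2, and by default it only prints each command (dry run) so it can be reviewed before execution on the node.

```shell
# Sketch of steps 2-5 above for a network-based overlay node.
# Assumptions: UPLINK defaults to eno2; DRY_RUN=1 (the default)
# prints each command instead of executing it on the node.
UPLINK=${UPLINK:-eno2}
BRIDGE="br-${UPLINK}"
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

# Step 3: enable LLDP message sending on the uplink interface.
run lldptool set-lldp -i "$UPLINK" adminStatus=rxtx
for tlv in sysName portDesc sysDesc sysCap; do
  run lldptool -T -i "$UPLINK" -V "$tlv" enableTx=yes
done

# Steps 4 and 5: create the bridge, set the OpenFlow version and
# fail mode, then attach and record the uplink interface.
run ovs-vsctl add-br "$BRIDGE"
run ovs-vsctl set bridge "$BRIDGE" protocols=OpenFlow13
run ovs-vsctl set-fail-mode "$BRIDGE" secure
run ovs-vsctl add-port "$BRIDGE" "$UPLINK"
run ovs-vsctl br-set-external-id "$BRIDGE" uplinkInterface "$UPLINK"
```

With the dry run enabled, inspect the printed commands, then rerun the script with DRY_RUN=0 on the node itself.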


Installing the VCF Kubernetes plug-in

Loading the plug-in Docker image

Follow these steps to load the plug-in Docker image on the master and on each node:

1.     Obtain the VCF Kubernetes plug-in Docker image. Then copy the image to the installation directory on the server or virtual machine, or upload it to the installation directory through FTP, TFTP, or SFTP.

The name of the image is in the VCFC_DC_NET_PLUGIN-version.tar format, where version represents the image version number.

 

CAUTION:

To avoid damaging the image, use the binary mode when you upload the image through FTP or TFTP.

 

2.     Load the Docker image.

$ docker load -i VCFC_DC_NET_PLUGIN-E3103.tar
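Because the file name follows the VCFC_DC_NET_PLUGIN-version.tar format, the version number can be read straight off the name. A small illustration, using the E3103 name from the command above:

```shell
# Illustration: recover the version number from the fixed
# VCFC_DC_NET_PLUGIN-version.tar naming format.
file="VCFC_DC_NET_PLUGIN-E3103.tar"
version=${file#VCFC_DC_NET_PLUGIN-}   # strip the fixed prefix
version=${version%.tar}               # strip the .tar suffix
echo "$version"                       # prints E3103
```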

Installing the plug-in

You must install the plug-in on the master.

To install the plug-in:

1.     Obtain the configuration file of the VCF Kubernetes plug-in. The configuration file name is in the VCFC_DC_NET_PLUGIN-version.yaml format, where version represents the version number.

2.     Copy the file to the installation directory of the master, or upload it to the installation directory through FTP, TFTP, or SFTP.

 

CAUTION:

To avoid damaging the file, use the binary mode when you upload the configuration file through FTP or TFTP.

 

3.     Modify the configuration file.

a.     Use the vi editor to open the configuration file.

$ vi VCFC_DC_NET_PLUGIN-E3103.yaml

b.     Press I to switch to insert mode, and then modify the configuration file. For information about the parameters, see Table 3.

kind: ConfigMap

apiVersion: v1

metadata:

  name: vcfc-net-plugin

  namespace: kube-system

data:

  etcd_servers: "http://192.168.0.10:2379"

  k8s_api_server: "https://192.168.0.20:6443"

  k8s_ca: "/etc/vcfc-net-plugin/ca.crt"

  k8s_key: "/etc/vcfc-net-plugin/client.key"

  k8s_cert: "/etc/vcfc-net-plugin/client.crt"

  k8s_token: ""

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: vcfc-net-master

  namespace: kube-system

data:

  vcfc_url: https://192.168.0.32:8443

  vcfc_username: "sdn"

  vcfc_password: "sdn123456"

  vcfc_domain: "sdn"

  vcfc_client_timeout: "1800"

  vcfc_client_retry: "10"

  netoverlay_vlan_ranges: "node01:1:100,node02:101:200"

  log_dir: "/var/log/vcfc-net-plugin/"

  log_level: "1"

  bind_host: "0.0.0.0"

  bind_port: "9797"

  protocol: "http"

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: vcfc-net-agent

  namespace: kube-system

data:

  net_masters: "auto"

  net_master_app_name: "vcfc-net-master"

  net_master_app_namespace: "kube-system"

  net_master_protocol: "http"

  net_master_port: "9797"

  overlay_mode: "auto"

  log_dir: "/var/log/vcfc-net-plugin/"

  log_level: "1"

  default_security_policy: "permit"

4.     Install the plug-in.

$ kubectl create -f VCFC_DC_NET_PLUGIN-E3103.yaml

5.     Verify the installation. If the Pods are in Running state, the plug-in is installed correctly.

$ kubectl get pods -n kube-system | grep vcfc
vcfc-net-agent-mtwkl 1/1 Running 0 5d7h
vcfc-net-agent-rt2s6 1/1 Running 0 5d7h
vcfc-net-master-79bc68885c-2s9jm 1/1 Running 0 5d7h

The following table describes parameters in the configuration file of the VCF Kubernetes plug-in.

Table 3 Parameters in the configuration file of the VCF Kubernetes plug-in

Parameter

Description

etcd_servers

etcd service API address.

k8s_api_server

K8s API server interface address.

k8s_ca

Client CA file of the K8s API server. This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address.

k8s_key

Client X.509 certificate private key file of the K8s API server. This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address. It is used together with k8s_cert.

This parameter is not required when k8s_token authentication is used.

k8s_cert

Client X.509 certificate file of the K8s API server. This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address.

This parameter is not required when k8s_token authentication is used. It is used together with k8s_key.

k8s_token

Client authentication token of the K8s API server. This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address.

This parameter is not required when k8s_key and k8s_cert authentication is used.

vcfc_url

HTTPS URL address of the VCFC-DC controller.

vcfc_username

Username for logging in to the VCFC-DC controller.

vcfc_password

Password for logging in to the VCFC-DC controller.

vcfc_domain

Name of the domain where the VCFC-DC controller resides.

vcfc_client_timeout

Amount of time to wait for a response from the VCFC-DC controller, in seconds.

vcfc_client_retry

Maximum number of connection request attempts to the VCFC-DC controller.

netoverlay_vlan_ranges

VLAN range for a node in network-based overlay mode, in the node_name:VLAN_min:VLAN_max format. To specify ranges for multiple nodes, separate the entries with commas, for example, node01:1:100,node02:101:200.

log_dir

Log directory.

log_level

Log level.

bind_host

API-bound address.

bind_port

API-bound port number. As a best practice, set this port to the same value as net_master_port.

protocol

API protocol. Only HTTP is supported.

net_masters

IP address of the VCFC network master. auto means automatically obtaining the IP address.

net_master_app_name

Application name of the VCFC network master.

net_master_app_namespace

Application namespace of the VCFC network master.

net_master_protocol

API protocol of the VCFC network master.

net_master_port

API port number of the VCFC network master.

overlay_mode

Overlay mode of the node:

·     net—Network-based overlay

·     host—Host-based overlay

·     auto—Automatic selection of overlay mode based on the Open vSwitch configuration

default_security_policy

Default security policy. This parameter takes effect only in network-based overlay mode.

·     permit

·     deny

 
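To make the netoverlay_vlan_ranges format concrete, the following sketch splits the example value from the configuration file above into per-node VLAN ranges. The echoed descriptions are only for illustration.

```shell
# Illustration of the netoverlay_vlan_ranges format: comma-separated
# entries, each in node_name:VLAN_min:VLAN_max form.
ranges="node01:1:100,node02:101:200"
echo "$ranges" | tr ',' '\n' | while IFS=':' read -r node vmin vmax; do
  echo "$node uses VLANs $vmin through $vmax"
done
```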

Removing the plug-in

To remove the plug-in, execute the following command:

$ kubectl delete -f VCFC_DC_NET_PLUGIN-E3103.yaml

Upgrading the plug-in

1.     Load the new plug-in Docker image. For the procedure, see "Loading the plug-in Docker image."

2.     Upload and modify the plug-in configuration file. For the procedure, see "Installing the plug-in."

3.     Upgrade the plug-in.

$ kubectl apply -f VCFC_DC_NET_PLUGIN-E3104.yaml


Configuration example

Installation of the VCF Kubernetes plug-in allows interconnection between the VCFC-DC controller and the Kubernetes platform. The following example configures Pods to be managed by the VCFC-DC controller.

To configure the Pods:

1.     Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller.

2.     Create and modify a Pod configuration file, postgres-pod.yaml for example.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  labels:

    h3c.io/network_id: 9e9af886-e038-4c94-8573-11b89079196d

    h3c.io/tenant_id: 14ac7fc1-50d4-409a-ad76-4a0c35f429f3

    h3c.io/qos_policy_id: 38b51db9-cc1d-4b07-872e-cf2644bfc057

    h3c.io/security_group_id: 39b70d60-8bfd-4b27-bb4d-4b8f8955a2e6

spec:

  containers:

  - name: postgres

    image: postgres

Parameter description:

·     h3c.io/network_id—UUID of the VCFC-DC virtual network. This parameter is required.

·     h3c.io/tenant_id—UUID of the VCFC-DC tenant. This parameter is required.

·     h3c.io/qos_policy_id—UUID of the VCFC-DC network policy. This parameter is optional.

·     h3c.io/security_group_id—UUID of the VCFC-DC security policy. This parameter is optional.

3.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

4.     Verify that the Pod is displayed and manageable on the Virtual Port page on the VCFC-DC controller.
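Because the two h3c.io label UUIDs are required, a quick pre-flight check on the manifest can catch a missing label before the kubectl create step. This is a hypothetical helper, not part of the product; a minimal stand-in manifest is written here so the check is self-contained.

```shell
# Hypothetical pre-flight check: confirm the required h3c.io labels
# exist in a Pod manifest before running kubectl create.
# A minimal stand-in manifest (label values from the example above)
# is written so the check can run anywhere.
cat > /tmp/postgres-pod.yaml <<'EOF'
metadata:
  labels:
    h3c.io/network_id: 9e9af886-e038-4c94-8573-11b89079196d
    h3c.io/tenant_id: 14ac7fc1-50d4-409a-ad76-4a0c35f429f3
EOF

missing=0
for label in h3c.io/network_id h3c.io/tenant_id; do
  grep -q "$label:" /tmp/postgres-pod.yaml || { echo "missing required label: $label"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "required labels present"
```

Run the same grep loop against the real postgres-pod.yaml before step 3; a nonzero missing count means the Pod would be created without the labels the plug-in needs.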