H3C SeerEngine-DC Controller Kubernetes Plug-Ins Installation Guide-E61xx-5W105


Overview

Kubernetes is an open-source container orchestration platform for automated deployment, scaling, and management of containerized applications.

Pods are the smallest deployable units of computing in Kubernetes. A Pod is a group of one or more tightly coupled containers that share network resources and a file system, together with a specification for how to run the containers.

Installing the SDN Kubernetes plug-in allows Pods in the Kubernetes cluster to come online on the SeerEngine-DC controller, so that the controller can monitor traffic, deploy security policies, and provide networking services for the Pods.


Preparing for installation

Hardware requirements

Table 1 shows the minimum hardware requirements for installing the SDN Kubernetes plug-in on a physical server or virtual machine.

Table 1 Minimum hardware requirements

CPU: Quad-core

Memory size: 8 GB

Disk size: 50 GB

 

Software requirements

Table 2 shows the software requirements for installing the SDN Kubernetes plug-in.

Table 2 Software requirements

Item: Kubernetes
Supported versions: 1.9.x to 1.21.x

Item: vSwitch
Supported versions:

·     Host-based overlay—For the vSwitch version information, see the release notes for the SeerEngine-DC controller.

·     Network-based overlay—Open vSwitch 2.9 and later. For the compatibility between the Open vSwitch and operating system kernel versions, see Table 3.

 

Table 3 Compatibility between the Open vSwitch and operating system kernel versions

Open vSwitch version    Linux kernel version
2.9.x                   3.10 to 4.13
2.10.x                  3.16 to 4.17
2.11.x                  3.16 to 4.18
2.12.x                  3.16 to 5.0
2.13.x                  3.16 to 5.0
2.14.x                  3.16 to 5.5
2.15.x                  3.16 to 5.8
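Before installing Open vSwitch, you can check the node's kernel version and pick a compatible Open vSwitch release from the table above. This is a standard Linux command, not specific to the plug-in:

$ uname -r    # compare the reported kernel version against the table above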

 


Configuring the Kubernetes nodes

You must configure basic settings for Kubernetes nodes before installing the SDN Kubernetes plug-in.

Configuring a node in host-based overlay model

1.     Install the S1020V vSwitch. For the installation procedure, see H3C S1020V Installation Guide.

2.     Configure a VDS on the SeerEngine-DC controller and add the VDS configuration to the node.

The following configuration example uses vds1-br as the vSwitch name, eth1 as the uplink interface, vxlan_vds1-br as the VXLAN tunnel name, and 100.0.100.100 as the VTEP IP. Make sure the host-based overlay nodes can reach each other through their VTEP IPs.

$ ovs-vsctl add-br vds1-br

$ ovs-vsctl add-port vds1-br eth1

$ ovs-vsctl add-port vds1-br vxlan_vds1-br -- set interface vxlan_vds1-br type=vxlan options:remote_ip=flow options:local_ip=100.0.100.100 options:key=flow

$ ip link set vds1-br up

$ ip addr add 100.0.100.100/16 dev vds1-br

To prevent the VDS bridge from losing its IP address after a node reboot, a root user can make the IP address configuration persistent as follows:

a.     Use the vi editor to open the /etc/profile file, press i to switch to insert mode, and then add the following two lines at the end of the file.

ip link set vds1-br up

ip addr add 100.0.100.100/16 dev vds1-br

b.     Press Esc to enter command mode and enter :wq to save the file and quit the vi editor.

A non-root user can perform this operation only with write access to the /etc/profile file, and must add sudo before the ip link set vds1-br up and ip addr add 100.0.100.100/16 dev vds1-br commands (see the sketch after this procedure).

3.     Configure a KVM-type compute domain on the controller and associate the domain with the VDS.

4.     Add the node to the hosts in the compute domain.
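For reference, the following is a minimal sketch of the persistent configuration for a non-root user, assuming the user has sudo privileges and the same bridge name and VTEP IP as in the example above, followed by commands you can use to verify the bridge and VTEP address:

# Lines added to /etc/profile by a non-root user (adjust the bridge name and IP as needed)
sudo ip link set vds1-br up
sudo ip addr add 100.0.100.100/16 dev vds1-br

# Verify the bridge, uplink port, and VXLAN port, and the VTEP address
$ ovs-vsctl show
$ ip addr show dev vds1-br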

Configuring a node in network-based overlay model

1.     Install a version of Open vSwitch compatible with the kernel version of the operating system. See Table 3 for the compatibility between the Open vSwitch and operating system kernel versions.

$ yum install -y openvswitch

$ systemctl enable openvswitch.service

$ systemctl start openvswitch.service

2.     Install and start lldpd.

$ yum install -y lldpd

$ systemctl enable lldpd.service

$ systemctl start lldpd.service

3.     Add an Open vSwitch bridge (br-eno2 for example) on the node, specify the OpenFlow version, and set the fail mode to secure.

$ ovs-vsctl add-br br-eno2

$ ovs-vsctl set bridge br-eno2 protocols=OpenFlow13

$ ovs-vsctl set-fail-mode br-eno2 secure

4.     Add an uplink interface (eno2 for example) to the Open vSwitch bridge and configure the interface settings.

$ ovs-vsctl add-port br-eno2 eno2

$ ovs-vsctl br-set-external-id br-eno2 uplinkInterface eno2

5.     (Optional.) To deploy K8s on a bare metal server, add the following settings to the OVS bridge: the UUID of the bare metal vPort that has come online on the controller (for example, 1e10786f-f894-533f-838c-23c2766ed1d1), the UUID of the virtual link layer network where the vPort resides (for example, 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6), and the management network gateway of the K8s cluster (for example, 10.10.0.254).

$ ovs-vsctl br-set-external-id br-eno2 uplinkNetworkId 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6

$ ovs-vsctl br-set-external-id br-eno2 uplinkPortId 1e10786f-f894-533f-838c-23c2766ed1d1

$ ovs-vsctl br-set-external-id br-eno2 managementNetworkGw 10.10.0.254

6.     (Optional.) To deploy K8s on a VM created in trunk port mode on OpenStack, perform the following tasks:

a.     Add the following settings for the OVS bridge:

-     UUID of the trunk port on OpenStack, for example, 1e10786f-f894-533f-838c-23c2766ed1d1.

-     UUID of the virtual link layer network where the trunk port resides, for example, 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6.

-     Management network gateway of the K8s cluster, for example, 10.10.0.254.

-     Connected cloud scenario. The value can only be OpenStack.

-     Virtualization type. The value can only be KVM.

-     Access type. The value can only be netoverlay.

b.     Configure lldpd on the host where the VM resides. For the configuration procedure, see steps 2 and 3.

$ ovs-vsctl br-set-external-id br-eno2 uplinkNetworkId 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6

$ ovs-vsctl br-set-external-id br-eno2 uplinkPortId 1e10786f-f894-533f-838c-23c2766ed1d1

$ ovs-vsctl br-set-external-id br-eno2 managementNetworkGw 10.10.0.254

$ ovs-vsctl br-set-external-id br-eno2 cloud openstack

$ ovs-vsctl br-set-external-id br-eno2 virtType kvm

$ ovs-vsctl br-set-external-id br-eno2 accessType netoverlay

7.     To deploy K8s in Ironic mode on OpenStack, perform the following tasks (see the verification sketch after this procedure):

a.     Add the following settings for the OVS bridge:

-     UUID of the corresponding uplink port on OpenStack, for example, 1e10786f-f894-533f-838c-23c2766ed1d1.

-     UUID of the virtual link layer network where the uplink port resides, for example, 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6.

-     Management network gateway of the K8s cluster, for example, 10.10.0.254.

-     Cloud scenario with which the OVS bridge interoperates. The value can only be OpenStack.

-     Virtualization type. The value can only be Ironic.

-     Overlay mode. The value can only be netoverlay.

b.     Configure the lldpd service on the host where the VM resides. For the configuration procedure, see steps 2 and 3.

$ ovs-vsctl br-set-external-id br-eno2 uplinkNetworkId 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6

$ ovs-vsctl br-set-external-id br-eno2 uplinkPortId 1e10786f-f894-533f-838c-23c2766ed1d1

$ ovs-vsctl br-set-external-id br-eno2 managementNetworkGw 10.10.0.254

$ ovs-vsctl br-set-external-id br-eno2 cloud openstack

$ ovs-vsctl br-set-external-id br-eno2 virtType ironic

$ ovs-vsctl br-set-external-id br-eno2 accessType netoverlay
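The external IDs configured in steps 5 through 7 can be verified directly on the OVS bridge. The following sketch uses standard ovs-vsctl commands and the example bridge name br-eno2:

$ ovs-vsctl br-get-external-id br-eno2                 # list all external IDs set on the bridge
$ ovs-vsctl br-get-external-id br-eno2 uplinkPortId    # read a single external ID
$ ovs-vsctl show                                       # confirm the bridge, uplink port, and fail mode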

 


Installing the SDN Kubernetes plug-in

Loading the plug-in Docker image

Follow these steps to load the plug-in Docker image on the master and on each node:

1.     Obtain the SDN Kubernetes plug-in Docker image package, and then save the package to the installation directory on the server or virtual machine. The package name is in the SeerEngine_DC_NET_PLUGIN-version.tar.gz format, where version represents the version number.

 

CAUTION:

Alternatively, you can upload the package to the installation directory through FTP, TFTP, or SFTP. To avoid damaging the package, use the binary mode when you upload the package through FTP or TFTP.

 

IMPORTANT:

The image package required depends on the server architecture:

·     x86_64 server—SeerEngine_DC_NET_PLUGIN-version.tar.gz

·     ARM server—SeerEngine_DC_NET_PLUGIN-version-ARM64.tar.gz

 

2.     Decompress the package.

$ tar -xzvf SeerEngine_DC_NET_PLUGIN-E3606.tar.gz

SeerEngine_DC_NET_PLUGIN-E3606.tar

SeerEngine_DC_NET_PLUGIN-E3606.yaml

SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml

webhook-create-signed-cert.sh

3.     Load the Docker image.

$ docker load -i SeerEngine_DC_NET_PLUGIN-E3606.tar
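After the load completes, you can confirm that the image is present in the local Docker image list. The grep filter below is only an assumption about the image name; adjust it to the repository name printed by docker load:

$ docker images | grep -i plugin    # filter pattern is an assumption; adjust to the actual image name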

Installing the plug-in

After loading the Docker image on the master and nodes, install the plug-in on the master.

To install the plug-in on the master:

1.     Upload and execute the preprocessing script to provide a certificate for the webhook service in the sndc-net-master plug-in.

a.     Obtain preprocessing script webhook-create-signed-cert.sh and upload it to the installation directory on the master.

b.     Execute the script.

You only need to execute the script on one master when multiple masters exist.

$ sh webhook-create-signed-cert.sh

2.     Obtain the SDN Kubernetes plug-in configuration files SeerEngine_DC_NET_PLUGIN-version.yaml and SeerEngine_DC_NET_PLUGIN-version.crd.yaml and copy them to the installation directory on the master, where version represents the version number.

You only need to perform this task on one master when multiple masters exist.

 

CAUTION:

Alternatively, you can upload the files to the installation directory through FTP, TFTP, or SFTP. To avoid damaging the files, use the binary mode when you upload the files through FTP or TFTP.

 

3.     Modify the configuration file.

a.     Use the vi editor to open the configuration file.

$ vi SeerEngine_DC_NET_PLUGIN-E3606.yaml

b.     Press i to switch to insert mode, and then modify the configuration file. For information about the parameters, see Table 4.

kind: ConfigMap

apiVersion: v1

metadata:

  name: sndc-net-plugin

  namespace: kube-system

data:

  etcd_servers: "https://192.168.0.10:2379,https://192.168.0.11:2379,https://192.168.0.12:2379"

  etcd_certfile: "/etc/sndc-net-plugin/etcd.crt"

  etcd_keyfile: "/etc/sndc-net-plugin/etcd.key"

  etcd_cafile: "/etc/sndc-net-plugin/etcd-ca.crt"

  k8s_api_server: "https://192.168.0.20:6443"

  k8s_ca: "/etc/sndc-net-plugin/ca.crt"

  k8s_key: "/etc/sndc-net-plugin/client.key"

  k8s_cert: "/etc/sndc-net-plugin/client.crt"

  k8s_token: ""

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: sndc-net-master

  namespace: kube-system

data:

  sndc_url: http://192.168.0.32:30000

  sndc_domain: "sdn"

  sndc_client_timeout: "60"

  sndc_client_retry: "3"

  openstack_url: "http://99.0.88.40:5000/v3"

  openstack_username: "admin"

  openstack_password: "123456"

  openstack_projectname: "admin"

  openstack_projectdomain: "Default"

  netoverlay_vlan_ranges: "node01:1:100,node02:101:200"

  log_dir: "/var/log/sndc-net-plugin/"

  log_level: "1"

  bind_host: "0.0.0.0"

  bind_port: "9797"

  protocol: "http"

  webhook_bind_port: "9898"

  default_network_id: ""

---

apiVersion: v1

kind: Secret

metadata:

  name: sdn-master-secret

  namespace: kube-system

type: Opaque

data:

  username: YWRtaW4=

  password: YWRtaW5AMTIz

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: sndc-net-agent

  namespace: kube-system

data:

  net_masters: "auto"

  net_master_app_name: "sndc-net-master"

  net_master_app_namespace: "kube-system"

  net_master_protocol: "http"

  net_master_port: "9797"

  overlay_mode: "auto"

  log_dir: "/var/log/sndc-net-plugin/"

  log_level: "1"

  default_security_policy: "permit"

  host_networks: "192.168.10.0/24,192.168.2.0/24"

  host_to_container_network: "172.70.0.0/16"

  container_to_host_network: "172.60.0.0/16"

  node_port_net_id: ""

  default_mtu: "0"

  sync_flows_interval: "0"

  service_strategy: "0"

  service_ip_cidr: "10.68.0.0/16"

  enable_flows_pre_distribution: "true"

  clusterIp_publish: "false"

  cloud_region_name: ""

---

apiVersion: admissionregistration.k8s.io/v1beta1

kind: ValidatingWebhookConfiguration

metadata:

  name: validation-webhook-cfg

  labels:

    app: admission-webhook-ippool

webhooks:

  - name: validate.sdnc.io

    failurePolicy: Fail

    clientConfig:

      service:

        name: sndc-net-master-webhook

        namespace: kube-system

        path: "/v1.0/validate"

      caBundle: ""

    rules:

      - operations: [ "CREATE", "UPDATE", "DELETE" ]

        apiGroups: ["sdnc.io"]

        apiVersions: ["v1"]

        resources: ["ipv4pools","ipv6pools"]

4.     Install the plug-in.

 

IMPORTANT:

Before installing the plug-in, modify the apiVersion parameter for the resources in the configuration files according to the K8s cluster version.

 

$ kubectl create -f SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml

$ kubectl create -f SeerEngine_DC_NET_PLUGIN-E3606.yaml

5.     Verify the installation. If the Pods are in Running state, the plug-in is installed correctly.

$ kubectl get pods -n kube-system | grep sndc
sndc-net-agent-mtwkl 1/1 Running 0 5d7h
sndc-net-agent-rt2s6 1/1 Running 0 5d7h
sndc-net-master-79bc68885c-2s9jm 1/1 Running 0 5d7h
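If a Pod is not in Running state, generic kubectl inspection commands can help locate the issue. The Pod name below is taken from the sample output above:

$ kubectl describe pod sndc-net-master-79bc68885c-2s9jm -n kube-system
$ kubectl logs sndc-net-master-79bc68885c-2s9jm -n kube-system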

The following table describes parameters in the configuration file of the SDN Kubernetes plug-in.

Table 4 Parameters in the configuration file of the SDN Kubernetes plug-in

Parameter

Description

etcd_servers

etcd service API address. You can configure multiple addresses for an etcd cluster and use commas to separate the addresses. The address can be an HTTP or HTTPS address. To use an HTTPS address (for example, https://192.168.0.10:2379), you must configure the etcd_certfile, etcd_keyfile, and etcd_cafile parameters.

etcd_certfile

etcd client x509 certificate file. The value is the certificate file path, for example, /etc/sndc-net-plugin/etcd.crt. If the path does not exist, create it and set the permission to 755.

This parameter is valid only when the etcd_servers value is an HTTPS address.

etcd_keyfile

Private key file for the etcd client x509 certificate. The value is the private key file path, for example, /etc/sndc-net-plugin/etcd.key. If the path does not exist, create it and set the permission to 755.

This parameter is valid only when the etcd_servers value is an HTTPS address.

etcd_cafile

etcd client CA file. The value is the CA file path. If the path does not exist, create it and set the permission to 755.

This parameter is valid only when the etcd_servers value is an HTTPS address.

k8s_api_server

K8s API server interface address.

k8s_ca

Client CA file of the K8s API server. The value is the CA file path, for example, /etc/sndc-net-plugin/ca.crt. If the path does not exist, create it and set the permission to 755.

This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address.

k8s_key

Client X.509 certificate private key file of the K8s API server. The value is the private key file, for example, /etc/sndc-net-plugin/client.key. If the path does not exist, create it and set the permission to 755.

This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address. It is used together with k8s_cert.

This parameter is not required when k8s_token authentication is used.

k8s_cert

Client X.509 certificate file of the K8s API server. The value is the certificate file path, for example, /etc/sndc-net-plugin/client.crt. If the path does not exist, create it and set the permission to 755.

This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address.

This parameter is used together with k8s_key. It is not required when k8s_token authentication is used.

k8s_token

Client authentication token of the K8s API server. This parameter is valid only when the value of the k8s_api_server parameter is an HTTPS address.

This parameter is not required when k8s_key and k8s_cert authentication is used.

sndc_url

URL address for logging in to Unified Platform, for example, http://192.168.0.32:30000.

sndc_domain

Name of the domain where the SeerEngine-DC controller resides.

sndc_client_timeout

The amount of time waiting for a response from the SeerEngine-DC controller, in seconds.

sndc_client_retry

Maximum transmissions of connection requests to the SeerEngine-DC controller.

openstack_url

OpenStack Keystone authentication address.

openstack_username

Username for accessing OpenStack.

openstack_password

Password for accessing OpenStack.

openstack_projectname

OpenStack project name.

openstack_projectdomain

Name of the domain where the OpenStack project resides.

netoverlay_vlan_ranges

VLAN range for a node in network-based overlay mode, in the format of node_name:VLAN_min:VLAN_max. For more than one VLAN range, use commas to separate them.

log_dir

Log directory.

log_level

Log level.

bind_host

API-bound address.

bind_port

API-bound port number. As a best practice, bind the API to the net_master_port.

protocol

API protocol. Only HTTP is supported.

webhook_bind_port

Webhook service port number. As a best practice, set the same value as the webhook-port parameter.

default_network_id

UUID of the default virtual link layer network where the containers come online. If no virtual link layer network is configured for the containers, the containers come online on the default network.

username (sdn-master-secret)

Username for logging in to Unified Platform. You must encode and provide the username in its Base64 format by using the echo -n 'username' | base64 command. For example, if the username is admin, provide its Base64 format of YWRtaW4= for this parameter. To decode the YWRtaW4= username, execute the echo -n 'YWRtaW4=' | base64 -d command.

password (sdn-master-secret)

Password for logging in to Unified Platform. You must encode and provide the password in its Base64 format by using the echo -n 'password' | base64 command. For example, if the password is admin@123, provide its Base64 format of YWRtaW5AMTIz for this parameter. To decode the YWRtaW5AMTIz password, execute the echo -n 'YWRtaW5AMTIz' | base64 -d command.

net_masters

IP address of the SDNC network master. auto means automatically obtaining the IP address.

net_master_app_name

Application name of the SDNC network master.

net_master_app_namespace

Application namespace of the SDNC network master.

net_master_protocol

API protocol of the SDNC network master.

net_master_port

API port number of the SDNC network master.

overlay_mode

Overlay mode of the node:

·     net—Network-based overlay

·     host—Host-based overlay

·     auto—Automatic selection of the overlay mode based on the Open vSwitch configuration

default_security_policy

Default security policy. This parameter takes effect only in network-based overlay mode.

·     permit

·     deny

host_networks

Network segment where the host is located. For multiple network segments, use commas to separate them.

host_to_container_network

NAT address network segment used by the host for accessing a container. This network segment cannot conflict with other network segments.

You can leave this parameter unconfigured in a bare metal scenario.

container_to_host_network

NAT address network segment used by a container for accessing the host. This network segment cannot conflict with other network segments.

You can leave this parameter unconfigured in a bare metal scenario.

node_port_net_id

Virtual link layer network UUID for the NodePort function. After the UUID is specified, a vPort will automatically come online for each node to provide NodePort services.

You are not required to configure this parameter if NodePort is not used.

default_mtu

Default MTU of a container NIC. The value is 0 by default, indicating that the default MTU is 1500. You can set the MTU to 1450 when the controller interoperates with OpenStack.

sync_flows_interval

Interval at which the plug-in polls the endpoints and member Pods for changes. If a change is detected, the flow table is refreshed.

The value is an integer in the range of 0 to 60, in seconds. The default value is 0. The smaller the value, the more frequently the flow table is refreshed. As a best practice, configure the value in the range of 0 to 10.

service_strategy

Load balancing policy for the ClusterIp service:

·     0—IP address-based load balancing policy.

·     1—IP address- and port number-based load balancing policy.

The default value is 0.

The load balancing policy for the ClusterIp service is disabled if flow table pre-distribution is enabled.

This parameter takes effect only when a Pod accesses a ClusterIp service.

service_ip_cidr

ClusterIp service IP address segment of the K8s cluster.

enable_flows_pre_distribution

Whether to enable flow table pre-distribution for Service ClusterIP.

The default value is True.

The load balancing policy for the ClusterIp service is disabled if flow table pre-distribution is enabled.

clusterIp_publish

Whether to enable publishing of ClusterIP routes to EVPN Fabric.

The default value is False. When the value is True, the upstream subnet must be bound to a virtual router, and a routing table must be bound to the virtual router.

The value True is not supported in OpenStack scenarios. IPv6 static routes, container networking for host backends, and host-based overlay are not supported.

When the value is changed from True to False, the routing table entries already delivered by the controller are not deleted.

cloud_region_name

Name of the cloud with which the K8s cluster interoperates. This parameter is used to differentiate K8s clusters.

If only one K8s cluster exists, you can leave the parameter unconfigured. By default, the parameter is unconfigured.

caBundle

Certificate authorization data of the K8s cluster. You can obtain the value by executing the kubectl config view --raw --flatten -o json command on the master or retrieve the data from the /etc/kubernetes/admin.conf file on the master node.
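A minimal sketch for obtaining the caBundle value on a kubeadm-style cluster, using the command mentioned above. The grep filter assumes the default certificate-authority-data field name:

$ kubectl config view --raw --flatten -o json | grep certificate-authority-data
$ grep certificate-authority-data /etc/kubernetes/admin.conf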

 

IMPORTANT:

·     After configuring the etcd_certfile, etcd_keyfile, and etcd_cafile parameters, you must save the etcd client certificate file, client certificate key file, and certificate CA file to the specified path on the master node and copy them to the same path on all nodes. The path depends on the deployment tool and the tool version. For example, the etcd certificate files in a K8s environment set up by using kubeadm are saved in the /etc/kubernetes/pki/ path.

·     After configuring the k8s_ca, k8s_key, and k8s_cert parameters, you must save the K8s API server client certificate file, client certificate key file, and certificate CA file to the specified path on the master node and copy them to the same path on all nodes. The path depends on the deployment tool and the tool version. For example, the K8s API server certificate files in a K8s environment set up by using kubeadm are saved in the /etc/kubernetes/pki/ path.

 

Managing log files for the plug-in

The log files for the plug-in are saved in the /var/log/sndc-net-plugin directory on each node by default. You can delete the log files as needed.

 

CAUTION:

Do not delete the source and destination files for the soft links in the log directory. If you do so, you must restart the plug-in.

 

Removing the plug-in

IMPORTANT:

Before removing the plug-in, remove all Pods created by using the plug-in.

 

To remove the plug-in, execute the following commands:

$ kubectl delete -f SeerEngine_DC_NET_PLUGIN-E3606.yaml

$ kubectl delete -f SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml
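To confirm the removal, check that the plug-in Pods are no longer listed. This is a generic check based on the Pod names shown in the installation verification step; no output is expected after removal:

$ kubectl get pods -n kube-system | grep sndc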

Upgrading the plug-in

1.     Load the new plug-in Docker image. For the procedure, see "Loading the plug-in Docker image."

2.     Upload and modify the plug-in configuration file. For the procedure, see "Installing the plug-in."

3.     Upgrade the plug-in.

$ kubectl delete -f SeerEngine_DC_NET_PLUGIN-E3606.yaml

$ kubectl apply -f SeerEngine_DC_NET_PLUGIN-E6103.crd.yaml

$ kubectl create -f SeerEngine_DC_NET_PLUGIN-E6103.yaml

IMPORTANT:

If the original image name starts with vcfc, first uninstall the container plug-in on the master node, and then run the vcfc2sdnc.sh script on the master node.

 


Configuration example

Configuring Pod network parameters

This section describes the procedure for creating a Pod in Kubernetes and onboarding the Pod onto the SeerEngine-DC controller. As a best practice to ensure stable network access during service expansion, configure probe monitoring for Pods on Kubernetes.

Configuring Pod network parameters by using the default network

1.     Create a tenant, virtual link layer network, subnet, security policy, and network policy on the controller.

2.     Set the default_network_id parameter in the configuration file to specify the default network. For the configuration method, see "Installing the plug-in."

3.     Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  annotations:

spec:

  containers:

  - name: postgres

    image: postgres

4.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

5.     Verify the Pod online status on the controller vPort page.
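Besides the controller vPort page, you can check the address assigned to the Pod from the cluster side with a standard kubectl command (not specific to this plug-in):

$ kubectl get pod postgres -o wide    # shows the Pod IP and the node it runs on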

Configuring Pod network parameters by using labels

1.     Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model or on an OpenStack bare metal, create tenants, virtual link layer networks, and subnets on OpenStack.

2.     Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  labels:

    sdnc.io/network_id: 9e9af886-e038-4c94-8573-11b89079196d

    sdnc.io/tenant_id: 14ac7fc1-50d4-409a-ad76-4a0c35f429f3

    sdnc.io/qos_policy_id: 38b51db9-cc1d-4b07-872e-cf2644bfc057

    sdnc.io/security_group_id: 39b70d60-8bfd-4b27-bb4d-4b8f8955a2e6

spec:

  containers:

  - name: postgres

    image: postgres

Parameter description:

¡     sdnc.io/network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.

¡     sdnc.io/tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.

¡     sdnc.io/qos_policy_id—UUID of the SeerEngine-DC network policy. This parameter is optional. OpenStack VMs do not support this parameter.

¡     sdnc.io/security_group_id—UUID of the SeerEngine-DC security policy. This parameter is optional. OpenStack VMs do not support this parameter.

3.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

4.     Verify the Pod online status on the controller vPort page.

Configuring Pod network parameters by using annotations

1.     Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model or on an OpenStack bare metal, create tenants, virtual link layer networks, and subnets on OpenStack.

2.     Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  annotations:

    sdnc.io/network_id: 9e9af886-e038-4c94-8573-11b89079196d

    sdnc.io/tenant_id: 14ac7fc1-50d4-409a-ad76-4a0c35f429f3

    sdnc.io/qos_policy_id: 38b51db9-cc1d-4b07-872e-cf2644bfc057

    sdnc.io/security_group_id: 39b70d60-8bfd-4b27-bb4d-4b8f8955a2e6

spec:

  containers:

  - name: postgres

    image: postgres

Parameter description:

¡     sdnc.io/network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.

¡     sdnc.io/tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.

¡     sdnc.io/qos_policy_id—UUID of the SeerEngine-DC network policy. This parameter is optional. OpenStack VMs do not support this parameter.

¡     sdnc.io/security_group_id—UUID of the SeerEngine-DC security policy. This parameter is optional. OpenStack VMs do not support this parameter.

3.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

4.     Verify the Pod online status on the controller vPort page.

Configuring Pod network parameters by using NetworkConfiguration CRD

1.     Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model or on an OpenStack bare metal, create tenants, virtual link layer networks, and subnets on OpenStack.

2.     Create network configuration resources on the cluster.

apiVersion: "sdnc.io/v1"

kind: NetworkConfiguration

metadata:

  name: example

  namespace: default

spec:

  config: '{

    "network": {

        "network_id": "bbdf64ec-73c7-4038-b134-b792cacf43cf"

    },

    "tenant": {

        "tenant_id": "115d0dcc-f5a7-407f-b0d1-9da3431df26b"

    },

    "qos_policy": {

        "qos_policy_id": "bbdf64cc- f5c7-407f-b0d1-9da3431df26b"

    },

    "security_group": {

        "security_group_id": "132d0dec-737f-407f-b0d1-9da3431df26b"

    }

}'

Parameter description:

¡     network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.

¡     tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.

¡     qos_policy_id—UUID of the SeerEngine-DC network policy. This parameter is optional. OpenStack VMs do not support this parameter.

¡     security_group_id—UUID of the SeerEngine-DC security policy. This parameter is optional. OpenStack VMs do not support this parameter.

3.     Create and edit the Pod configuration file, for example, postgres-pod.yaml on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  annotations:

    sdnc.io/network_conf: example

spec:

  containers:

  - name: postgres

    image: postgres

4.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

5.     Verify the Pod online status on the controller vPort page.

Configuring Pod network parameters by using NetworkAttachmentDefinition CRD

1.     Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model or on an OpenStack bare metal, create tenants, virtual link layer networks, and subnets on OpenStack.

2.     Create NetworkAttachmentDefinition resources in the cluster.

apiVersion: "k8s.cni.cncf.io/v1"

kind: NetworkAttachmentDefinition

metadata:

  name: pod-network

spec:

  tenant_id: 665cdcaa9b3f4183824bf551c909429c

  network_id: cf2be36f-555d-49f7-bb85-a124b068b2bc

  subnets:

  - name: pod_sub1

    subnet_id: 3c6af174-5bcb-45b8-906b-24f23f92adde

    gateway_ip:

    cidr: 4.0.0.0/24

    ip_version: 4

    enable_dhcp: true

  - name: pod_sub2

    subnet_id: d09bc4c8-ba41-4344-9a38-062cd0934727

    gateway_ip:

    cidr: 212::/16

    ip_version: 6

    enable_dhcp: true

  static_ip: false

Parameter description:

¡     name—CRD resource name. The value can contain only uppercase and lowercase letters and hyphens (-).

¡     tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.

¡     network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.

¡     subnets—Subnet information, including the name, subnet ID, CIDR, IP version, and DHCP enabling status (the value must be true), which must be consistent with the settings on the cloud platform. Multiple subnets of the same IP version can be configured.

¡     static_ip—Whether to retain static IP addresses. When the value is true, the IP addresses remain unchanged for Pods in a StatefulSet and for Pods during migration. After the static IP retaining feature takes effect, you are not allowed to change the value to false.

3.     Create and edit the Pod configuration file, for example, postgres-pod.yaml on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  annotations:

    k8s.v1.cni.cncf.io/networks: pod-network/pod_sub1/pod_sub2

spec:

  containers:

  - name: postgres

    image: postgres

4.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

5.     Verify the Pod online status on the controller vPort page.

Configuring an IP address for a Pod

CAUTION:

·     Make sure the static IP address and IP address pool of a Pod do not conflict.

·     Make sure the static IP address and the IP address pool of a Pod do not conflict with the DHCP address pool of the controller's subnets.

 

After configuring Pod network parameters, you can specify a static IP address for the Pod or configure the Pod to obtain an IP address automatically from the IP address pool.

Specifying a static IP address for a Pod

1.     Create a tenant, virtual link layer network, subnet, security policy, and network policy on the controller.

2.     Create and edit the Pod configuration file, for example postgres-pod.yaml on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  annotations:

    sdnc.io/ipv4addr: 10.10.0.1

    sdnc.io/ipv6addr: 201::1

spec:

  containers:

  - name: postgres

    image: postgres

Parameter description:

¡     sdnc.io/ipv4addr—IPv4 address of the Pod. This parameter is optional.

¡     sdnc.io/ipv6addr—IPv6 address of the Pod. This parameter is optional.

3.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

4.     Verify the Pod online status on the controller vPort page.

Configuring an IP address pool

1.     Create a tenant, virtual link layer network, subnet, security policy, and network policy on the controller.

2.     Create an IPv4 or IPv6 address pool on the cluster.

¡     IPv4:

apiVersion: sdnc.io/v1

kind: IpV4Pool

metadata:

  name: v4-ippool

spec:

  network_id: 5a25bc62-c8b4-4645-b194-2fa83bf7d91d

  ip_ranges:

  - start: 10.10.1.3

    end: 10.10.1.10

  - start: 10.10.2.3

    end: 10.10.2.10

¡     IPv6:

apiVersion: sdnc.io/v1

kind: IpV6Pool

metadata:

  name: v6-ippool

spec:

  network_id: 5a25bc62-c8b4-4645-b194-2fa83bf7d91d

  ip_ranges:

  - start: 201::2:1

    end: 201::2:5

  - start: 201::3:1

    end: 201::4:1

Parameter description:

¡     network_id—UUID of the controller virtual link layer network. This parameter must be configured.

¡     ip_ranges—Address segments of the Pod IP address pool.

¡     start—Start IP address of an address segment.

¡     end—End IP address of an address segment.

3.     Create and edit the Pod configuration file, for example postgres-pod.yaml on the master.

$ vi postgres-pod.yaml

apiVersion: v1

kind: Pod

metadata:

  name: postgres

  annotations:

    sdnc.io/ipv4pool: v4-ippool

    sdnc.io/ipv6pool: v6-ippool

spec:

  containers:

  - name: postgres

    image: postgres

4.     Use the configuration file to create a Pod.

$ kubectl create -f postgres-pod.yaml

5.     View the Pod online status on the controller vPort page and verify that the IP address of the Pod is from the address pool configured for the Pod.
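You can also confirm that the address pool resources exist in the cluster. The resource names ipv4pools and ipv6pools come from the CRD rules in the plug-in configuration file; the following is a generic check that assumes the CRDs from the crd.yaml file are installed:

$ kubectl get ipv4pools
$ kubectl get ipv6pools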

Using NetworkPolicies

IMPORTANT:

A NetworkPolicy takes effect only on backend Pods in the container network, and is not effective for backend Pods in the host network.

 

A NetworkPolicy specifies how Pods communicate with various network entities over the network. A NetworkPolicy resource uses labels to select Pods and defines the communication rules for those Pods. The CNI plug-in supports using NetworkPolicies to control Pod traffic.

To create a NetworkPolicy resource in the cluster, configure the settings as follows:

apiVersion: networking.k8s.io/v1

kind: NetworkPolicy

metadata:

  name: demo-np

  namespace: default

spec:

  podSelector:

    matchLabels:

      hello: world

  ingress:

  - from:

    - ipBlock:

        cidr: 5.0.0.0/24

  egress:

  - to:

    - ipBlock:

        cidr: 5.0.0.0/24

    ports:

    - port: 8080

      protocol: TCP

  policyTypes:

  - Egress

  - Ingress

Parameter description:

¡     name—Name of the NetworkPolicy resource.

¡     namespace—Namespace to which the NetworkPolicy resource belongs.

¡     podSelector—Selects the Pods to which the policy applies.

¡     ingress—Specifies ingress rules.

¡     egress—Specifies egress rules.

¡     policyTypes—Type of the policy, indicating whether the policy applies to ingress traffic to selected Pods, egress traffic from selected Pods, or both.

After the NetworkPolicy is created, the traffic of Pods matching the policy will be controlled by the policy.
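A minimal sketch for applying and inspecting the example policy, assuming the manifest above is saved as demo-np.yaml (a hypothetical file name):

$ kubectl apply -f demo-np.yaml
$ kubectl describe networkpolicy demo-np -n default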
