Contents
Configuring the Kubernetes nodes
Configuring a node in host-based overlay model
Configuring a node in network-based overlay model
Installing the SDN Kubernetes plug-in
Loading the plug-in Docker image
Configuring Pod network parameters
Configuring Pod network parameters by using the default network
Configuring Pod network parameters by using labels
Configuring Pod network parameters by using annotations
Configuring Pod network parameters by using CRD
Configuring an IP address for a Pod
Specifying a static IP address for a Pod
Configuring an IP address pool
Overview
Kubernetes is an open-source container orchestration platform for automated deployment, scaling, and management of containerized applications.
Pods are the smallest deployable units of computing in Kubernetes. A Pod is a group of one or more tightly coupled containers that share network resources and a file system, together with a specification for how to run the containers.
Installing the SDN Kubernetes plug-in allows Pods in the Kubernetes cluster to come online on the SeerEngine-DC controller, so that the controller can monitor traffic, deploy security policies, and provide networking services for the Pods.
Preparing for installation
Hardware requirements
Table 1 shows the minimum hardware requirements for installing the SDN Kubernetes plug-in on a physical server or virtual machine.
Table 1 Minimum hardware requirements
CPU | Memory size | Disk size |
---|---|---|
Quad-core | 8 GB | 50 GB |
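If you want to check a host against these minimums from the command line, the following sketch can help. The meets_min helper and the thresholds simply mirror Table 1 and are not part of the plug-in; on a real host the actual values would come from standard Linux tools such as nproc, free, and df.

```shell
# Sketch: compare host resources against the Table 1 minimums.
# meets_min prints OK or FAIL for one resource.
meets_min() {
    # usage: meets_min <actual> <required> <label>
    if [ "$1" -ge "$2" ]; then
        echo "OK: $3 ($1 >= $2)"
    else
        echo "FAIL: $3 ($1 < $2)"
    fi
}

# On the target host the actual values would come from, for example:
#   cpus=$(nproc)
#   mem_gb=$(free -g | awk '/^Mem:/{print $2}')
#   disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc 0-9)
meets_min 4 4 "CPU cores"
meets_min 8 8 "Memory (GB)"
meets_min 50 50 "Disk (GB)"
```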
Software requirements
Table 2 shows the software requirements for installing the SDN Kubernetes plug-in.
Item | Supported versions |
---|---|
Kubernetes | Kubernetes 1.9.x to 1.21.x. |
vSwitch | · Host-based overlay—For the vSwitch version information, see the release notes for the SeerEngine-DC controller. · Network-based overlay—Open vSwitch 2.9 and later. For the compatibility between the Open vSwitch and operating system kernel versions, see Table 3. |
Table 3 Compatibility between the Open vSwitch and operating system kernel versions
Open vSwitch version | Linux kernel version |
---|---|
2.9.x | 3.10 to 4.13 |
2.10.x | 3.16 to 4.17 |
2.11.x | 3.16 to 4.18 |
2.12.x | 3.16 to 5.0 |
2.13.x | 3.16 to 5.0 |
2.14.x | 3.16 to 5.5 |
2.15.x | 3.16 to 5.8 |
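When scripting node preparation, the compatibility matrix above can be encoded as a simple lookup. The following sketch only mirrors Table 3; the ovs_kernel_range helper is illustrative, not part of the plug-in, and does not query the running system (on a node you could compare its output against uname -r before installing).

```shell
# Print the supported Linux kernel range ("min max") for an Open vSwitch
# version, mirroring Table 3. Prints "unknown" for unlisted versions.
ovs_kernel_range() {
    case "$1" in
        2.9.*)          echo "3.10 4.13" ;;
        2.10.*)         echo "3.16 4.17" ;;
        2.11.*)         echo "3.16 4.18" ;;
        2.12.*|2.13.*)  echo "3.16 5.0"  ;;
        2.14.*)         echo "3.16 5.5"  ;;
        2.15.*)         echo "3.16 5.8"  ;;
        *)              echo "unknown"   ;;
    esac
}

ovs_kernel_range 2.13.1   # prints "3.16 5.0"
```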
Configuring the Kubernetes nodes
You must configure basic settings for Kubernetes nodes before installing the SDN Kubernetes plug-in.
Configuring a node in host-based overlay model
1. Install the S1020V vSwitch. For the installation procedure, see the installation guide for the S1020V vSwitch.
2. Configure a VDS on the SeerEngine-DC controller and add the VDS configuration to the node.
The following configuration example uses vds1-br as the vSwitch name, eth1 as the uplink interface, vxlan_vds1-br as the VXLAN tunnel name, and 100.0.100.100 as the VTEP IP address. After the configuration, make sure the VTEP IP addresses of the nodes are reachable to each other.
$ ovs-vsctl add-br vds1-br
$ ovs-vsctl add-port vds1-br eth1
$ ovs-vsctl add-port vds1-br vxlan_vds1-br -- set interface vxlan_vds1-br type=vxlan options:remote_ip=flow options:local_ip=100.0.100.100 options:key=flow
$ ip link set vds1-br up
$ ip addr add 100.0.100.100/16 dev vds1-br
IMPORTANT: To avoid VDS bridge IP address loss after a node restart, persist the VDS IP address configuration as follows: 1. As the root user, use the vi editor to open the /etc/profile file, press I to enter insert mode, and add the IP address configuration commands to the end of the file. 2. Press Esc to exit insert mode, enter :wq, and press Enter to save the file and exit the vi editor.
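The note above does not list the lines themselves. Assuming the intent is to re-apply the bridge address from step 2 after a restart, a plausible addition to /etc/profile would be the following; the bridge name and address are the examples from step 2, so adjust them to your setup.

```shell
# Hypothetical /etc/profile addition: re-apply the VDS bridge address
# configured in step 2 after a node restart (names and addresses are
# examples; adjust to your environment).
ip link set vds1-br up
ip addr add 100.0.100.100/16 dev vds1-br 2>/dev/null || true
```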
3. Configure a KVM-type compute domain on the controller and associate the domain with the VDS.
4. Add the node to the hosts in the compute domain.
Configuring a node in network-based overlay model
1. Install a version of Open vSwitch compatible with the kernel version of the operating system. See Table 3 for the compatibility between the Open vSwitch and operating system kernel versions.
$ yum install -y openvswitch
$ systemctl enable openvswitch.service
$ systemctl start openvswitch.service
2. Install lldpad and start the service.
$ yum install -y lldpad
$ systemctl enable lldpad.service
$ systemctl start lldpad.service
3. Add an Open vSwitch bridge (br-eno2 for example) on the node, specify the OpenFlow version, and set the fail mode to secure.
$ ovs-vsctl add-br br-eno2
$ ovs-vsctl set bridge br-eno2 protocols=OpenFlow13
$ ovs-vsctl set-fail-mode br-eno2 secure
4. Add an uplink interface (eno2 for example) to the Open vSwitch bridge and configure the interface settings.
$ ovs-vsctl add-port br-eno2 eno2
$ ovs-vsctl br-set-external-id br-eno2 uplinkInterface eno2
5. (Optional.) To deploy K8s on a bare metal server, add the following settings to the OVS bridge:
- vPort UUID of the bare metal server, for example, 1e10786f-f894-533f-838c-23c2766ed1d1.
- UUID of the virtual link layer network where the vPort resides, for example, 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6.
- Management network gateway of the K8s cluster, for example, 10.10.0.254.
$ ovs-vsctl br-set-external-id br-eno2 uplinkNetworkId 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6
$ ovs-vsctl br-set-external-id br-eno2 uplinkPortId 1e10786f-f894-533f-838c-23c2766ed1d1
$ ovs-vsctl br-set-external-id br-eno2 managementNetworkGw 10.10.0.254
6. (Optional.) To deploy K8s on a VM created in trunk port mode on OpenStack, perform the following tasks:
a. Add the following settings to the OVS bridge:
- UUID of the trunk port on OpenStack, for example, 1e10786f-f894-533f-838c-23c2766ed1d1.
- UUID of the virtual link layer network where the trunk port resides, for example, 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6.
- Management network gateway of the K8s cluster, for example, 10.10.0.254.
- Connected cloud scenario. The value can only be OpenStack.
- Virtualization type. The value can only be KVM.
- Access type. The value can only be netoverlay.
b. Configure lldpad on the host where the VM resides. For the configuration procedure, see steps 2 and 3.
$ ovs-vsctl br-set-external-id br-eno2 uplinkNetworkId 3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6
$ ovs-vsctl br-set-external-id br-eno2 uplinkPortId 1e10786f-f894-533f-838c-23c2766ed1d1
$ ovs-vsctl br-set-external-id br-eno2 managementNetworkGw 10.10.0.254
$ ovs-vsctl br-set-external-id br-eno2 cloud openstack
$ ovs-vsctl br-set-external-id br-eno2 virtType kvm
$ ovs-vsctl br-set-external-id br-eno2 accessType netoverlay
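Open vSwitch stores external-ids as opaque strings and does not validate them, so a mistyped UUID only surfaces later when vPorts fail to come online. A pre-check along the following lines may help in provisioning scripts; the is_uuid helper is illustrative and not part of the plug-in.

```shell
# Return success if $1 looks like a standard 8-4-4-4-12 hex UUID.
is_uuid() {
    printf '%s\n' "$1" | grep -Eiq \
        '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}

uplink_net_id="3c07b72c-4ee8-4b2a-aff2-cacb3d84c8f6"
if is_uuid "$uplink_net_id"; then
    echo "valid UUID: $uplink_net_id"
    # Safe to pass on, for example:
    # ovs-vsctl br-set-external-id br-eno2 uplinkNetworkId "$uplink_net_id"
else
    echo "invalid UUID: $uplink_net_id" >&2
fi
```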
Installing the SDN Kubernetes plug-in
Loading the plug-in Docker image
Follow these steps to load the plug-in Docker image on the master and on each node:
1. Obtain the SDN Kubernetes plug-in Docker image package and save it to the installation directory on the server or virtual machine. The package name is in the SeerEngine_DC_NET_PLUGIN-version.tar.gz format, where version represents the version number.
CAUTION: Alternatively, you can upload the package to the installation directory through FTP, TFTP, or SFTP. To avoid damaging the package, use the binary mode when you upload the package through FTP or TFTP.
IMPORTANT: Select the Docker image package specific to the server architecture.
· x86_64 server—SeerEngine_DC_NET_PLUGIN-version.tar.gz.
· ARM server—SeerEngine_DC_NET_PLUGIN-version-ARM64.tar.gz.
2. Decompress the package.
$ tar -xzvf SeerEngine_DC_NET_PLUGIN-E3606.tar.gz
SeerEngine_DC_NET_PLUGIN-E3606.tar
SeerEngine_DC_NET_PLUGIN-E3606.yaml
SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml
webhook-create-signed-cert.sh
3. Load the Docker image.
$ docker load -i SeerEngine_DC_NET_PLUGIN-E3606.tar
Installing the plug-in
After loading the Docker image on the master and nodes, install the plug-in on the master.
To install the plug-in on the master:
1. Upload and execute the preprocessing script:
a. Obtain the preprocessing script webhook-create-signed-cert.sh and upload it to the installation directory on the master.
b. Execute the script.
$ sh webhook-create-signed-cert.sh
2. Obtain the configuration files of the SDN Kubernetes plug-in SeerEngine_DC_NET_PLUGIN-version.yaml and SeerEngine_DC_NET_PLUGIN-version.crd.yaml. version in the file names represents the version number.
3. Save the files to the installation directory on the master.
CAUTION: Alternatively, you can upload the files to the installation directory through FTP, TFTP, or SFTP. To avoid damaging the files, use the binary mode when you upload the files through FTP or TFTP.
4. Modify the configuration file.
a. Use the vi editor to open the configuration file.
$ vi SeerEngine_DC_NET_PLUGIN-E3606.yaml
b. Press I to switch to insert mode, and then modify the configuration file. For information about the parameters, see Table 4.
kind: ConfigMap
apiVersion: v1
metadata:
name: sdnc-net-plugin
namespace: kube-system
data:
etcd_servers: "http://192.168.0.10:2379"
etcd_certfile: "/etc/sdnc-net-plugin/etcd.crt"
etcd_keyfile: "/etc/sdnc-net-plugin/etcd.key"
etcd_cafile: "/etc/sdnc-net-plugin/etcd-ca.crt"
k8s_api_server: "https://192.168.0.20:6443"
k8s_ca: "/etc/sdnc-net-plugin/ca.crt"
k8s_key: "/etc/sdnc-net-plugin/client.key"
k8s_cert: "/etc/sdnc-net-plugin/client.crt"
k8s_token: ""
---
kind: ConfigMap
apiVersion: v1
metadata:
name: sdnc-net-master
namespace: kube-system
data:
sdnc_url: http://192.168.0.32:10080
sdnc_username: "admin"
sdnc_password: "admin@123"
sdnc_domain: "sdn"
sdnc_client_timeout: "60"
sdnc_client_retry: "3"
openstack_url: "http://99.0.88.40:5000/v3"
openstack_username: "admin"
openstack_password: "123456"
openstack_projectname: "admin"
openstack_projectdomain: "Default"
netoverlay_vlan_ranges: "node01:1:100,node02:101:200"
log_dir: "/var/log/sdnc-net-plugin/"
log_level: "1"
bind_host: "0.0.0.0"
bind_port: "9797"
protocol: "http"
webhook_bind_port: "9898"
default_network_id: ""
---
kind: ConfigMap
apiVersion: v1
metadata:
name: sdnc-net-agent
namespace: kube-system
data:
net_masters: "auto"
net_master_app_name: "sdnc-net-master"
net_master_app_namespace: "kube-system"
net_master_protocol: "http"
net_master_port: "9797"
overlay_mode: "auto"
log_dir: "/var/log/sdnc-net-plugin/"
log_level: "1"
default_security_policy: "permit"
host_networks: "192.168.10.0/24,192.168.2.0/24"
host_to_container_network: "172.70.0.0/16"
container_to_host_network: "172.60.0.0/16"
node_port_net_id: ""
default_mtu: "0"
service_strategy: "0"
service_ip_cidr: "10.68.0.0/16"
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: validation-webhook-cfg
labels:
app: admission-webhook-ippool
webhooks:
- name: validate.h3c.io
failurePolicy: Fail
clientConfig:
service:
name: sdnc-net-master-webhook
namespace: kube-system
path: "/v1.0/validate"
caBundle: ""
rules:
- operations: [ "CREATE", "UPDATE", "DELETE" ]
apiGroups: ["sdnc.io"]
apiVersions: ["v1"]
resources: ["ipv4pools","ipv6pools"]
5. Install the plug-in.
IMPORTANT: Before installing the plug-in, modify the apiVersion parameter for the resources in the configuration files according to the K8s cluster version.
$ kubectl create -f SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml
$ kubectl create -f SeerEngine_DC_NET_PLUGIN-E3606.yaml
6. Verify the installation. If the Pods are in Running state, the plug-in is installed correctly.
$ kubectl get pods -n kube-system | grep sdnc
sdnc-net-agent-mtwkl 1/1 Running 0 5d7h
sdnc-net-agent-rt2s6 1/1 Running 0 5d7h
sdnc-net-master-79bc68885c-2s9jm 1/1 Running 0 5d7h
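The check in step 6 can also be scripted. The following sketch counts sdnc Pods that are not in Running state from kubectl output; it is fed here from a captured sample, since the kubectl command itself needs cluster access.

```shell
# Count sdnc pods that are not in Running state from `kubectl get pods` output.
not_running() {
    awk '/sdnc/ && $3 != "Running" { n++ } END { print n+0 }'
}

sample='sdnc-net-agent-mtwkl              1/1  Running  0  5d7h
sdnc-net-agent-rt2s6              1/1  Running  0  5d7h
sdnc-net-master-79bc68885c-2s9jm  1/1  Running  0  5d7h'

printf '%s\n' "$sample" | not_running   # prints 0 when all pods are Running
```

On a live cluster you would pipe `kubectl get pods -n kube-system` into not_running instead of the sample text.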
The following table describes parameters in the configuration file of the SDN Kubernetes plug-in.
Table 4 Parameters in the configuration file of the SDN Kubernetes plug-in
Parameter | Description |
---|---|
etcd_servers | etcd service API address. The address can be an HTTP or HTTPS address. To use an HTTPS address (for example, https://192.168.0.10:2379), you must configure the etcd_certfile, etcd_keyfile, and etcd_cafile parameters. |
etcd_certfile | etcd client x509 certificate file. The value is the certificate file path, for example, /etc/sdnc-net-plugin/etcd.crt. If the path does not exist, create it and set the permission to 755. This parameter is valid only when the etcd_servers value is an HTTPS address. |
etcd_keyfile | Private key file for the etcd client x509 certificate. The value is the private key file path, for example, /etc/sdnc-net-plugin/etcd.key. If the path does not exist, create it and set the permission to 755. This parameter is valid only when the etcd_servers value is an HTTPS address. |
etcd_cafile | etcd client CA file. The value is the CA file path. If the path does not exist, create it and set the permission to 755. This parameter is valid only when the etcd_servers value is an HTTPS address. |
k8s_api_server | K8s API server interface address. |
k8s_ca | Client CA file of the K8s API server. The value is the CA file path, for example, /etc/sdnc-net-plugin/ca.crt. If the path does not exist, create it and set the permission to 755. This parameter is valid only when the k8s_api_server value is an HTTPS address. |
k8s_key | Client X.509 certificate private key file of the K8s API server. The value is the private key file path, for example, /etc/sdnc-net-plugin/client.key. If the path does not exist, create it and set the permission to 755. This parameter is valid only when the k8s_api_server value is an HTTPS address. It is used together with k8s_cert. This parameter is not required when k8s_token authentication is used. |
k8s_cert | Client X.509 certificate file of the K8s API server. The value is the certificate file path, for example, /etc/sdnc-net-plugin/client.crt. If the path does not exist, create it and set the permission to 755. This parameter is valid only when the k8s_api_server value is an HTTPS address. It is used together with k8s_key. This parameter is not required when k8s_token authentication is used. |
k8s_token | Client authentication token of the K8s API server. This parameter is valid only when the k8s_api_server value is an HTTPS address. This parameter is not required when k8s_key and k8s_cert authentication is used. |
sdnc_url | URL address for logging in to SNA Center. |
sdnc_username | Username for logging in to SNA Center. |
sdnc_password | Password for logging in to SNA Center. |
sdnc_domain | Name of the domain where the SeerEngine-DC controller resides. |
sdnc_client_timeout | Amount of time to wait for a response from the SeerEngine-DC controller, in seconds. |
sdnc_client_retry | Maximum number of attempts to connect to the SeerEngine-DC controller. |
openstack_url | OpenStack Keystone authentication address. |
openstack_username | Username for accessing OpenStack. |
openstack_password | Password for accessing OpenStack. |
openstack_projectname | OpenStack project name. |
openstack_projectdomain | Name of the domain where the OpenStack project resides. |
netoverlay_vlan_ranges | VLAN range for each node in network-based overlay mode, in the format of node_name:VLAN_min:VLAN_max. To specify more than one VLAN range, separate the ranges with commas. |
log_dir | Log directory. |
log_level | Log level. |
bind_host | API-bound address. |
bind_port | API-bound port number. As a best practice, set the same value as the net_master_port parameter. |
protocol | API protocol. Only HTTP is supported. |
webhook_bind_port | Webhook service port number. As a best practice, set the same value as the webhook-port parameter. |
default_network_id | UUID of the default virtual link layer network where the containers come online. If no virtual link layer network is configured for the containers, the containers come online on the default network. |
net_masters | IP address of the SDNC network master. The value auto means automatically obtaining the IP address. |
net_master_app_name | Application name of the SDNC network master. |
net_master_app_namespace | Application namespace of the SDNC network master. |
net_master_protocol | API protocol of the SDNC network master. |
net_master_port | API port number of the SDNC network master. |
overlay_mode | Overlay mode of the node: · net—Network-based overlay. · host—Host-based overlay. · auto—Automatic selection of the overlay mode based on the Open vSwitch configuration. |
default_security_policy | Default security policy, permit or deny. This parameter takes effect only in network-based overlay mode. |
node_port_net_id | Virtual link layer network UUID for the NodePort function. After the UUID is specified, a vPort automatically comes online for each node to provide NodePort services. You do not need to configure this parameter if NodePort is not used. |
default_mtu | Default MTU of a container NIC. The value is 0 by default, indicating that the default MTU is 1500. |
service_strategy | Load balancing policy for the ClusterIP service: · 0 (default)—IP address-based load balancing. · 1—IP address- and port number-based load balancing. |
service_ip_cidr | ClusterIP service IP address segment of the K8s cluster. This parameter is used in the bare metal scenario. |
caBundle | Certificate authorization data of the K8s cluster. You can obtain the value by executing the kubectl config view --raw --flatten -o json command on the master. |
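The netoverlay_vlan_ranges format (node_name:VLAN_min:VLAN_max, comma-separated) is easy to get wrong. If you want to sanity-check a value before applying the configuration, a sketch like the following works; the check_vlan_ranges helper is illustrative and assumes valid VLAN IDs run from 1 to 4094.

```shell
# Validate a netoverlay_vlan_ranges value such as "node01:1:100,node02:101:200".
# Prints one line per range; returns non-zero on the first malformed entry.
check_vlan_ranges() {
    oldIFS=$IFS; IFS=','
    for entry in $1; do
        IFS=$oldIFS
        node=${entry%%:*}          # text before the first colon
        rest=${entry#*:}
        min=${rest%%:*}
        max=${rest#*:}
        case "$min$max" in *[!0-9]*) echo "bad range: $entry"; return 1 ;; esac
        if [ "$min" -ge 1 ] && [ "$max" -le 4094 ] && [ "$min" -le "$max" ]; then
            echo "$node: VLANs $min-$max"
        else
            echo "bad range: $entry"; return 1
        fi
    done
    IFS=$oldIFS
}

check_vlan_ranges "node01:1:100,node02:101:200"
```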
IMPORTANT:
· After configuring the etcd_certfile, etcd_keyfile, and etcd_cafile parameters, you must save the etcd client certificate file, client certificate key file, and certificate CA file to the specified path on the master node and copy the path to all nodes. The path depends on the deployment tool and the tool version. For example, the etcd certificate file in the K8s environment set up by using Kubeadm is saved in the /etc/kubernetes/pki/ path.
· After configuring the k8s_ca, k8s_key, and k8s_cert parameters, you must save the K8s API server client certificate file, client certificate key file, and certificate CA file to the specified path on the master node and copy the path to all nodes. The path depends on the deployment tool and the tool version. For example, the K8s API server certificate file in the K8s environment set up by using Kubeadm is saved in the /etc/kubernetes/pki/ path.
Removing the plug-in
IMPORTANT: Before removing the plug-in, remove all Pods created by using the plug-in.
To remove the plug-in, execute the following commands:
$ kubectl delete -f SeerEngine_DC_NET_PLUGIN-E3606.yaml
$ kubectl delete -f SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml
Upgrading the plug-in
1. Load the plug-in Docker image. For the procedure, see "Loading the plug-in Docker image."
2. Upload and modify the plug-in configuration file. For the procedure, see "Installing the plug-in."
3. Upgrade the plug-in.
$ kubectl delete -f SeerEngine_DC_NET_PLUGIN-E3606.yaml
$ kubectl apply -f SeerEngine_DC_NET_PLUGIN-E3606.crd.yaml
$ kubectl create -f SeerEngine_DC_NET_PLUGIN-E3606.yaml
IMPORTANT: If the name of the original plug-in Docker image starts with "VCFC", you must first remove the plug-in from the master and then run the vcfc2sdnc.sh script on the master.
Configuring Pod network parameters
Configuring Pod network parameters by using the default network
1. Create a tenant, virtual link layer network, subnet, security policy, and network policy on the controller.
2. Set the default_network_id parameter in the configuration file to specify the default network. For the configuration method, see "Installing the plug-in."
3. Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master.
$ vi postgres-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: postgres
annotations:
spec:
containers:
- name: postgres
image: postgres
4. Use the configuration file to create a Pod.
$ kubectl create -f postgres-pod.yaml
5. Verify the Pod online status on the controller vPort page.
Configuring Pod network parameters by using labels
1. Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model, create tenants, virtual link layer networks, and subnets on OpenStack.
2. Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master. The original labels h3c.io/network_id, h3c.io/tenant_id, h3c.io/qos_policy_id, and h3c.io/security_group_id remain effective.
$ vi postgres-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
sdnc.io/network_id: 9e9af886-e038-4c94-8573-11b89079196d
sdnc.io/tenant_id: 14ac7fc1-50d4-409a-ad76-4a0c35f429f3
sdnc.io/qos_policy_id: 38b51db9-cc1d-4b07-872e-cf2644bfc057
sdnc.io/security_group_id: 39b70d60-8bfd-4b27-bb4d-4b8f8955a2e6
spec:
containers:
- name: postgres
image: postgres
Parameter description:
- sdnc.io/network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.
- sdnc.io/tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.
- sdnc.io/qos_policy_id—UUID of the SeerEngine-DC network policy. This parameter is optional. OpenStack VMs do not support this parameter.
- sdnc.io/security_group_id—UUID of the SeerEngine-DC security policy. This parameter is optional. OpenStack VMs do not support this parameter.
3. Use the configuration file to create a Pod.
$ kubectl create -f postgres-pod.yaml
4. Verify the Pod online status on the controller vPort page.
Configuring Pod network parameters by using annotations
1. Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model, create tenants, virtual link layer networks, and subnets on OpenStack.
2. Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master. The original annotations h3c.io/network_id, h3c.io/tenant_id, h3c.io/qos_policy_id, and h3c.io/security_group_id remain effective.
$ vi postgres-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: postgres
annotations:
sdnc.io/network_id: 9e9af886-e038-4c94-8573-11b89079196d
sdnc.io/tenant_id: 14ac7fc1-50d4-409a-ad76-4a0c35f429f3
sdnc.io/qos_policy_id: 38b51db9-cc1d-4b07-872e-cf2644bfc057
sdnc.io/security_group_id: 39b70d60-8bfd-4b27-bb4d-4b8f8955a2e6
spec:
containers:
- name: postgres
image: postgres
Parameter description:
- sdnc.io/network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.
- sdnc.io/tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.
- sdnc.io/qos_policy_id—UUID of the SeerEngine-DC network policy. This parameter is optional. OpenStack VMs do not support this parameter.
- sdnc.io/security_group_id—UUID of the SeerEngine-DC security policy. This parameter is optional. OpenStack VMs do not support this parameter.
3. Use the configuration file to create a Pod.
$ kubectl create -f postgres-pod.yaml
4. Verify the Pod online status on the controller vPort page.
Configuring Pod network parameters by using CRD
1. Create tenants, virtual link layer networks, subnets, security policies, and network policies on the controller. To deploy a K8s cluster on a VM in network-based overlay model, create tenants, virtual link layer networks, and subnets on OpenStack.
2. Create network configuration resources on the cluster.
apiVersion: "sdnc.io/v1"
kind: NetworkConfiguration
metadata:
name: okok
namespace: default
spec:
config: '{
"network": {
"network_id": "bbdf64ec-73c7-4038-b134-b792cacf43cf"
},
"tenant": {
"tenant_id": "115d0dcc-f5a7-407f-b0d1-9da3431df26b"
},
"qos_policy": {
"qos_policy_id": "bbdf64cc- f5c7-407f-b0d1-9da3431df26b"
},
"security_group": {
"security_group_id": "132d0dec-737f-407f-b0d1-9da3431df26b"
}
}'
Parameter description:
- network_id—UUID of the SeerEngine-DC or OpenStack virtual network. This parameter must be configured.
- tenant_id—UUID of the SeerEngine-DC or OpenStack tenant. This parameter must be configured.
- qos_policy_id—UUID of the SeerEngine-DC network policy. This parameter is optional. OpenStack VMs do not support this parameter.
- security_group_id—UUID of the SeerEngine-DC security policy. This parameter is optional. OpenStack VMs do not support this parameter.
3. Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master.
$ vi postgres-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: postgres
annotations:
sdnc.io/network_conf: okok
spec:
containers:
- name: postgres
image: postgres
4. Use the configuration file to create a Pod.
$ kubectl create -f postgres-pod.yaml
5. Verify the Pod online status on the controller vPort page.
Configuring an IP address for a Pod
CAUTION:
· Make sure the static IP address and IP address pool of a Pod do not conflict.
· Make sure the static IP address and the IP address pool of a Pod do not conflict with the DHCP address pool of the controller's subnets.
After configuring Pod network parameters, you can specify a static IP address for the Pod or configure the Pod to obtain an IP address automatically from the IP address pool.
Specifying a static IP address for a Pod
1. Create a tenant, virtual link layer network, subnet, security policy, and network policy on the controller.
2. Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master. The original annotations h3c.io/ipv4addr and h3c.io/ipv6addr remain effective.
$ vi postgres-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: postgres
annotations:
sdnc.io/ipv4addr: 10.10.0.1
sdnc.io/ipv6addr: 201::1
spec:
containers:
- name: postgres
image: postgres
Parameter description:
- sdnc.io/ipv4addr—IPv4 address of the Pod. This parameter is optional.
- sdnc.io/ipv6addr—IPv6 address of the Pod. This parameter is optional.
3. Use the configuration file to create a Pod.
$ kubectl create -f postgres-pod.yaml
4. Verify the Pod online status on the controller vPort page.
Configuring an IP address pool
1. Create a tenant, virtual link layer network, subnet, security policy, and network policy on the controller.
2. Create an IPv4 or IPv6 address pool on the cluster.
- IPv4:
apiVersion: sdnc.io/v1
kind: IpV4Pool
metadata:
name: v4-ippool
spec:
network_id: 5a25bc62-c8b4-4645-b194-2fa83bf7d91d
ip_ranges:
- start: 10.10.1.3
end: 10.10.1.10
- start: 10.10.2.3
end: 10.10.2.10
- IPv6:
apiVersion: sdnc.io/v1
kind: IpV6Pool
metadata:
name: v6-ippool
spec:
network_id: 5a25bc62-c8b4-4645-b194-2fa83bf7d91d
ip_ranges:
- start: 201::2:1
end: 201::2:5
- start: 201::3:1
end: 201::4:1
Parameter description:
- network_id—UUID of the controller virtual link layer network. This parameter must be configured.
- ip_ranges—Address segment of the Pod IP address pool.
- start—Start IP address of the address segment.
- end—End IP address of the address segment.
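Each ip_ranges entry must have a start address no greater than its end address. If you want to sanity-check IPv4 ranges before applying the pool, a sketch like the following works; the helper names are illustrative, and IPv6 ranges would need a different comparison.

```shell
# Convert a dotted-quad IPv4 address to an integer for comparison.
ip4_to_int() {
    oldIFS=$IFS; IFS=.
    set -- $1
    IFS=$oldIFS
    echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

# Succeeds when start <= end, as an ip_ranges entry requires.
range_ok() {
    [ "$(ip4_to_int "$1")" -le "$(ip4_to_int "$2")" ]
}

range_ok 10.10.1.3 10.10.1.10 && echo "range ok"
```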
3. Create and edit the Pod configuration file, for example, postgres-pod.yaml, on the master.
$ vi postgres-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: postgres
annotations:
sdnc.io/ipv4pool: v4-ippool
sdnc.io/ipv6pool: v6-ippool
spec:
containers:
- name: postgres
image: postgres
4. Use the configuration file to create a Pod.
$ kubectl create -f postgres-pod.yaml
5. View the Pod online status on the controller vPort page and verify that the IP address of the Pod is from the address pool configured for the Pod.