In CloudOS, each compute node represents a collection of compute resources.
To use external compute resource pools for compute service provisioning, for example, to create cloud hosts, you must abstract each pool, or each cluster within a pool, into a compute node.
After you create a compute node, you can create compute AZs on the compute node for compute resource allocation. The following information describes only tasks for compute node management. For information about creating compute AZs, see "Create a compute AZ."
CloudOS supports the following types of compute nodes:
CAS—A CAS compute node represents a CAS host pool or a cluster in the CAS host pool in a CAS virtualization platform.
VMware—A VMware compute node represents a VMware ESXi cluster managed in the vCenter of a VMware virtualization platform.
Compute node management includes the following tasks:
Create a compute node
Search for compute nodes
Edit a compute node
Delete a compute node from CloudOS
Interoperation between a compute node and RBD requires using specific CAS and ONEStor versions. For more information, see Key Features and Software Platforms Compatibility Matrix.
On the top navigation bar, click Resources.
From the left navigation pane, select Virtualization.
At the upper right, click Compute Nodes.
Click Create.
Select a hypervisor type (virtualization platform type) and the network mode for the compute node, and enter the management IP address of the platform.
Table-1 Network service parameters for the compute node

Parameter | Description
Network Mode | Options are VLAN mode, VXLAN network overlay, and VXLAN host overlay. Select VLAN mode for a VLAN VPC network deployment. Select VXLAN network overlay or VXLAN host overlay for a flat VXLAN VPC network deployment. Select VXLAN network overlay for a hierarchical VXLAN VPC network deployment. Select any one of the modes as needed for a controllerless VLAN network deployment. After the system creates the compute node, you cannot change its network mode.
vSwitch Type | Type of the vSwitch that provides forwarding services for VMware cloud hosts. Options are standard vSwitch and distributed vSwitch. A standard vSwitch is deployed on one host and cannot provide services for cloud hosts on other hosts. Use standard vSwitches only if you have a limited number of ESXi hosts, and make sure the vSwitch settings are consistent across all ESXi hosts. A distributed vSwitch provides centralized management and monitoring of the networking configuration across its associated hosts, so you can move a cloud host between hosts to provide uninterrupted services without changing port group settings.
Click Verify.
Configure parameters for the compute node, and click Create in the network egress list area to configure network egresses.
Table-2 Compute node parameters for CAS cloud host

Parameter | Description
Host Pool | Select the host pool in the CVM node. A host pool is a group of hosts and clusters.
Cluster Name | Abstract a cluster into a compute node, or abstract the entire host pool into one compute node. To abstract a cluster, select that cluster. To abstract the entire host pool, select all. If you select all, you cannot create compute nodes for individual clusters in the host pool.
Cinder AZ | Assign a Cinder AZ name to the compute node.
Back-End Storage Type | Options are CAS and ONEStor. To use RBD, select and configure ONEStor. Do not change the storage pool name on the virtualization platform after you add the storage type for the compute node.
ONEStor Settings | Enter the management IP address of the ONEStor system and the username and password for accessing the system, and then click Verify. To access the ONEStor system successfully, its access password cannot contain special characters such as the question mark (?) and the pound sign (#). Enter the name of the target storage pool in the ONEStor system. Select a PV that has a minimum of 50 GB of storage space to provide shared storage for storing images. You must assign a unique label to the PV and configure the shared storage on the storage page for the container cluster.
CPU Overcommitment | To use CPU overcommitment, select Enable CPU Overcommitment, and then configure the CPU overcommitment rate in the range of 0.1 to 2.0.
Memory Overcommitment | To use memory overcommitment, select Enable Memory Overcommitment, and then configure the memory overcommitment rate in the range of 0.1 to 2.0.
System Disk Overcommitment | To use system disk overcommitment, select Enable System Disk Overcommitment, and then configure the system disk overcommitment rate in the range of 0.1 to 1.0.
Architecture | Architecture of the compute node. Options are x86 and ARM.
VM Initialization Mode | Options are standard mode and professional mode. In standard mode, you can deploy initial settings such as the host name, root user password, and network address when you provision a cloud host. In professional mode, the Cloud-Init tool configures the initial settings of a cloud host during its boot process, and you can deploy more settings than in standard mode, for example, host routes.
Driver Format | Options are VFAT and ISO. VFAT mounts a 64 MB disk to cloud hosts. ISO mounts a CD-ROM drive to cloud hosts. Select ISO if the cloud host initialization mode is professional.
Host Name | Assign a host name in cpn-name format to the compute node. When creating a Nova AZ, use this name to select the compute node. Make sure the host name of a compute node is unique among all compute nodes. If the host name of a new compute node is already used by an existing compute node, the new compute node overwrites the existing one.
Create Network Egress
Physical Network Name | Specify a name for the network egress. Compute nodes configured with egresses of the same name can communicate with each other. You can select an existing network egress, or manually specify a name to create a new egress. After configuring the compute node, configure a VLAN range for the network egress on the network planning page. For more information about configuring network egresses on the network planning page, see "Create a compute node network egress."
Egress Type | Type of the egress that provides external connectivity for the compute node. Options are vSwitch and passthrough NIC. If you select Passthrough NIC, make sure the SR-IOV NICs on all CVK hosts have the same name and configuration.
vSwitch Name | Name of the egress vSwitch. All cloud hosts that use the specified vSwitch as an egress will connect to the service network through it or the passthrough NIC.
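Because professional mode hands initialization to Cloud-Init, the initial settings take the form of standard cloud-config user data. The fragment below is only an illustration of the kinds of settings professional mode can deploy, such as host routes; the host name, password, and route values are placeholders, not CloudOS defaults:

```yaml
#cloud-config
hostname: demo-host01            # initial host name (placeholder)
chpasswd:
  list: |
    root:ChangeMe_123            # root password (placeholder)
  expire: false
# Host routes are an example of a setting available only in
# professional mode; addresses below are placeholders.
runcmd:
  - ip route add 10.10.0.0/16 via 192.168.1.254
```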
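The overcommitment rates in Table-2 multiply a node's physical capacity to derive the capacity the scheduler can allocate. The following sketch illustrates that arithmetic; the function name and capacity values are illustrative and not part of CloudOS:

```python
def allocatable(physical_capacity: float, overcommit_rate: float) -> float:
    """Capacity the scheduler may hand out: physical capacity x rate.

    A rate above 1.0 overcommits the resource; a rate below 1.0
    reserves headroom. CloudOS accepts 0.1 to 2.0 for CPU and memory.
    """
    if not 0.1 <= overcommit_rate <= 2.0:
        raise ValueError("rate must be within 0.1 to 2.0")
    return physical_capacity * overcommit_rate

# A 32-core host with a 2.0 CPU overcommitment rate can back 64 vCPUs.
print(allocatable(32, 2.0))   # 64.0
# A 256 GB host with a 0.8 memory rate exposes 204.8 GB to the scheduler.
print(allocatable(256, 0.8))  # 204.8
```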
Table-3 Compute node parameters for VMware

Parameter | Description
Cluster Name | Select a VMware ESXi cluster.
Cinder AZ | Assign a Cinder AZ name to the compute node.
Cloud Host Initialization Mode | Select a mode to deploy initial cloud host settings. Options are standard mode and professional mode.
Host Name | Assign a host name in cpn-name format to the compute node. When creating a Nova AZ, use this name to select the compute node. Make sure the host name of a compute node is unique among all compute nodes. If the host name of a new compute node is already used by an existing compute node, the new compute node overwrites the existing one.
Create Network Egress
Physical Network Name | Specify a name for the network egress. Compute nodes configured with egresses of the same name can communicate with each other. You can select an existing network egress, or manually specify a name to create a new egress. After configuring the compute node, configure a VLAN range for the network egress on the network planning page. For more information about configuring network egresses on the network planning page, see "Create a compute node network egress."
Egress Type | Type of the egress that provides external connectivity for the compute node. The egress type can only be vSwitch. Cloud hosts connect to the service network through a vSwitch.
vSwitch Name | Name of the egress vSwitch. All cloud hosts that use the specified vSwitch as an egress will connect to the service network through it.
To avoid cloud host creation failures, make sure the vSwitch or passthrough NIC is configured on all hosts in the system when you create the network egress. |
In the dialog box that opens, click OK.
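As Table-2 and Table-3 note, a compute node whose host name duplicates an existing one silently overwrites that node, so it can be worth checking uniqueness before you click Create. A minimal client-side sketch, assuming you have already collected the existing host names from the Compute Nodes page; the helper function and node names are illustrative and not a CloudOS API:

```python
def check_host_name(new_name: str, existing: set) -> str:
    """Return the name if it is safe to use; otherwise raise.

    CloudOS expects a cpn-name style host name and replaces any
    existing compute node that shares the same host name.
    """
    if not new_name.startswith("cpn-"):
        raise ValueError("%r is not in cpn-name format" % new_name)
    if new_name in existing:
        raise ValueError("%r would overwrite an existing compute node" % new_name)
    return new_name

# Names gathered manually from the Compute Nodes page (placeholders).
existing_nodes = {"cpn-cas01", "cpn-vmware01"}
print(check_host_name("cpn-cas02", existing_nodes))  # cpn-cas02
```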
Perform this task to find compute nodes of interest by username, hypervisor type, IP, host pool, host name, network egress name, Cinder AZ name, or cluster name.
On the top navigation bar, click Resources.
From the left navigation pane, select Virtualization.
At the upper right, click Compute Nodes.
Select a filtering criterion, and then enter a keyword string.
Perform this task to change the cloud host initialization mode or the network egress settings of a compute node.
On the top navigation bar, click Resources.
From the left navigation pane, select Virtualization.
At the upper right, click Compute Nodes.
Click Edit for a compute node.
Change the cloud host initialization mode or network egress settings, and then click OK.
Delete a compute node from CloudOS if you no longer need it to provide compute services.
To delete a CAS compute node successfully, make sure it is not in an abnormal state.
Before you can delete a compute node, you must perform the following tasks:
Delete all cloud services running on the compute node.
Remove the AZs that contain the compute node from organization quota.
Delete the AZs.
On the top navigation bar, click Resources.
From the left navigation pane, select Virtualization.
At the upper right, click Compute Nodes.
Click Delete for a compute node.