You can deploy a bare metal compute node on a physical server or VM.
The system supports only one bare metal compute node.
This section contains the following topics:
Log in to CAS and create a VM to act as a bare metal compute node that provides bare metal computing capabilities.
· For more information about the configuration on CAS, see CAS-related documentation.
· For more information about VM specification requirements, see H3C CloudOS 5.0 Deployment Guide.
Perform the following steps to load a PLAT image to install a compute node after the VM is created successfully:
Select software package type H3C CloudOS Node.
As a best practice, configure partitions manually when you install the operating system for a bare metal compute node. Delete the docker and mysql partitions and add their capacity to the / partition.
Figure-1 Clicking INSTALLATION DESTINATION
Figure-2 Selecting I will configure partitioning
Figure-3 Deleting docker and mysql partitions
Figure-4 Adding the capacity of the deleted partitions to the / partition
Figure-5 Configuring the / partition
Figure-6 Manual partitioning is completed
Specify a host name and IP address for the compute node.
Specify the master node as the NTP server.
Figure-1 Installing a bare metal compute node
On the top navigation bar, click System.
From the left navigation pane, select System Settings > Security Settings > Database Whitelists.
Click Add.
In the dialog box that opens, select IP, and then enter the IP address of the bare metal compute node specified in "Install a system compute node."
Click OK.
· Only English punctuation marks are supported.
· The bare metal compute node version must match the controller node version.
· All executed scripts are deployed to the corresponding configuration files. To modify a script, edit it and then re-install it. If you use a VM on CAS as a bare metal compute node, create a snapshot of the VM before installing a script. If any error occurs during script installation, restore the VM from the snapshot and then re-install the script.
· The VXLAN ID range is 1 to 4094 for a bare metal network in flat mode. As a best practice, use the bare metal service in a non-hierarchical scenario.
Log in to the controller node through SSH.
Transfer the openstack-compute-standalone.tar.gz file from the /opt/openstack directory on the controller node to the root directory of the compute node. In this example, 172.25.50.150 is the IP address of the bare metal compute node.
[root@node-0cbcd0 ~]# cd /opt/openstack
[root@node-0cbcd0 openstack]# ls
manila-share openstack-compute openstack-compute-standalone.tar.gz
[root@node-0cbcd0 openstack]# scp openstack-compute-standalone.tar.gz root@172.25.50.150:/root
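Optionally, you can verify that the package was transferred intact by comparing checksums on the two nodes. This check is not part of the official procedure; the sketch below simulates the transfer with a local copy, and in practice you would run the same md5sum command on the controller node and on the compute node and compare the printed values.

```shell
#!/bin/sh
# Hypothetical integrity check after the scp transfer. The temporary
# file stands in for openstack-compute-standalone.tar.gz.
pkg=$(mktemp /tmp/openstack-compute-standalone.XXXXXX)
echo "package contents" > "$pkg"            # stand-in for the real tarball

# Checksum on the source (controller) side.
src_sum=$(md5sum "$pkg" | awk '{print $1}')

cp "$pkg" "$pkg.copy"                       # stand-in for the scp transfer

# Checksum on the destination (compute node) side.
dst_sum=$(md5sum "$pkg.copy" | awk '{print $1}')

if [ "$src_sum" = "$dst_sum" ]; then
  echo "checksums match"
else
  echo "checksums differ: transfer the file again"
fi
rm -f "$pkg" "$pkg.copy"
```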
Use SSH to log in to the bare metal compute node.
Execute the following commands to enter the root directory and find the openstack-compute-standalone.tar.gz file:
[root@ironic-c ~]# pwd
/root
[root@ironic-c ~]# ls
openstack-compute-standalone.tar.gz
Execute the following command to decompress the openstack-compute-standalone.tar.gz file:
[root@ironic-c ~]# tar -zxvf openstack-compute-standalone.tar.gz
Execute the following commands to enter the openstack-compute-standalone directory, and find the install-compute-pike.sh installation script:
[root@ironic-c ~]# cd openstack-compute-standalone
[root@ironic-c openstack-compute-standalone]# ls
compute images install-compute-pike.sh others packages readme.txt tools upgrade upgrade-ironic-mitaka2pike.sh yum.repos.d
Upload an installation package, for example, CloudOS-PLAT-E5102H01-V500R001B01D030SP02-RC4.iso, to the openstack-compute-standalone/ directory, and then verify that the package is in the directory:
[root@ironic-c openstack-compute-standalone]# ls
CloudOS-PLAT-E5102H01-V500R001B01D030SP02-RC4.iso images others readme.txt upgrade yum.repos.d
compute install-compute-pike.sh packages tools upgrade-ironic-mitaka2pike.sh
Execute the following command to run the install-compute-pike.sh script:
[root@ironic-c openstack-compute-standalone]# sh install-compute-pike.sh
Enter the required configuration:
Please enter the number of hypervisor type (0 qemu, 1 VMWare, 2 cas, 3 KVM, 4 novadocker, 5 ironic): 5
Your choice of VMTTYPE is : [ 5 ] continue...
Please enter Manage Network IP address of the compute node(172.25.50.150): 172.25.50.150
Input IP address is: 172.25.50.150 Verifying Connection....
IP connection is OK
Please enter Inspection Network IP address of the compute node(172.25.50.150): 172.25.50.150
Input IP address is: 172.25.50.150 Verifying Connection....
IP connection is OK
Please enter Provision Network IP address of the compute node(172.25.50.150): 172.25.50.150
Input IP address is: 172.25.50.150 Verifying Connection....
IP connection is OK
Please enter the Mode of Network(1 flat , 2 mutitenant):1
Please enter the Manage Network IP address of the controller node (or vitrual IP if it's cluster): 172.25.17.53
Input IP address is: 172.25.17.53 Verifying Connection....
IP connection is OK
Please enter the Public Network IP address of the controller node (or vitrual IP if it's cluster): 172.25.17.53
Input IP address is: 172.25.17.53 Verifying Connection....
IP connection is OK
Is the controller node a cluster environment?(1 yes, 2 no): 1
You have already confirmed the controller node you will connect is a Cluster mode! continue....
Please enter the IP address of the Matrix, for the configuration of ntpd server: 172.25.17.50
Input IP address is: 172.25.17.50 Verifying Connection....
IP connection is OK
Please enter the cinder storage type (0 lvm, 1 3par, 2 onestor): 1
Please enter the cinder storage availability zone (eg:cinder_az): cinder_3par
Please enter the volume type of 3par driver (0 fc, 1 iscsi): 0
Please enter the San IP of 3par server (eg: 192.103.10.250): 192.103.10.250
Please enter the user name of 3par server (eg: username): username
Please enter the password of 3par server (eg: password): password
Please enter the chosen CPG of 3par server(eg: cpg1): cpg1
Please enter the dhcp range (eg: 172.25.50.100,172.25.50.200): 172.25.50.151,172.25.50.160
Please enter the dhcp netmask (eg: 255.255.255.0): 255.255.240.0
Please enter the multi-export config bridge_mappings according to the usage(eg: physnet2:vswitch2,physnet3:vswitch3): vxlan:ironic
Enter information in the script as follows:
Number of hypervisor type—Enter 5, which represents a bare metal server.
Manage Network IP address of the compute node—Enter the IP address of the bare metal compute node for the node to communicate with the system and the bare metal server.
Inspection Network IP address of the compute node—Enter the IP address for the server to discover the compute node. By default, this IP address is the same as the IP address of the compute node.
Provision Network IP address of the compute node—Enter the provision network IP address for deploying the bare metal node.
Mode of Network—Enter a network mode. 1 represents a flat network, and 2 represents a multitenant network.
Manage Network IP address of the controller node—Enter the management IP address of a controller node.
Public Network IP address of the controller node—Enter the public network IP address of a controller node.
Cluster environment—Enter 1 if the system is deployed in cluster mode. Enter 2 if the system is not deployed in cluster mode.
IP address of the Matrix—Enter the IP address of the Matrix node.
cinder storage type—Enter the storage type of the bare metal storage AZ. In this example, the storage type is 3par.
cinder storage availability zone—Enter the name of the bare metal storage AZ.
volume type of 3par driver—Enter the storage volume type. 0 represents fc, and 1 represents iscsi.
San IP of 3par server—Enter the San IP of the 3par server.
user name/password of 3par server—Enter the username and password of the 3par server.
chosen CPG of 3par server—Enter the selected CPG on the 3par server.
Dhcp range—Enter the IP address range for IP address assignment through DHCP. The addresses in the range must be reachable from both the system and the bare metal compute node. The IP addresses assigned by the DHCP server are used for communication between the RAMdisk and ironic-conductor during bare metal node deployment and are reclaimed after deployment.
Dhcp netmask—Enter the subnet mask for the DHCP address range.
Multi-export config bridge_mappings according to the usage—Specify the network egress for the bare metal compute node, in the egress name:egress device format. If you specify multiple egresses, separate them with commas.
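Before running the script, you can sanity-check that the DHCP range you plan to enter falls inside the provision network subnet. This helper is a hypothetical sketch, not part of the installation script; the values below are the example values used in this section (provision IP 172.25.50.150, netmask 255.255.240.0, range 172.25.50.151 to 172.25.50.160).

```shell
#!/bin/sh
# Hypothetical pre-check: warn if either end of the DHCP range is
# outside the provision network subnet.
ip=172.25.50.150
mask=255.255.240.0
range_start=172.25.50.151
range_end=172.25.50.160

# Convert a dotted-quad address to a 32-bit integer.
to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Network address of the provision subnet.
net=$(( $(to_int "$ip") & $(to_int "$mask") ))

for addr in "$range_start" "$range_end"; do
  if [ $(( $(to_int "$addr") & $(to_int "$mask") )) -ne "$net" ]; then
    echo "WARNING: $addr is outside the provision subnet"
  fi
done
echo "DHCP range check complete"
```

With the example values, both range boundaries share the 172.25.48.0/20 network address with the provision IP, so no warning is printed.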
The script is executed successfully if information similar to that shown in the following figure is displayed.
Figure-2 Execution succeeded
To configure bare metal service when the PXE-enabled port on the bare metal server and the bare metal compute node belong to different subnets, see "Network planning when the PXE-enabled port on the bare metal server and the bare metal compute node belong to different subnets."