Create a compute node

The system does not support deploying power compute nodes through the Web interface. To create a power compute node, you must convert a VMware compute node, as follows:

  1. Create a VMware compute node.

  2. Convert the VMware compute node into a power compute node.

Create a VMware compute node

Create a VMware compute node on VMware

If you have access to VMware, create a VMware compute node on VMware as follows:

  1. Create virtualization settings.

  2. Create a compute node.

For more information, see "Manage compute nodes."

Create a virtual VMware compute node

If you do not have access to VMware, create a VMware compute node as follows:

  1. Use an HTTP request tool to request token information. The IP address in the URL is the management IP address of the system; replace it with the IP address used in your environment.

POST http://172.25.51.97:8000/sys/identity/v1/tokens

Header:

Accept:application/json

Content-Type:application/json

Body:

{
  "identity": {
    "username": "admin",
    "password": "xxx"          /*Password of the current administrator*/
  }
}
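The token request above can also be scripted. The following Python sketch builds the request with the standard library. The IP address and credentials are the guide's placeholders, and where the returned token appears in the response is deployment specific, so the sketch only prepares the request rather than sending it:

```python
import json
import urllib.request

def build_token_request(mgmt_ip, username, password):
    """Build (but do not send) the token request for the identity service."""
    url = "http://%s:8000/sys/identity/v1/tokens" % mgmt_ip
    body = {"identity": {"username": username, "password": password}}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Accept": "application/json",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder values from this guide; replace them with the values
# for your environment before sending with urllib.request.urlopen().
req = build_token_request("172.25.51.97", "admin", "xxx")
print(req.get_method())  # POST
print(req.full_url)
```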

  2. Create a VMware compute node. The IP address in the URL is the management IP address of the system; replace it with the IP address used in your environment.

POST http://172.25.51.97:8000/os/compute/v1/xxos/computenode

Header:

Accept:application/json

Content-Type:application/json

X-Auth-Token: {token obtained in the previous step}

Body:

{
  "allPass" : "xxos",
  "resourceNode" : "host",
  "useLocalStorage" : "True",
  "linkClone" : "False",
  "mode" : "expand",
  "ip" : "172.25.51.97",                  /*System management IP address*/
  "hostName" : "host818181",              /*Compute node name*/
  "vmType" : 1,                           /*VMware: 1*/
  "hostIp" : "172.25.17.181",             /*Compute node IP*/
  "userName" : "[email protected]",     /*Compute node username*/
  "password" : "vmware",                  /*Compute node password*/
  "poolName" : "xxos",
  "vxlanOverlayMode" : "3",               /*Networking mode: 3 for VLAN, 2 for VXLAN host overlay, 1 for VXLAN network overlay*/
  "storageZone" : "cinder818181",
  "clusterName" : "xxos",
  "clusterId" : "domain-c7",
  "initMode" : "vmware_api",
  "vswitch" : [ {
    "netName" : "dmxhbg==",               /*Base64-encoded value*/
    "device" : "vSwitch0"
  } ],
  "pci" : [ ],
  "vmwareVersion" : "2"
}
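The netName field must carry the network name Base64-encoded; the sample value dmxhbg== is the encoding of the name vlan. A minimal Python sketch for producing this field (the actual network name depends on your vSwitch configuration):

```python
import base64
import json

def encode_net_name(name):
    """Base64-encode a vSwitch network name for the netName field."""
    return base64.b64encode(name.encode("utf-8")).decode("ascii")

# "dmxhbg==" in the request body above decodes to "vlan".
assert base64.b64decode("dmxhbg==") == b"vlan"

# Build the vswitch entry used in the request body.
vswitch_entry = {"netName": encode_net_name("vlan"), "device": "vSwitch0"}
print(json.dumps(vswitch_entry))
```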

  3. Check the service status in the proper Nova container and verify that the pods named after the compute node are running, as described in the following steps.

  4. Log in to the control node through SSH and load environment variables.

source /opt/bin/common/tool.sh

  5. Locate the Nova container ID.

docker ps | grep host_name

  6. Enter the Nova container.

docker exec -it container_id bash

  7. Load environment variables.

source /root/admin-openrc.sh

  8. View the service status.

nova service-list
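If you automate this check, the nova service-list table can be scanned for the new compute node. The helper below is a hypothetical sketch; it assumes the standard pipe-delimited table layout and uses the compute node name from earlier in this guide:

```python
def compute_service_up(service_list_output, host_name):
    """Return True if nova-compute on the given host reports state 'up'
    in pipe-delimited `nova service-list` table output."""
    for line in service_list_output.splitlines():
        if "nova-compute" in line and host_name in line:
            cells = [c.strip() for c in line.strip("|").split("|")]
            return "up" in cells
    return False

# Sample row in the table layout printed by nova service-list.
sample = """
| 7 | nova-compute | host818181 | nova | enabled | up | 2019-01-01T00:00:00 |
"""
print(compute_service_up(sample, "host818181"))  # True
```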

Convert a VMware compute node into a power compute node

·          As a best practice, back up the configuration of the original environment to prevent data loss caused by misoperation. In the configuration files, the first letter of each word is lowercase.

·          Obtain the newest power driver from your power service provider or copy it from a deployed power host.

 

  1. Log in to the system control node through SSH and upload the newest power driver file to /tmp.

  2. Load environment variables.

[root@d013rc6-c203 ~]# source /opt/bin/common/tool.sh

  3. Locate the container ID based on the host name of the compute node.

[root@d013rc4-c202 ~]# pod | grep com61

default   com234rc-fht5n     1/1       Running   3          5d        10.101.29.28    172.25.48.203

  4. Enter the compute node container.

[root@d013rc6-c203 ~]# docker exec -it container_id bash

  5. Use the vi editor to add the following information to the /etc/nova/nova.conf file of the compute node container. Replace the URL, username, and password used for accessing the PowerCenter console with the values for your environment.

compute_driver = pcenter.Pcenter.Driver

[pcenter]

pcenter_url = http://lingcloudpower.ticp.net:25549/rest   # Contact LingCloud Power to obtain the port number.

pcenter_username = nova

pcenter_password = nova1234

group = cec

version = 1
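The [pcenter] section above can also be written programmatically instead of with vi. The sketch below is a hypothetical helper using Python's configparser; it covers only the [pcenter] options (the compute_driver line is a top-level option and is left to the manual edit) and is exercised against a scratch file rather than the real configuration:

```python
import configparser
import os
import tempfile

# Options copied from the manual edit above; the URL, username, and
# password are the guide's placeholders -- substitute your own values.
PCENTER_OPTIONS = {
    "pcenter_url": "http://lingcloudpower.ticp.net:25549/rest",
    "pcenter_username": "nova",
    "pcenter_password": "nova1234",
    "group": "cec",
    "version": "1",
}

def add_pcenter_section(conf_path):
    """Append a [pcenter] section to a nova.conf-style file."""
    conf = configparser.ConfigParser()
    conf.read(conf_path)  # a missing file is treated as empty
    conf["pcenter"] = PCENTER_OPTIONS
    with open(conf_path, "w") as fp:
        conf.write(fp)

# Exercise the helper against a scratch file, not the real nova.conf.
path = os.path.join(tempfile.mkdtemp(), "nova.conf")
add_pcenter_section(path)
check = configparser.ConfigParser()
check.read(path)
print(check["pcenter"]["pcenter_username"])  # nova
```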

  6. Use the vi editor to add the following information to the /etc/cinder/cinder.conf file of the compute node container. Replace the URL, username, and password used for accessing the PowerCenter console with the values for your environment.

enabled_backends = pcenter

[pcenter]

volume_group = pcenter

volume_driver = cinder.volume.drivers.pcenter.PcenterDriver

volume_backend_name = pcenter

pcenter_url = http://114.242.9.246:25549/rest

pcenter_name = cinder

pcenter_password = cinder1234

scheme = vglv

group = cec

version = 1

  7. Use the vi editor to add the following information to the /etc/ceilometer/ceilometer.conf file of the compute node container. Replace the URL, username, and password used for accessing the PowerCenter console with the values for your environment.

hypervisor_inspector = pcenter

[pcenter]

pcenter_url = http://114.242.9.246:25549/rest

pcenter_username = ceilometer

pcenter_password = ceilometer1234

version = 1

  8. Use the vi editor to add the following line to the /usr/lib/python2.7/site-packages/ceilometer-9.0.1-py2.7-egg-info/entry_points.txt file of the compute node container.

pcenter = ceilometer.compute.virt.pcenter.inspector:PcenterInspector
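Entry-point lines follow the name = module:attribute convention, so a quick way to sanity-check the line before saving is to split it the same way a loader does. A small illustrative sketch:

```python
def parse_entry_point(line):
    """Split an entry_points.txt line of the form 'name = module:attr'."""
    name, _, target = (part.strip() for part in line.partition("="))
    module, _, attr = (part.strip() for part in target.partition(":"))
    return name, module, attr

name, module, attr = parse_entry_point(
    "pcenter = ceilometer.compute.virt.pcenter.inspector:PcenterInspector")
print(name, attr)  # pcenter PcenterInspector
```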

  9. Create folders to store the PowerCenter driver files in the compute node container.

Create a folder for Nova driver files:

[root@com61rc /]# mkdir /usr/lib/python2.7/site-packages/nova/virt/pcenter

Create a folder for Cinder driver files:

[root@com61rc /]# mkdir /usr/lib/python2.7/site-packages/cinder/volume/drivers/pcenter

Create a folder for Ceilometer driver files:

[root@com61rc /]# mkdir /usr/lib/python2.7/site-packages/ceilometer/compute/virt/pcenter

  10. Copy the Nova, Cinder, and Ceilometer driver files for PowerCenter to the proper folders in the compute node container.

[root@d013rc6-c203 ~]# docker cp /tmp/driver_file_name container_id:/target_folder_path

  11. Restart the openstack-nova-compute, openstack-cinder-volume, and openstack-ceilometer-polling services in the compute node container.

[root@com61rc /]# service openstack-nova-compute restart

[root@com61rc /]# service openstack-cinder-volume restart

[root@com61rc /]# service openstack-ceilometer-polling restart

  12. Verify that the openstack-nova-compute, openstack-cinder-volume, and openstack-ceilometer-polling services are running in the compute node container.

[root@com61rc /]# service openstack-nova-compute status

[root@com61rc /]# service openstack-cinder-volume status

[root@com61rc /]# service openstack-ceilometer-polling status
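To script this final verification, the status output of each service can be scanned for a running indicator. The helper below is a hypothetical sketch; the "is running" wording assumes SysV-style service output and may differ in your environment:

```python
REQUIRED_SERVICES = (
    "openstack-nova-compute",
    "openstack-cinder-volume",
    "openstack-ceilometer-polling",
)

def services_not_running(status_outputs):
    """Given a mapping of service name -> `service <name> status` output,
    return the services whose output does not report 'running'.
    The 'running' keyword is an assumption about SysV-style output."""
    return [name for name in REQUIRED_SERVICES
            if "running" not in status_outputs.get(name, "")]

# Sample outputs illustrating one healthy check and one failure.
sample = {
    "openstack-nova-compute": "openstack-nova-compute (pid 1234) is running...",
    "openstack-cinder-volume": "openstack-cinder-volume is stopped",
    "openstack-ceilometer-polling": "openstack-ceilometer-polling (pid 5678) is running...",
}
print(services_not_running(sample))  # ['openstack-cinder-volume']
```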