The deployment scenario you select during system initialization determines the type of clusters you can create. If you select HCI, you can create HCI and compute virtualization clusters. If you select compute virtualization, you can create only compute virtualization clusters.
After the system starts creating a cluster, the system displays the cluster creation progress page. During cluster creation, do not perform any operations.
After a cluster is created, perform the following tasks:
On the pool management page for ONEStor, create storage pools in node pools and disk pools.
Configure RBD network storage for the cluster.
Make sure the IP version of the hosts in the cluster matches the network IP version planned for Space Console.
From the left navigation pane, select Data Center > Virtualization.
Click Add Cluster.
Click Start.
Select a deployment scenario.
Configure network parameters, and then click Next.
Click OK to discover hosts.
By default, the system displays the hosts discovered through scanning by MAC address.
You can configure the system to discover hosts through scanning by IP address as follows:
Click Scan for Hosts, click IP Address, and then enter discrete or continuous IP addresses, for example, 10.125.36.100,10.125.36.105,10.125.36.111 or 10.125.36.100-10.125.36.111. Then, click OK.
Click Rescan. In the dialog box that opens, click OK to discover hosts.
| If the system scans hosts by MAC address, the hosts to be added must be on the same network segment as Space Console. If the system scans hosts by IP address, the hosts to be added must be on the same network segment as the management network of the cluster to avoid adding failures. |
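The two IP input formats accepted by the Scan for Hosts dialog, discrete comma-separated addresses and a continuous start-end range, can be expanded into a concrete host list as in this sketch. The helper name and data model are hypothetical, not part of the product:

```python
import ipaddress

def expand_scan_input(text: str) -> list[str]:
    """Expand the IP scan formats accepted by the Scan for Hosts dialog:
    discrete addresses separated by commas, or a continuous range written
    as start-end. Hypothetical helper for illustration only."""
    addresses = []
    for part in text.split(","):
        part = part.strip()
        if "-" in part:
            # Continuous range, for example 10.125.36.100-10.125.36.111
            start_s, end_s = part.split("-", 1)
            start = ipaddress.IPv4Address(start_s.strip())
            end = ipaddress.IPv4Address(end_s.strip())
            addresses += [str(ipaddress.IPv4Address(n))
                          for n in range(int(start), int(end) + 1)]
        else:
            # Discrete address, validated by IPv4Address
            addresses.append(str(ipaddress.IPv4Address(part)))
    return addresses
```

For example, `expand_scan_input("10.125.36.100-10.125.36.111")` yields the 12 addresses of the continuous range from the step above.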
Click Configure NICs for each host.
| You must assign a minimum of one host to a compute virtualization cluster and a minimum of two hosts to an HCI cluster. If an HCI cluster contains only two hosts, you must configure external monitor nodes. |
Click to select physical interfaces for networks. For an HCI cluster, you must select physical interfaces for the management network, service network, storage front-end network, and storage back-end network.
| Make sure the number of physical NIC interfaces and the network reuse settings are the same across hosts. If you configure the network settings of a host to be inconsistent with those of the other hosts, the settings on the inconsistent hosts do not take effect, and you must reconfigure them. For example, if you configure Host B with a different number of physical interfaces or different network reuse settings than Host A, the network settings on Host A do not take effect. |
Click Select Hosts to Add to Cluster, verify that the host configuration is correct, and then click OK.
| The configuration of a compute virtualization cluster finishes at this step. For an HCI cluster, you must finish the subsequent steps. |
Verify that the hosts have been added to the HCI cluster, and then click Next.
Configure basic storage parameters, and specify racks for the hosts.
Click Edit for a host to edit its purpose, cache disk, and data disk.
Click Next, and then click OK.
Click Finish.
Host Name Prefix: Configure the host name prefix.
Host Name Starting Number: Configure the initial host number.
Starting IP: Specify the starting IP address for the management, storage front-end, and storage back-end networks. The management network IP address cannot be on network 172.17.0.0/16 or any of its subnets, because this network is reserved for other services.
Subnet Mask: Specify the subnet mask for management, storage front-end, and storage back-end networks.
Gateway: Configure the management network gateway.
VLAN ID: Configure the VLAN ID for the management, storage front-end, and storage back-end networks.
NIC Template: After you enable this feature for a host, the system automatically configures network settings based on those of the host for other hosts that meet the following requirements:
Have physical NICs with the same names as those of the host, and the physical NICs are active.
The physical NICs meet the minimum rate requirements of corresponding networks.
Assume that you enable this feature for Host A, which uses eth0 as the management network NIC, eth1 as the service network NIC, and eth2 and eth3 as the storage network NICs. The system automatically configures network settings for a host only when the host meets the following requirements:
Has physical NICs eth0, eth1, eth2, and eth3, and the physical NICs are active.
The rate of eth0 and eth1 is not lower than 1000 Mbps.
The rate of eth2 and eth3 is not lower than 10000 Mbps.
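The NIC template matching rules above can be sketched as a small check. The function and the per-NIC data model (`rate_mbps`, `active`) are illustrative assumptions, not a product API:

```python
def template_applies(template: dict, candidate: dict) -> bool:
    """Check whether a candidate host qualifies for automatic configuration
    from a NIC template. Both arguments map NIC name to attributes;
    illustrative data model, not a product API."""
    for name, req in template.items():
        nic = candidate.get(name)
        if nic is None or not nic["active"]:
            return False          # NIC missing or not active
        if nic["rate_mbps"] < req["rate_mbps"]:
            return False          # below the network's minimum rate
    return True

# Host A's template from the example: eth0/eth1 need at least 1000 Mbps,
# eth2/eth3 (storage networks) need at least 10000 Mbps.
host_a = {"eth0": {"rate_mbps": 1000}, "eth1": {"rate_mbps": 1000},
          "eth2": {"rate_mbps": 10000}, "eth3": {"rate_mbps": 10000}}
```

A host with the same active NIC names at equal or higher rates qualifies; a host whose eth3 runs at only 1000 Mbps does not.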
IP: If you do not configure IP addresses for the hosts, the system automatically assigns IP addresses to the hosts in sequence. The starting IP address is assigned to the management node. You do not need to configure a service network IP address for a host.
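The sequential assignment described above, starting IP to the management node and consecutive addresses to the remaining hosts, combined with the reserved-network restriction on the management network, can be sketched as follows. The function is a hypothetical illustration, not a product API:

```python
import ipaddress

RESERVED = ipaddress.ip_network("172.17.0.0/16")  # reserved for other services

def assign_ips(starting_ip: str, host_count: int) -> list[str]:
    """Assign consecutive IP addresses beginning at starting_ip; the first
    address goes to the management node. Rejects addresses that fall in
    the reserved 172.17.0.0/16 network. Sketch only, not a product API."""
    start = ipaddress.IPv4Address(starting_ip)
    ips = [start + i for i in range(host_count)]
    for ip in ips:
        if ip in RESERVED:
            raise ValueError(f"{ip} falls in reserved network {RESERVED}")
    return [str(ip) for ip in ips]
```

For example, a starting IP of 10.125.36.100 for three hosts assigns .100 to the management node and .101 and .102 to the other hosts.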
Physical NIC: Specify physical NICs for the management, storage back-end, and storage front-end networks. The service network NIC is optional. If you do not specify a service network NIC, the system does not create a service network virtual switch for the host after the host joins the cluster. In this case, you must manually configure a service network virtual switch for the host.
LAGG Mode: Select a link aggregation mode for the physical NICs. Options include Static and Dynamic. As a best practice, use dynamic link aggregation mode. If you select the dynamic link aggregation mode, you must enable LACP on the physical switch. This parameter is available only when you select multiple physical interfaces. Follow these restrictions and guidelines:
Select two physical NICs with the same speed for link aggregation.
If you select dynamic link aggregation on the host, you must configure dynamic link aggregation on the server-facing interfaces of the physical switch. If the host cannot be discovered after you configure dynamic link aggregation on the physical switch, configure the server-facing aggregate interface connected to the management network NIC as an edge aggregate interface and scan hosts again.
If you select static link aggregation and set the LB mode to advanced or basic on the host, you must configure static link aggregation on the server-facing interfaces of the physical switch. If the host cannot be discovered after you configure static link aggregation on the physical switch, shut down the server-facing interface connected to the management network NIC not assigned to vSwitch0, and then scan for hosts again.
If you select static link aggregation and set the LB mode to active/standby on the host, do not configure link aggregation on the server-facing interfaces of the physical switch.
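The host-side and switch-side pairing rules above reduce to a small mapping. This sketch encodes the guidelines as stated; the function name and return strings are illustrative, not a product API:

```python
def switch_side_config(lagg_mode: str, lb_mode: str) -> str:
    """Return the link aggregation required on the server-facing switch
    interfaces for a host-side LAGG/LB combination, per the restrictions
    and guidelines above. Illustrative mapping, not a product API."""
    if lagg_mode == "dynamic":
        return "dynamic (LACP enabled)"   # enable LACP on the switch
    if lagg_mode == "static":
        if lb_mode in ("advanced", "basic"):
            return "static"               # static aggregation on the switch
        if lb_mode == "active/standby":
            return "none"                 # no aggregation on the switch
    raise ValueError("unsupported LAGG/LB combination")
```

For instance, static aggregation with active/standby load balancing requires no aggregation on the switch side, while dynamic aggregation always requires LACP.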
LB Mode: Set the load balancing mode of physical NICs. This parameter is configurable only when multiple physical interfaces exist.
Advanced—Performs load balancing for packets based on the Ethernet type, IP protocol, source IP address, destination IP address, source port number, and destination port number.
Basic—Performs load balancing for packets based on the source MAC address and VLAN tag. Use this mode as a best practice to increase bandwidth and ensure stability and availability.
Active/Standby—Performs load balancing for packets based on the primary and backup roles of physical NICs. If the primary NIC fails, traffic is automatically switched to the backup NIC. This mode is available only for static link aggregation.
Storage Management IP: Configure the management IP address of the storage cluster, which is the management HA IP address of the storage cluster.
Deployment Mode: Storage cluster deployment mode. Options include SSD Caches+HDDs, All SSDs, All HDDs, and HDDs+SSDs.
SSD caches+HDDs—Deploy HDDs as data disks and deploy SSDs as read or write caches. Make sure the following requirement is met on each server:
SSD count:HDD count ≥ 1:5
All SSDs—Deploy SSDs as data disks. Use this mode to provide high-performance storage. In this mode, no read or write caches are used.
All HDDs—Deploy HDDs as data disks without read or write caches. Use this mode to provide regular storage services.
HDDs+SSDs—Deploy SSDs and HDDs as data disks in high-performance storage pools and slow-performance storage pools, respectively, to provide storage services for applications that require different storage performance.
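The SSD count:HDD count ≥ 1:5 requirement for the SSD Caches+HDDs mode means at least one cache SSD per five HDDs on each server, which can be checked as in this sketch (hypothetical helper, not a product API):

```python
def meets_cache_ratio(ssd_count: int, hdd_count: int) -> bool:
    """SSD Caches+HDDs requires SSD count : HDD count >= 1:5 on each
    server, i.e. at least one cache SSD per five HDDs.
    Hypothetical helper, not a product API."""
    # ssd/hdd >= 1/5 rewritten without division to avoid float issues
    return ssd_count * 5 >= hdd_count
```

So one SSD covers up to five HDDs, two SSDs cover up to ten, and so on.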
Replicas: Set the number of replicas. For important services and scenarios that require high reliability, use three or more replicas as a best practice.
Provisioning: Select a block device provisioning mode.
Thick provisioning—The available capacity of a block device in the data pool of the disk pool is the same as that assigned when the block device is created, and the total capacity of the block devices cannot exceed the actual available capacity of the data pool. If this mode is selected, the predefined storage pool will be configured as an iSCSI shared file system.
Thin provisioning—The capacity assigned to a block device when it is created can exceed the available capacity of the data pool. If this mode is selected, the predefined storage pool will be configured as an RBD storage pool.
Cache Size: Set the cache size. This parameter is available only when you select the SSD Caches+HDDs deployment mode. The system divides SSDs into partitions based on the number of data disks and assigns a partition to each data disk as its cache. The default cache size is 100 GB. A larger cache partition provides better performance. You can increase the cache size if the amount of service data is large.
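As a rough sizing aid, the even split of one cache SSD across the data disks it serves can be sketched as below. This is a simplified model that ignores any overhead the system reserves; the function is illustrative, not a product API:

```python
DEFAULT_CACHE_GB = 100  # default cache partition size per data disk

def max_cache_gb(ssd_capacity_gb: float, data_disk_count: int) -> float:
    """Largest per-disk cache partition a single cache SSD can provide
    when divided evenly among its data disks. Simplified sketch that
    ignores partitioning overhead; not a product API."""
    return ssd_capacity_gb / data_disk_count
```

For example, a 960 GB SSD caching five HDDs can give each data disk up to 192 GB, comfortably above the 100 GB default, while a 400 GB SSD over five HDDs cannot sustain the default size.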
Fault Domain: Fault domain type of the storage cluster. By using fault domains and a redundancy policy together, a storage cluster saves the replicas or blocks of data to different fault domains to ensure data security and high availability.
Rack: Each rack is a fault domain. Use this fault domain type when the cluster is large and contains a large number of racks.
Host: Each host is a fault domain.
Cache Disks: Select cache disks for the host. Make sure the following requirement is met on each server for SSD caches+HDDs deployment: SSD count:HDD count ≥ 1:5
Rack: Select a rack for the host. A cluster can contain a maximum of 32 racks. A rack name cannot be identical to a disk pool name or host name.
Data Disks: Select data disks for the host. Make sure the following requirement is met on each server for SSD caches+HDDs deployment: SSD count:HDD count ≥ 1:5. If an HCI cluster contains two hosts, select at least three data disks on each host. If an HCI cluster contains three or more hosts, select at least two data disks on each host.
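The per-host data disk minimums above, at least three data disks per host in a two-host HCI cluster and at least two per host with three or more hosts, can be validated as in this sketch (hypothetical helper, not a product API):

```python
def valid_data_disk_counts(disks_per_host: list[int]) -> bool:
    """Check the per-host data disk minimums for an HCI cluster:
    two hosts need at least three data disks each; three or more hosts
    need at least two each. Hypothetical helper, not a product API."""
    n = len(disks_per_host)
    if n < 2:
        return False                      # an HCI cluster needs >= 2 hosts
    minimum = 3 if n == 2 else 2
    return all(count >= minimum for count in disks_per_host)
```

For example, a two-host cluster with three data disks on each host passes, but a two-host cluster where one host has only two data disks does not.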
Node IP Address: Enter the IP address of the external monitor node in a two-node HCI cluster.
Root Username: Enter the root username of the external monitor node.
Root Password: Enter the root password of the external monitor node.