This issue might occur when the following conditions exist:
The VM disk image file does not exist.
The VM is not mounted with a CD-ROM drive.
The vSwitches connected to the NICs of the VM do not exist.
The VM disk file type is incorrect.
The host does not have sufficient memory.
To resolve this issue:
Verify that the VM disk file exists with a non-zero file size:
In the navigation tree in CVM, expand Cloud Resources, and then navigate to vm01.
In the Summary tab, identify the path of the VM disk file. In this example, the file path is /vms/target3/vm01.
In the navigation tree, navigate to the host that contains vm01. Then, select the target at /vms/target3 in the Storage tab to verify that a VM disk file exists for vm01 and the file size is not zero.
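The disk file check above can also be performed from a shell on the host. The following is a minimal sketch; the path /vms/target3/vm01 is the example used in this document, so substitute the path shown on your VM's Summary tab:

```shell
# Sketch: confirm a VM disk file exists and has a non-zero size.
check_disk_file() {
    if [ -s "$1" ]; then
        echo "OK: $1 exists and is non-empty"
    else
        echo "FAIL: $1 is missing or empty"
    fi
}

# Example path from this document; replace with your VM's disk file path.
check_disk_file /vms/target3/vm01
```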
If a CD/DVD drive has been mounted on the VM, make sure the drive exists, whether it is a physical CD/DVD drive (/dev/cdrom) or an .iso virtual drive file (/vms/isos/winxpsp3.iso, for example), as follows:
From the navigation tree, select the VM under Cloud Resources.
Click Edit VM.
In the navigation pane on the VM modification page, select CD-ROM, and verify the CD/DVD drive setting:
If the right pane displays a Connect button, no CD/DVD drive has been mounted. You can rule out CD/DVD drive issues.
If the right pane displays a Disconnect button, a physical or .iso virtual CD/DVD drive has been mounted. You must verify whether the drive exists. If the physical drive or the .iso file does not exist, disconnect the CD/DVD drive. To verify the existence of the .iso file, see the VM disk file verification procedure described previously.
Verify that the vSwitches connected to the NICs of the VM exist and the profiles applied to the NICs exist and are correct.
Verify that the VM disk file type is correct:
Identify the disk file type (qcow2 for example) as described in step 1.
Open the VM modification page, select the Disk option in the navigation pane, and check the Storage Format field in the right pane. Typically, the storage format is intelligent for a qcow2 file and high-speed for a raw disk file.
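On the host, the on-disk format can also be inferred from the image header. The sketch below relies on the fact that qcow2 images start with the magic bytes "QFI" followed by 0xFB, and treats anything else as raw; on a real host, qemu-img info <file> reports the format directly:

```shell
# Sketch: infer a disk image format from its header magic bytes.
# qcow2 images begin with "QFI" (followed by the byte 0xFB).
disk_format() {
    if [ "$(head -c 3 "$1" 2>/dev/null)" = "QFI" ]; then
        echo qcow2
    else
        echo raw
    fi
}
```

Compare the result against the Storage Format field: qcow2 files normally use the intelligent format, and raw files the high-speed format, as described above.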
Verify that the host has sufficient memory for the VM. If memory runs low, add physical memory or free up memory by temporarily shutting down the idle VMs that are running.
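The memory check can be sketched as follows. The 4096 MB VM size is a hypothetical example; MemAvailable is read from /proc/meminfo on a Linux host:

```shell
# Sketch: compare the host's available memory with the memory configured
# for the VM before starting it.
mem_available_mb() {
    awk '/^MemAvailable:/ {print int($2 / 1024)}' /proc/meminfo
}

can_start_vm() {   # $1 = available MB, $2 = VM memory MB (example: 4096)
    if [ "$1" -ge "$2" ]; then
        echo "enough memory"
    else
        echo "low memory: add RAM or shut down idle VMs"
    fi
}

can_start_vm "$(mem_available_mb)" 4096
```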
A Windows VM cannot be started after the memory limit feature is enabled.
This symptom might occur if the memory size configured for the Windows VM is larger than the size specified by the memory limit feature. After you enable memory limit for a VM, the memory in excess of the limit is moved to the Swap partition. If the Swap partition does not have sufficient space, the VM cannot be started.
Before enabling memory limit, make sure the Swap partition has sufficient space.
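The swap check can be sketched as follows. The 2048 MB figure is a hypothetical amount of memory to be swapped; SwapFree is read from /proc/meminfo on a Linux host:

```shell
# Sketch: check that the Swap partition can hold the memory that the
# memory limit feature will push to swap.
swap_free_mb() {
    awk '/^SwapFree:/ {print int($2 / 1024)}' /proc/meminfo
}

swap_can_hold() {   # $1 = free swap MB, $2 = MB to be swapped (example: 2048)
    if [ "$1" -ge "$2" ]; then
        echo "swap is sufficient"
    else
        echo "swap too small: do not enable memory limit yet"
    fi
}

swap_can_hold "$(swap_free_mb)" 2048
```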
I cannot reinstall a Windows guest OS for a VM after an unexpected host shutdown event caused an installation failure.
To resolve this issue, use either of the following methods:
Restart the VM and press any key without delay to continue with the OS installation.
Re-create the VM and install its guest OS.
The VM console does not respond to keyboard input after a failed attempt at pressing Ctrl+Alt+Del to restart a Windows VM that has failed to start up.
This issue typically occurs if the VM disk is corrupt or no guest OS is installed.
To resolve this issue:
If the Windows disk is corrupt, mount the .iso repair image file onto the VM, and configure the VM to start from the CD/DVD drive. Then, restart the VM, and at the same time, open its console. In the console, press any key at the prompt to start the VM from the CD/DVD drive for a repair or reinstallation.
If the VM does not have a guest OS, mount the .iso guest OS file onto the VM or configure the VM to obtain a guest OS from a remote server over the network. Configure the VM to start from the CD/DVD drive or the network. Then, restart the VM, and at the same time, open its console to install the guest OS.
The system displays a failure message if you perform a VM operation (start, restart, shutdown, or put-to-sleep for example) on an HA cluster or on a cluster on which HA is being enabled or disabled.
This issue is most likely to occur in the following situations:
The host or the network is so busy that the VM operation result fails to reach CVM before the waiting timer (1 minute) expires.
The HA mechanism automatically brings up a VM that has failed to start up because of a storage or network misconfiguration after the misconfiguration is corrected.
To resolve this issue:
Avoid performing VM operations on an HA cluster member host while the host or the network is busy.
Make sure the VM has a correct storage and network configuration before you start it.
I receive the "Disabling IRQ #10 BUG: soft lockup - CPU#0 stuck for 67s! [migration/0:5]" error message when I attempt to shut down or restart some VMs that use a Linux guest OS.
This issue typically occurs if a VM uses a virtio NIC, which has poor compatibility with some versions of Linux.
When this error message occurs, the VM shutdown procedure is almost finished. You can power off the VM from CAS to force the VM down without causing problems.
A cluster process not started error message is displayed when a VM is started, shut down, or moved in an HA cluster.
This issue occurs if HA service is not started on the host that contains that VM.
To resolve this issue, start HA service on the host by using one of the following methods:
In CVM, disable and then re-enable HA on the cluster that contains the VM.
Log in to the host as the root user, and then execute the service corosync start command.
Restart the host.
The "internal error pool iSCSITarget8-lun1 has asynchronous jobs running" error message is displayed when storage volumes are added or a VM is cloned.
This issue occurs if you perform an operation that requires concurrent accesses to multiple volumes in a storage pool, including:
Bulk-create volumes in a storage pool.
Bulk-clone VMs whose source image files are stored in the same storage pool.
Bulk-clone multiple VMs and store their new image files in the same storage pool.
To avoid this issue, create volumes or clone VMs one by one if they are in the same storage pool.
The system displays that the destination storage volume file already exists during a VM operation (migration, clone, deployment, or template creation through clone or conversion).
This issue occurs when the system detects that the target storage pool contains a volume file that has the same name as the source file.
To resolve this issue, delete the volume file contained in the target storage pool if that file will not be used any longer, and then perform the VM operation again. If that file will still be used, perform the VM operation again and do the following during the procedure:
If you are moving the VM, change the target storage pool.
If you are performing a VM operation other than migration, change the VM name, the VM template name, or the target storage pool.
A VM failed to find its disk at startup after a template-based VM deployment, VM backup, VM import, or import of P2V or V2V files is performed between CVMs.
This issue typically occurs if a VM (an openSUSE-based VM, for example) depends on the CID file of CVM to identify disk partitions. Different CVMs have different CID files, so the VM will fail to find its disk when it is moved between different CVMs.
To resolve this issue, change the disk partition identification method from by-id to by-uuid or by-path in the /etc/fstab file.
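For example, an /etc/fstab entry could change as follows. The device ID and UUID below are placeholders for illustration; run blkid on the partition to obtain its real UUID:

```
# Before: the file system is referenced by device ID, which breaks when
# the VM is moved between CVMs.
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part1  /  ext4  defaults  0 1

# After: the same entry referenced by UUID, which is stable across CVMs.
# (Placeholder UUID; obtain the real value with blkid.)
UUID=2d47a6f2-0000-0000-0000-000000000000  /  ext4  defaults  0 1
```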
CVM displays that the storage file already exists when I retry importing a VM whose previous import from a local directory failed with an unknown error.
This issue typically occurs if the network connection to a host is lost while CVM is importing VMs from a local directory to that host, but the import script continues to run in the back end. The VMs are then imported successfully in the back end, but the front end never receives the success result, so it reports an unknown exception and does not save the VM data to the database or refresh the storage pool.
To resolve this issue:
Connect to the host.
Delete the VMs that failed to be imported to the host.
Refresh the storage pool that contains the image files of those VMs, and then delete the VM image files.
Import the VMs again after the network gets stable.
A VM operation fails, a VM is unexpectedly suspended, or the VM status unexpectedly changes to unknown.
This issue typically occurs if the root directory of the file system on the host runs out of disk space.
To resolve these VM issues, delete or move unused files from the root directory on the host until sufficient free space is available. As a best practice, make sure the root directory has a minimum of 2 GB of free space.
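The free-space check can be sketched as follows, using the 2 GB minimum recommended above:

```shell
# Sketch: check free space (in MB) on the host's root file system.
root_free_mb() {
    df -Pm / | awk 'NR == 2 {print $4}'
}

if [ "$(root_free_mb)" -lt 2048 ]; then
    echo "low space: delete or move unused files from the root directory"
else
    echo "root file system has sufficient free space"
fi
```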
This issue also occurs if the number of libvirt connections to the host exceeds the limit. To resolve this issue, wait a while and then try again.
Online VM migration failed and an error message is displayed after the HugePages memory is enabled for the VM.
To resolve this issue, shut down the VM and set its memory size to an integer multiple of the memory page size of the destination host. To add memory resources to the VM online after the VM is enabled with HugePages memory, make sure the size of the added memory resources is also an integer multiple of the memory page size of the destination host.
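The alignment requirement above can be sketched as follows. The 2 MB huge page size and the VM memory sizes are hypothetical examples; read the Hugepagesize field from /proc/meminfo on the real destination host:

```shell
# Sketch: verify that a VM memory size is an integer multiple of the
# destination host's huge page size.
is_hugepage_aligned() {   # $1 = VM memory MB, $2 = huge page size MB
    if [ $(($1 % $2)) -eq 0 ]; then
        echo aligned
    else
        echo "not aligned: round up to the next multiple of $2 MB"
    fi
}

is_hugepage_aligned 4096 2
is_hugepage_aligned 4097 2
```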
A network communication error occurs or the system jumps to the CVM login page when bulk operations are performed on VMs on a host.
This issue might result from frequent console refresh operations.
To avoid automatic jumping to the CVM login page, log in to CVM again and reduce the number of VMs in each bulk operation.
To avoid communication errors, refresh the page or log in to CVM again, and reduce the number of VMs in each bulk operation.
A VM enters unknown state after a DRX task is executed in a stateful failover system.
The VM enters unknown state because the following conditions exist:
The reclaim mode of the DRX service is Delete VM.
The DRX service reclaims the VM.
The back-end data is not synchronized to the front-end.
A primary/backup switchover occurs in the stateful failover system.
To resolve this issue:
On the top navigation bar, click Resources.
From the left navigation pane, select Host Pool Name > Cluster Name > Host Name or Host Pool Name > Host Name to enter the configuration page of the host to which the VM belongs.
Click More, and then select Connect Host.
Click OK.
The VM will be deleted from both the host and the DRX service after the host is reconnected to CVM.
I cannot set up the network by using CAStools after a VM is deployed based on a Red Hat Enterprise Linux 7 template.
This issue occurs because the VM cannot establish a session with the NetworkManager daemon, which Red Hat Enterprise Linux 7 uses by default to monitor and manage the network setup. When you use CAStools to change the IP address of the VM, CAStools cannot establish a session with NetworkManager, so network setup fails after the VM restarts.
To resolve this issue, install the most recent version of CAStools on the VM, use the chkconfig NetworkManager off command to disable NetworkManager, and then reconfigure the template.
This issue occurs because the serial port cannot identify which VM is communicating.
To resolve this issue, make sure only one of the VMs is running.