Migrating Xen domainU Guests Between Host Systems
One of the most compelling features of Xen virtualization is the ability to migrate a domainU guest from one host system to another. Of particular significance is Xen's ability to perform live migrations, whereby a running domainU guest is moved from one host to another with only an imperceptible interruption in service. In this chapter we will discuss the requirements for performing live Xen domainU migrations before walking through an example migration.
Requirements for Xen domainU Migration
Before a Xen guest system can be migrated from one host to another, a number of requirements must be met:
- Both the source and destination hosts must have access to the guest's root filesystem (and swap, if specified) via the same path name. For example, if the root filesystem is contained in a disk image with a path of /xen/xenguest.img, then that image file must also be accessible at the same location on the target host. This is most commonly achieved by placing the image files on a file server so that they can be mounted via NFS on both hosts (a quick way to verify this and the following requirement is shown after this list).
- Both systems must be running compatible processors.
- The target host must have sufficient memory to accommodate the migrated guest domain.
- The source and destination machines must reside on the same subnet.
- The two systems need to be running compatible versions of Xen.
- Firewall settings (and SELinux if enabled) must be configured to permit communication between the source and destination hosts.
- Both systems must be configured to allow migration of virtual machines.
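As a quick sanity check for the first two requirements, the following commands (a minimal sketch using the example image path from above) can be run on both hosts to confirm that the disk image is visible at the same path and to compare processor models:

# Confirm that the guest disk image is visible at the same path on this host
ls -l /xen/xenguest.img

# Display the processor model so it can be compared with the other host
grep 'model name' /proc/cpuinfo | sort -u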
Enabling Xen Guest Migration
By default, guest domain migration is disabled in the Xen configuration, so a number of changes are necessary before a migration can be performed. The required settings are located in the /etc/xen/xend-config.sxp configuration file and the changes must be made on both the source and target host systems. The first modification involves setting the xend-relocation-server value to yes:
(xend-relocation-server yes)
Secondly, the xend-relocation-hosts-allow value must be changed to define the hosts from which relocation requests will be accepted. This can be a list of hostnames or IP addresses including wildcards. An empty value may also be specified (somewhat insecurely) to accept connections from any host. The following example allows migration requests only from a host with an IP address of 192.168.2.20:
(xend-relocation-hosts-allow '192.168.2.20')
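The value is treated as a space-separated list of patterns, so more than one host can be permitted. For example, the following illustrative entry would accept relocation requests from both the source host used in this chapter and the local machine:

(xend-relocation-hosts-allow '^192\\.168\\.2\\.20$ ^localhost$')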
Finally, the xend-relocation-address and xend-relocation-port settings on the source and destination systems must match. Leaving these values commented out with '#' characters, so that the defaults are used, is also a viable option, as shown below:
#(xend-port 8000)
#(xend-relocation-port 8002)
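If you prefer to set these values explicitly rather than relying on the defaults, the equivalent uncommented entries would resemble the following (8002 is the default relocation port, and an empty address string causes xend to listen on all interfaces):

(xend-relocation-port 8002)
(xend-relocation-address '')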
Xen Migration Firewall Configuration
Before a Xen guest domain can be migrated, it is important to ensure that the firewall settings on the destination host are configured appropriately. As outlined above, Xen uses port 8002 by default for performing migrations. If this port is blocked by the firewall on the destination host then the migration will fail with the following error message:
Error: can't connect: No route to host
In order to resolve this problem it is essential that the firewall on the destination host be configured to allow TCP traffic on either port 8002 or, if the default value was not used, the port number specified for xend-relocation-port in the /etc/xen/xend-config.sxp file.
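For example, on a Red Hat style system that manages its firewall with iptables, rules along the following lines could be used to open the default relocation port (this is a sketch only; adapt it to your distribution's firewall tooling):

# Allow incoming Xen relocation traffic on the default port
iptables -I INPUT -p tcp --dport 8002 -j ACCEPT

# Persist the rule across reboots (Red Hat / CentOS style)
service iptables save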
Preparing the Xen Migration Environment
The next requirement for performing live migrations is a Xen domainU guest to migrate. For the purposes of this chapter, we will use a disk image based domainU guest installed in the /xen directory of a server. The steps required to create such a guest are outlined in detail in the chapter entitled Building a Xen Virtual Guest Filesystem on a Disk Image (Cloning Host System), which explains how to create and configure the following files:
XenGuest1.img
XenGuest1.swap
XenGuest1.cfg
Throughout this tutorial we will also work on the assumption that the source host has an IP address of 192.168.2.20 and the target host has an IP address of 192.168.2.21.
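For reference, a disk image based guest of this type is typically described by a configuration file along the following lines. This is an illustrative sketch only; the kernel and ramdisk paths, device names and bridge name are assumptions that will differ on your own system:

# /xen/XenGuest1.cfg - illustrative paravirtualized guest configuration
name    = "centos.5-1"                          # domain name as it will appear in xm list
memory  = 128
kernel  = "/boot/vmlinuz-2.6.18-8.el5xen"       # assumed kernel path
ramdisk = "/boot/initrd-2.6.18-8.el5xen.img"    # assumed initrd path
disk    = ['tap:aio:/xen/XenGuest1.img,xvda1,w',
           'tap:aio:/xen/XenGuest1.swap,xvda2,w']
root    = "/dev/xvda1 ro"
vif     = ['bridge=xenbr0']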
Once the domainU guest has been created, the source host needs to be configured to allow its /xen directory to be mounted via NFS on the target host. On Linux systems this typically involves adding an entry to the /etc/exports file. As an example, the following entry enables the target host (IP address 192.168.2.21) to mount the /xen directory located on the source host (192.168.2.20) via NFS:
/xen 192.168.2.21(rw,no_root_squash,async)
Of particular importance in the above configuration is the no_root_squash option, which allows the target host to write to the /xen directory via NFS as the super-user, a key requirement for performing the live migration. Once the /etc/exports file has been modified, the new export needs to be activated using the exportfs -a command:
exportfs -a
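To verify from the target host that the export is visible before attempting to mount it, the showmount command (part of the standard NFS utilities) can be used:

# Run on the target host to list the directories exported by the source host
showmount -e 192.168.2.20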
With the exports configured, /xen can be mounted on the target system as follows:
mkdir /xen
mount 192.168.2.20:/xen /xen
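If the mount is required to survive a reboot of the target host, an entry such as the following (the options shown are illustrative defaults) can also be added to /etc/fstab on the target:

192.168.2.20:/xen   /xen   nfs   defaults   0 0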
Running the DomainU Guest
Before attempting to perform a live migration of the Xen guest, it is worth first checking that the guest will run successfully on both the source and target hosts. This will verify, for example, that both systems are configured with enough memory to run the guest. Beginning on the source host, change directory to /xen and start the guest domain as follows:
cd /xen
xm create XenGuest1.cfg -c
Assuming the guest boots successfully, execute the appropriate shutdown command and wait for the guest to exit. Once you are certain the guest has exited, repeat the above steps on the target system. If any problems occur on either system, rectify them before attempting the migration. Be sure to shut down the guest on the target system before proceeding.
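A simple way to confirm that each host has enough free memory for the guest before proceeding is to inspect the output of xm info on both systems, for example:

# Show total and currently free memory (in MB) on this host
xm info | grep -E 'total_memory|free_memory'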
Performing the Migration
The first step in performing the migration is to start up the guest on the source host:
xm create XenGuest1.cfg -c
Once the system has booted, exit from the console by pressing Ctrl+]. We can now view the list of guests running on the host:
# xm list
Name                               ID Mem(MiB) VCPUs State   Time(s)
Domain-0                            0      864     2 r-----   3995.3
centos.5-1                          1      127     1 -b----     82.6
As shown above, our guest domain has been assigned an ID of 1. To perform the live migration we need to use the xm migrate command, the syntax for which is as follows:
xm migrate <domain id> <target host> -l
In the above syntax outline, <domain id> represents the ID of the domain to be migrated (obtainable using xm list), <target host> is the host name or IP address of the destination server, and the -l flag indicates that a live migration is to be performed.
In order to migrate our guest (domain ID 1) to our target host (IP address 192.168.2.21) we therefore need to execute the following command:
xm migrate 1 192.168.2.21 -l
After a short period of time the guest domain will no longer appear on the list of guests on the source host and will now be running on the target host.
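To confirm the result, run xm list on the target host; the migrated guest should now appear in its list of running domains (note that it may be assigned a different domain ID on the target). If required, a console can then be reattached using the guest's name:

xm list
xm console centos.5-1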
Checking the Xen Log for Migration Errors
From a user's perspective the migration is a silent process. This can make tracking down problems difficult, since a failed migration will often not report anything on the command line. If problems are encountered during the migration, the first place to look for information is the Xen log, which can be viewed by running the xm log command. The following shows partial output from a successful live migration:
[2008-06-11 15:46:02 xend 2404] DEBUG (balloon:127) Balloon: 132008 KiB free; need 131072; done.
[2008-06-11 15:46:02 xend 2404] DEBUG (XendCheckpoint:215) [xc_restore]: /usr/lib/xen/bin/xc_restore 16 1 1 2 0 0 0
[2008-06-11 15:46:04 xend 2404] INFO (XendCheckpoint:351) xc_domain_restore start: p2m_size = 8800
[2008-06-11 15:46:04 xend 2404] INFO (XendCheckpoint:351) Reloading memory pages: 0%
[2008-06-11 15:46:47 xend 2404] INFO (XendCheckpoint:351) Received all pages (0 races)
[2008-06-11 15:46:47 xend 2404] INFO (XendCheckpoint:351) 100%
[2008-06-11 15:46:47 xend 2404] INFO (XendCheckpoint:351) Memory reloaded (4440 pages)
[2008-06-11 15:46:47 xend 2404] INFO (XendCheckpoint:351) Domain ready to be built.
[2008-06-11 15:46:47 xend 2404] INFO (XendCheckpoint:351) Restore exit with rc=0
[2008-06-11 15:46:47 xend 2404] DEBUG (XendCheckpoint:322) store-mfn 5108
[2008-06-11 15:46:47 xend 2404] DEBUG (XendCheckpoint:322) console-mfn 128698
[2008-06-11 15:46:48 xend.XendDomainInfo 2404] DEBUG (XendDomainInfo:691) XendDomainInfo.completeRestore
[2008-06-11 15:46:49 xend.XendDomainInfo 2404] DEBUG (XendDomainInfo:791) Storing domain details: {'console/ring-ref': '128698', 'console/port': '2', 'name': 'centos.5-1', 'console/limit': '1048576', 'vm': '/vm/bd0c2520-1094-0b71-a3ed-c6a5f917f235', 'domid': '1', 'cpu/0/availability': 'online', 'memory/target': '131072', 'store/ring-ref': '5108', 'store/port': '1'}
[2008-06-11 15:46:49 xend.XendDomainInfo 2404] DEBUG (XendDomainInfo:707) XendDomainInfo.completeRestore done
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices vif.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:149) Waiting for 0.
[2008-06-11 15:46:49 xend.XendDomainInfo 2404] DEBUG (XendDomainInfo:988) XendDomainInfo.handleShutdownWatch
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:476) hotplugStatusCallback /local/domain/0/backend/vif/1/0/hotplug-status.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:490) hotplugStatusCallback 1.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices usb.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices vbd.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices irq.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices vkbd.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices vfb.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices pci.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices ioports.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices tap.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:149) Waiting for 51713.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:476) hotplugStatusCallback /local/domain/0/backend/tap/1/51713/hotplug-status.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:490) hotplugStatusCallback 1.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:149) Waiting for 51714.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:476) hotplugStatusCallback /local/domain/0/backend/tap/1/51714/hotplug-status.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:490) hotplugStatusCallback 1.
[2008-06-11 15:46:49 xend 2404] DEBUG (DevController:143) Waiting for devices vtpm.
If the migration fails, the log output will likely differ from the above in some significant way. A common cause of failure is insufficient memory on the target system to accommodate the guest domain. In this situation the Xen log will likely contain output similar to the following:
[2008-06-12 10:53:48 xend 2414] DEBUG (balloon:133) Balloon: 31656 KiB free; 0 to scrub; need 524288; retries: 20.
[2008-06-12 10:54:10 xend.XendDomainInfo 2414] DEBUG (XendDomainInfo:1557) XendDomainInfo.destroy: domid=4
[2008-06-12 10:54:10 xend.XendDomainInfo 2414] DEBUG (XendDomainInfo:1566) XendDomainInfo.destroyDomain(4)
[2008-06-12 10:54:10 xend 2414] ERROR (XendDomain:268) Restore failed
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/xen/xend/XendDomain.py", line 263, in domain_restore_fd
    return XendCheckpoint.restore(self, fd)
  File "/usr/lib/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 207, in restore
    balloon.free(memory + shadow)
  File "/usr/lib/python2.4/site-packages/xen/xend/balloon.py", line 166, in free
    raise VmError(
VmError: I need 524288 KiB, but dom0_min_mem is 262144 and shrinking to 262144 KiB would leave only 248744 KiB free.
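Assuming the target host actually has memory to spare, one possible remedy (a sketch only; the values shown are illustrative) is to balloon down Domain-0 on the target before retrying the migration, or to lower the dom0-min-mem setting in /etc/xen/xend-config.sxp so that xend is permitted to shrink Domain-0 further:

# Release memory from Domain-0 on the target host (value in MB)
xm mem-set Domain-0 512

# Alternatively, in /etc/xen/xend-config.sxp (value in MB):
(dom0-min-mem 256)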