practical libvirt migration guides/best practices

anyone have any good pointers for hand-rolling this? I have a bunch of VMs on a linux+zfs host that I want to move to another linux+zfs host. I can do it by hand, but figured I might as well get to grips with virsh migrate. That immediately exposes my assumptions about roles, and where I've implicitly crossed permission boundaries: my uid is in the libvirt group, and I've been making liberal use of zfs as root (it need not be zfs; I'm sure the same would be true with other volume/storage management).

All the guides from the big boys (IBMHat, SUSE, etc.) don't seem to touch on this, and go to great lengths about kernel tunables and whatnot, but I'm looking for something way more prosaic: no, I'm not adding SSH keys to the root account on the destination, nor am I allowing root access just for qemu+ssh://.
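
For concreteness, the sort of thing I mean (hostname, user, and VM name are made up) would be roughly:
Bash:
# live-migrate a guest whose disk is on local (non-shared) storage,
# connecting to the destination as an unprivileged user over SSH;
# --copy-storage-all streams the disk, but the destination needs a
# same-sized volume pre-created at the same path
virsh migrate --live --persistent --undefinesource --copy-storage-all \
    myvm qemu+ssh://me@otherhost/system

...and it's that qemu+ssh:// hop where the whole root-vs-libvirt-group question bites.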

Do folks have any such links? Share the goodness pretty please :geek:
:eng101:
 

koala

Ars Tribunus Angusticlavius
7,579
My feeling is that libvirt is a bit bare, and it's more normal to have something on top (Proxmox, KubeVirt, etc.) that streamlines the migration.

I assume this is more of a learning exercise. In that case I think you'll have to dig (but it will likely be quite interesting!). If this is not a learning exercise, I would consider moving to a more complete VM platform. You mentioned ZFS, so Proxmox would be a nice candidate; I believe migrating VMs there is a supported, documented, "easy" process.
 
I have integrated libvirt/qemu with Kerberos via SASL, and added my ID to the local authorization list in qemu.conf. This allows me to perform most, if not all, functions without having to be added to the libvirt POSIX group. I use iSCSI for disk allocations, and those disks need to be configured and accessible on both hypervisors for the migration to work. Also, all the same networks have to be configured and available on both hypervisors.
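
Roughly the shape of that setup, as a hedged sketch (the hostname and keytab path are examples, and I haven't reproduced the qemu.conf authorization bit here):
Code:
# /etc/libvirt/libvirtd.conf -- have libvirtd authenticate TCP clients via SASL
auth_tcp = "sasl"

# /etc/sasl2/libvirt.conf -- use Kerberos (GSSAPI) as the SASL mechanism
mech_list: gssapi
keytab: /etc/libvirt/krb5.tab

# with a valid Kerberos ticket, connect without root SSH keys on either end
virsh -c qemu+tcp://hypervisor.example.com/system list --all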
 
So after an awful lot of reading around the internetz this took a surprising (or not) turn:
Code:
[root@headnode (woodville) ~]# sdc-server list
HOSTNAME             UUID                                 VERSION    SETUP    STATUS      RAM  ADMIN_IP
worker0              a94f3738-0304-484e-8c77-d3157aaa645c     7.0     true   running    16383  192.168.100.35
worker1              b658dd03-fb43-6745-a033-ce4c9471fd4b     7.0     true   running    16383  192.168.100.36
headnode             f7e66ae2-8825-484f-9a86-d06e750ad875     7.0     true   running    16383  192.168.100.2
[root@headnode (woodville) ~]#

Cluster instantiation was absolutely trivial, and all the difficulties I've actually had up to this point have been because this is running inside KVM. If I can manage to get the thing to do a live migration (whilst in KVM) I'll be pumped, and then my very last possible outstanding feature would be PCIe passthrough.

 

malor

Ars Legatus Legionis
16,093
This reply is a month late, but:

I'm pretty sure that the virt-manager GUI (which doesn't look as nice as the GUI you're showing up there) has the ability to migrate live VMs from one host to another. I've never actually tried, and true live migration would probably require a pretty sophisticated network setup, but I definitely saw the ability to move VMs between hosts.

If you don't need live migration, and if virt-manager isn't obvious about transferring VMs that have been shut down, then you can do it manually without too much trouble. Libvirt sticks XML files with definitions for your virtual machines somewhere under /etc/libvirt (I don't have a box running right now, so I can't be more precise), so if you set up a new system that looks enough like the old system, you can just physically copy across the XML config files and the disk images to the new host, and start them there.

It's not automated, and you may have to edit the XML files by hand to get them working on the new host, but it shouldn't be fundamentally difficult. Fiddly and painstaking, yes, but not actually hard. I found libvirt's design to be nicely straightforward, so it was very easy to tinker with VM definitions and do things that the GUI didn't directly handle.
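
A minimal sketch of that manual move (VM name, host, and image path are made up; adjust for whatever storage you actually use):
Bash:
# on the source host: export the domain definition
virsh dumpxml myvm > myvm.xml

# copy the definition and the disk image to the new host (paths are examples)
scp myvm.xml newhost:/tmp/
scp /var/lib/libvirt/images/myvm.qcow2 newhost:/var/lib/libvirt/images/

# on the destination host: register the domain (edit the XML first if
# paths, bridges, or CPU model differ) and start it
virsh define /tmp/myvm.xml
virsh start myvm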

Remember that the really key thing about most VMs is their disk image file. The Linux kernel is very flexible, so if you move an image and then create a new VM that uses that image, it will often just work. Importing the images into, say, Proxmox would also probably be very easy, just a matter of copying the image files and then building a new VM in Prox that was pointed at that image as its primary disk.
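
If Proxmox were the target, a hedged sketch of that import (the VM ID, the storage name "local-zfs", and the image path are all made up) might look like:
Bash:
# create an empty VM shell, then attach the copied image as its primary disk
qm create 101 --name imported-vm --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 101 /var/lib/vz/images/myvm.qcow2 local-zfs
qm set 101 --scsi0 local-zfs:vm-101-disk-0 --boot order=scsi0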

It's way less complex and scary than it looks.
 
I have migrated a VM from one host to another via the virt-manager GUI. The nuance I found is that the VM has to be powered on in order to migrate; otherwise, you can "import" the VM if it is powered off. Since I use iSCSI for disks, all I would have to do to "import" the VM is sync the configs (the XML files, etc. that Malor referred to). With iSCSI, I do have to have the storage configured on the destination hypervisor as well. Also of note, migrating requires the same network to be configured on both the source and destination hypervisors. It is pretty much a "like for like" kind of config that's required.
 
Can you run it on Linux? Are you using SmartOS?

Which procedure are you following for installation?
That's a 3-node Triton cluster, all running in KVM on my current virt platform (Ubuntu on ZFS). The install into KVM was via their ISO installer, first the headnode and then the workers. The only thing it really needed was a thorough reading of the docs, as it has some strict requirements and things go wrong if you don't meet them. For example, the libvirt defaults for a generic Linux guest don't work; the installer just hangs if the chipset is wrong. That, and a bit of understanding of what the headnode actually does: the installer by default looks for a PXE server that the headnode starts (along with around 20 other zones), and the admin network I'd created via libvirt was breaking that because its DHCP server was enabled, giving a 50/50 chance of the installer on a new node not getting PXE. It does seem to support Linux worker nodes, but that's not particularly interesting to me; what I'm keen on here is the tight coupling of bhyve and ZFS, which speaks to what @malor was talking about.

It's been really slick so far, but (understandably) it's quite compute intensive. You need at least 2 nodes (1 headnode, 1 worker) for an MVP cluster and at least 3 to make the cluster HA. I've 'only' got two virtualisation nodes available to me, so if I nuke one for a bare-metal SmartOS install to actually test it properly I'll need to be creative; perhaps keep the headnode in KVM and repurpose another computer to see if I can do a bhyve+zfs live migration. Either that or hit eBay for a handful of micro OptiPlexes or something...
 
I'm pretty sure that the virt-manager GUI (which doesn't look as nice as the GUI you're showing up there) has the ability to migrate live VMs from one host to another. I've never actually tried, and true live migration would probably require a pretty sophisticated network setup, but I definitely saw the ability to move VMs between hosts.
As @brendan_kearney has just ninja'd me: yeah, I've just looked and, as he says, it requires shared storage and for the VM to be powered on.

As per the OP, what I was looking for is something tighter than what virt-manager can apparently do with ZFS. Everyone has said Proxmox, and perhaps I should look at what they have and how they do it, but it's always felt slightly not-right to me (can't really put a finger on why, tho), which is why I was pleasantly surprised to find that Joyent had opened SDC (now Triton).
 

koala

Ars Tribunus Angusticlavius
7,579
I think Proxmox gives more love to their web interface and having a nice console than to automation, for instance. Creating VMs from cloud images is more manual work than I'd like. They're also quirky (e.g. the numbering of VMs...), but I find it works well for very small use cases (like mine).

I'd prefer something more automatable like OpenStack or Triton, but they really take a significant chunk of resources, so Proxmox is good for now. Plus, it's got everything-on-ZFS (which OpenStack doesn't do, for example).
 

Burn24

Smack-Fu Master, in training
53
tl;dr: KVM libvirt VM migration has been easy for years, and I suggest not being so precious about root.

Back in the CentOS 6 era it was trivial to migrate a running VM (I think KVM as well as Xen...) between nodes with no, or a barely perceptible, skip. It was relatively easy to configure a pacemaker/corosync cluster that would migrate VM loads around at will, but there were definitely some caveats. Both host nodes needed to see the backing storage (DRBD, SAN...), and if the system definition mapped in USB devices, well, those don't magically teleport in meatspace, so migrations would fail.
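
For flavour, a minimal sketch of that kind of cluster resource (resource name, domain XML path, and node name are made up), using the ocf:heartbeat:VirtualDomain agent:
Bash:
# define the VM as a cluster resource that pacemaker is allowed to live-migrate
pcs resource create myvm ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/myvm.xml \
    migration_transport=ssh \
    meta allow-migrate=true

# move it to another node; with allow-migrate=true this becomes a live migration
pcs resource move myvm node2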

It was easy to export/import VM configurations with virsh dumpxml and define (iirc...). And, yes, the nodes could be configured to SSH as root between systems to migrate running state (memory); I recall there was another mechanism to transmit state, perhaps built into Xen, but I forget.

You can still do this today, I'm pretty sure. Within the past few years I had test nodes running libvirt and KVM VMs on Fedora and CentOS hosts, with the backing disk store on NFS, and I could live-migrate VMs between hosts without extra configuration... sometimes (sometimes the VM didn't migrate successfully due to CPU configuration, but I didn't sweat it since I was migrating between nodes that differed significantly in hardware). All of this was just using stock CentOS/Fedora installs, libvirt, and virt-manager. I do believe I was prompting musical chairs for VMs between hosts just in the virt-manager GUI. I got a surprising amount of use out of it, both for work and as a hobbyist; I wish it were more popular and better maintained, but that was a different time. I am pretty sure any modern clown-suit front end you're using to juggle VMs is, deep down, just using the regular built-in libvirt/KVM stuff. I am pretty sure RHEL/Fedora gave up on virt-manager for some kind of Cockpit thing.
 
Back over with Triton, I got my volume stuff all sorted out (you know, actually Read TFM) and 5 minutes later got a bhyve Ubuntu up




and a couple of minutes after that:


Bash:
[root@headnode (woodville) ~]# date;sdc-migrate migrate -n f1740666-e11e-b749-9895-ccd408ea1ea3   1e3fae65-0288-45af-ba8a-eba3f8f45dd7;date
24 April 2024 at 15:26:02 UTC
# Migration begin running in job a2427d2e-ca33-478d-b6fb-f3b401322c86
 - reserving instance
 - syncing data
  - running: 100%  137.7MB/s
 - syncing data
 - stopping the instance
 - syncing data
  - running: 96%  2.0MB/s
 - switching instances
 - reserving the IP addresses for the instance
 - setting up the target filesystem
 - hiding the original instance
 - promoting the migrated instance
 - removing sync snapshots
 - starting the migrated instance
OK - switch was successful
24 April 2024 at 15:27:21 UTC
[root@headnode (woodville) ~]# sdc-migrate finalize 1e3fae65-0288-45af-ba8a-eba3f8f45dd7
Done - the migration is finished.
[root@headnode (woodville) ~]#

That it works in KVM is a miracle but this seems like a goer to me. A bit more playing and then I think I am going to dip my foot in the water with bare metal...
 

chalex

Ars Legatus Legionis
11,286
Subscriptor++
I use the Proxmox web UI for this; I don't think it does much that's Proxmox-specific. There is a disk image and a config file per VM, and the CLI version is "qm migrate [vm_id] [target]".
A Proxmox "cluster" uses corosync underneath (with its own clustered config filesystem), so I think it's all pretty standard.

But yes, obviously you need some kind of passwordless SSH or equivalent between the cluster machines to move the VMs.
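
For reference, a hedged example of that CLI form (the VM ID and node name are made up); --online keeps the guest running during the move, and --with-local-disks also copies disks that aren't on shared storage:
Bash:
qm migrate 101 pve2 --online --with-local-disks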
 