On Wed, Jul 20, 2022 at 01:09:13PM +0200, Laszlo Ersek wrote:
> We currently don't generate any @check attribute for the /domain/cpu
> element, which causes the following libvirtd behavior
> <https://libvirt.org/formatdomain.html#cpu-model-and-topology>:
>
> > Once the domain starts, libvirt will automatically change the check
> > attribute to the best supported value to ensure the virtual CPU does
> > not change when the domain is migrated to another host
>
> Vera Wu reports that in practice, at least when the source CPU model is
> explicitly "qemu64", libvirtd sets @check='partial'. That's defined as:
I think my main confusion is why the _source_ model is "qemu64".
Doesn't that imply the source hypervisor is qemu, and so we shouldn't
be using virt-v2v at all? In fact, from the BZ:
# virt-v2v -i libvirt -ic qemu:///system esx6.7-rhel8.6-nvme-disk ...
(Vera is doing this for a good reason - she wants to test the -oo
compressed option, and using a local qemu guest is the quickest way.)
Anyway, let's go with the assumption that this is a bad idea in
general, but since we often do it for testing purposes we should try
to make it work.
> > Libvirt will check the guest CPU specification before starting a
> > domain
>
> This is a problem: the default "qemu64" CPU model exposes the SVM CPU
> flag, and that's unsupportable on Intel hosts. SVM is the AMD
> counterpart of VT-x; IOW, the flag effectively advertises AMD-specific
> nesting to guests.
>
> With @check='partial', libvirt prevents the converted domain from
> starting on Intel hosts; but with @check='none',
>
> > Libvirt does no checking and it is up to the hypervisor to refuse to
> > start the domain if it cannot provide the requested CPU. With QEMU
> > this means no checking is done at all since the default behavior of
> > QEMU is to emit warnings, but start the domain anyway.
>
> We don't care about the migratability of the converted domain, so relax
> the libvirtd checks, by generating the @check='none' attribute.
For testing we don't care, but in general -o libvirt conversions we
might care about migratability.
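For reference, with the patch applied, case (1) below would presumably produce an element roughly like this in the generated domain XML (a sketch only; the "qemu64" model name is just the example from Vera's report, and any other attributes virt-v2v emits, e.g. for the CPU vendor, are omitted here):

```xml
<cpu match='minimum' check='none'>
  <model>qemu64</model>
</cpu>
```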
> Consider adding @check='none' in two cases:
>
> (1) When the source domain specifies a CPU model.
>
> Generating @check='none' in this case fixes the issue reported by Vera.
>
> (2) When the source domain does not specify a CPU model, and the guest
> OS is assumed to work well with the default CPU model.
>
> Generating @check='none' in this case is actually a no-op. Going from
> "no CPU element" to "<cpu check='none'/>" does not change how libvirtd
> augments the domain config. Namely,
>
> (2.1) for x86 guests, we get
>
>   <cpu mode='custom' match='exact' check='none'>
>     <model fallback='forbid'>qemu64</model>
>   </cpu>
>
> either way,
>
> (2.2) for aarch64 guests, we get
>
>   <cpu mode='custom' match='exact' check='none'>
>     <model fallback='forbid'>cortex-a15</model>
>   </cpu>
>
> either way.
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2107503
>
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> ---
>  output/create_libvirt_xml.ml | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/output/create_libvirt_xml.ml b/output/create_libvirt_xml.ml
> index 531a4f75bf3e..0343d3194268 100644
> --- a/output/create_libvirt_xml.ml
> +++ b/output/create_libvirt_xml.ml
> @@ -192,6 +192,7 @@ let create_libvirt_xml ?pool source inspect
>        List.push_back cpu_attrs ("mode", "host-passthrough");
>     | Some model ->
>        List.push_back cpu_attrs ("match", "minimum");
> +      List.push_back cpu_attrs ("check", "none");
So should we make this attribute conditional on the source CPU model
being qemu64, just so the change is minimal and we don't break
migratability for non-test cases?
Rich.
>    (match source.s_cpu_vendor with
>     | None -> ()
>     | Some vendor ->
> --
> 2.19.1.3.g30247aa5d201
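The conditional variant suggested above might look something like the following against the same hunk (a hypothetical sketch, not a tested change; whether a plain string comparison on the model name is the right condition, and whether other default models should be included, would need checking):

```ocaml
| Some model ->
   List.push_back cpu_attrs ("match", "minimum");
   (* Hypothetical: relax libvirt's CPU checking only for the
      default "qemu64" model, so conversions whose source reports
      a real CPU model keep libvirt's migratability checks. *)
   if model = "qemu64" then
     List.push_back cpu_attrs ("check", "none");
```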
_______________________________________________
Libguestfs mailing list
Libguestfs(a)redhat.com
https://listman.redhat.com/mailman/listinfo/libguestfs
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v