We currently don't generate any @check attribute for the /domain/cpu
element, which causes the following libvirtd behavior
<https://libvirt.org/formatdomain.html#cpu-model-and-topology>:
  Once the domain starts, libvirt will automatically change the check
  attribute to the best supported value to ensure the virtual CPU does not
  change when the domain is migrated to another host
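(For illustration only, with the attributes and model name taken from the
case discussed below and the details varying with the source domain: when
the source specifies a CPU model, the element we currently emit looks
roughly like

  <cpu match='minimum'>
    <model>qemu64</model>
  </cpu>

i.e. without any @check attribute, leaving the choice entirely to
libvirtd.)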
Vera Wu reports that in practice, at least when the source CPU model is
explicitly "qemu64", libvirtd sets @check='partial'. That's defined
as:
  Libvirt will check the guest CPU specification before starting a
  domain
This is a problem: the default "qemu64" CPU model exposes the SVM CPU
flag, and that's unsupportable on Intel hosts. SVM is the AMD counterpart
of VT-x; IOW, the flag effectively advertises AMD-specific nesting to
guests.
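(Concretely, and only as a sketch, libvirtd rewrites the element above
into something like

  <cpu match='minimum' check='partial'>
    <model>qemu64</model>
  </cpu>

where the check='partial' attribute is the one that matters below.)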
With @check='partial', libvirt prevents the converted domain from starting
on Intel hosts; but with @check='none',
  Libvirt does no checking and it is up to the hypervisor to refuse to
  start the domain if it cannot provide the requested CPU. With QEMU this
  means no checking is done at all since the default behavior of QEMU is
  to emit warnings, but start the domain anyway.
We don't care about the migratability of the converted domain, so relax
the libvirtd checks by generating the @check='none' attribute.
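(With the patch below, the same element carries the attribute explicitly,
e.g.

  <cpu match='minimum' check='none'>
    <model>qemu64</model>
  </cpu>

so libvirtd no longer enforces its own CPU check; per the documentation
quoted above, QEMU at most warns about the unsupportable flag and starts
the guest anyway.)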
Consider adding @check='none' in two cases:
(1) When the source domain specifies a CPU model.
Generating @check='none' in this case fixes the issue reported by Vera.
(2) When the source domain does not specify a CPU model, and the guest OS
is assumed to work well with the default CPU model.
Generating @check='none' in this case is actually a no-op (see also the
sketch after the examples below). Going from "no CPU element" to
"<cpu check='none'/>" does not change how libvirtd augments the domain
config. Namely,
(2.1) for x86 guests, we get

  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>
  </cpu>

either way,

(2.2) for aarch64 guests, we get

  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>cortex-a15</model>
  </cpu>

either way.
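(Case (2) therefore needs no code change for correctness. If we ever
wanted to emit the element explicitly in that case as well, a minimal
sketch, assuming a DOM-style helper e that takes an element name, an
attribute list and a child list, and a mutable list body collecting the
domain's child elements (both named here purely for illustration), might
look like

  (* Hypothetical, not part of this patch: emit a bare <cpu check='none'/>
     element when the source reports no CPU model at all. *)
  if source.s_cpu_model = None then
    List.push_back body (e "cpu" ["check", "none"] []);

but, per the above, the domain config that libvirtd ends up with would be
the same either way.)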
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2107503
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
---
output/create_libvirt_xml.ml | 1 +
1 file changed, 1 insertion(+)
diff --git a/output/create_libvirt_xml.ml b/output/create_libvirt_xml.ml
index 531a4f75bf3e..0343d3194268 100644
--- a/output/create_libvirt_xml.ml
+++ b/output/create_libvirt_xml.ml
@@ -192,6 +192,7 @@ let create_libvirt_xml ?pool source inspect
List.push_back cpu_attrs ("mode", "host-passthrough");
| Some model ->
List.push_back cpu_attrs ("match", "minimum");
+ List.push_back cpu_attrs ("check", "none");
(match source.s_cpu_vendor with
| None -> ()
| Some vendor ->
--
2.19.1.3.g30247aa5d201