Traditionally, if you did live migration (KVM to KVM), you had to
ensure that cache=none was set up front on all of the guest's disks.
This was because of quirks in how NFS works (I believe the
close-to-open consistency, and the fact that during live migration
both qemu processes have the file open), and we had to assume the
worst case: that a guest might be backed by NFS.
Because of this, when virt-v2v converts a guest to run on KVM using
libvirt, it sets cache=none on every disk.
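For illustration (the driver type depends on the target format chosen
for the guest, so qcow2 below is only an example), the generated
libvirt XML for each disk previously contained a driver element along
these lines:

    <driver name='qemu' type='qcow2' cache='none'/>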
This is no longer necessary with modern qemu.  If qemu supports the
drop-cache property of the file block driver, which libvirt detects
automatically for us, then libvirt live migration can tell qemu to
drop cached data at the right time, even if the disk is backed by
NFS.
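With this patch the cache attribute is simply omitted, so the same
(illustrative) element becomes:

    <driver name='qemu' type='qcow2'/>

Leaving the attribute out means libvirt uses the hypervisor default
caching mode (normally writeback for qemu) and relies on the
drop-cache mechanism above to keep NFS-backed live migration safe.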
Forcing cache=none also had a significant performance impact: in some
synthetic benchmarks it made guests 2 or 3 times slower.
Thanks: Ming Xie, Peter Krempa.
---
v2v/create_libvirt_xml.ml | 1 -
1 file changed, 1 deletion(-)
diff --git a/v2v/create_libvirt_xml.ml b/v2v/create_libvirt_xml.ml
index 05553c4f7d..5a1fba0fd6 100644
--- a/v2v/create_libvirt_xml.ml
+++ b/v2v/create_libvirt_xml.ml
@@ -336,7 +336,6 @@ let create_libvirt_xml ?pool source targets target_buses guestcaps
e "driver" [
"name", "qemu";
"type", t.target_format;
- "cache", "none"
] [];
(match pool with
| None ->
--
2.26.2