On Tue, Jan 11, 2022 at 08:00:57AM +0100, Laszlo Ersek wrote:
(The following thought occurred to me last evening.)
In modular v2v, we use multi-threaded nbdkit instances, and
multi-threaded nbdcopy instances. (IIUC.) I think that should result in
quite a bit of thrashing, on both source and destination disks, no? That
should be especially visible on HDDs, but perhaps also on SSDs
(dependent on request size as you mention above).
This is very possible. Also the HDD machine has only 2 cores / 4
threads (the SSD machine has 12 cores / 24 threads) so we are heavily
overcommitting software threads to hardware threads.
In contrast, in the VDDK to local disk case (which, if you recall,
performs fine) nbdcopy won't be using multiple threads. This is
because multi-conn is not enabled on the input (VDDK) side [see:
https://listman.redhat.com/archives/libguestfs/2021-December/msg00172.html]
so nbdcopy will use a single thread and a single NBD connection to
both input and output sides. Within nbdkit-vddk-plugin we're using
some threads to handle parallel requests on the same NBD connection
(using VDDK's Async method). Within nbdkit-file-plugin on the output
side it will also use multiple threads to handle parallel requests,
again on a single NBD connection. But the copy is generally
sequential.
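As a side note, whether a server advertises multi-conn can be checked
with nbdinfo from libnbd. This is only a sketch; the URI below is a
placeholder, not one of the endpoints from this thread:

```shell
# Check whether an NBD server advertises multi-conn.  nbdinfo --can
# exits with status 0 if the flag is advertised, non-zero otherwise
# (including when it cannot connect at all).
uri="nbd://localhost"   # placeholder URI
if command -v nbdinfo >/dev/null 2>&1; then
    if nbdinfo --can multi-conn "$uri"; then
        echo "multi-conn: yes (nbdcopy may open multiple connections)"
    else
        echo "multi-conn: no (nbdcopy will use a single connection)"
    fi
else
    echo "nbdinfo not installed; skipping the check"
fi
```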
The worst case is likely when both nbdcopy processes operate on the same
physical HDD (i.e., spinning rust).
qemu-img is single-threaded, so even if it reads from and writes to the
same physical hard disk, it only generates two "parallel" request
streams, which both the disk and the kernel's IO scheduler can cope
with more easily. According to the nbdcopy manual, the default thread
count is "number of processor cores available"; with a high thread
count, the "sliding window of requests" is likely indistinguishable
from real random access.
Also I (vaguely?) gather that nbdcopy bypasses the page cache (or does
it only sync automatically at the end? I don't remember).
Yes, both nbdcopy and nbdkit-file-plugin with cache=none (which we are
using) will attempt to minimize use of the page cache.
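For reference, a sketch of the kind of invocation meant here: serving a
local file with nbdkit's file plugin while asking it to avoid polluting
the page cache. The filename is a placeholder, and the block only runs
the command when both tools and the file are actually present:

```shell
# cache=none asks nbdkit-file-plugin to minimize page cache use (per its
# manual); "disk.img" is a placeholder, not a path from this thread.
serve_opts="cache=none"
if command -v nbdkit >/dev/null 2>&1 && [ -f disk.img ]; then
    # -U - uses a private Unix socket; --run substitutes its URI as $uri.
    nbdkit -U - file disk.img $serve_opts --run 'nbdinfo "$uri"'
else
    echo "nbdkit or disk.img not available; showing the intended options only"
fi
```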
If the page cache is avoided, then it has no chance to mitigate the
thrashing, especially on HDDs -- but even on SSDs, if
the drive's internal cache is not large enough (considering the
individual request size and the number of random requests flying in
parallel), the degradation should be visible.
I don't understand the mechanism by which this could happen. We're
reading and writing blocks which are much larger than the filesystem
block size (fs block size is probably 1K or 4K, minimum nbdcopy block
size in any test is 256K). And we read and write each block exactly
once, and we never revisit that data after reading/writing. Can the
page cache help?
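To make the access pattern in question concrete, here is a toy sketch
using dd: one sequential pass, each block read once and written once,
with a request size far larger than the filesystem block size. The
files and sizes are illustrative only:

```shell
# Emulate the copy pattern: 256K requests (the smallest request size in
# any test above), one sequential pass, no data ever revisited.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=256K count=4 status=none   # 1 MiB test input
dd if="$src" of="$dst" bs=256K status=none              # the copy itself
copied=$(stat -c %s "$dst")
cmp -s "$src" "$dst" && echo "copied $copied bytes"
rm -f "$src" "$dst"
```

Since every page is touched exactly once, any copy cached along the way
is dead weight, which is the crux of the question above.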
Can you tweak (i.e., lower) the thread count of both nbdcopy
processes; let's say to "1", for starters?
Using nbdcopy -T 1 and leaving --request-size at the default does
actually improve performance quite a bit, although not as much as
increasing the request size.
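For completeness, a hedged sketch of the two tuning knobs discussed in
this thread, per the nbdcopy manual: -T/--threads (default: number of
processor cores) and --request-size. "disk.img" is a placeholder and
the 32M request size is an illustrative value, not a measured optimum;
"null:" discards the output so only the read side is exercised:

```shell
request_size=$((32 * 1024 * 1024))   # illustrative value, not from the thread
if command -v nbdcopy >/dev/null 2>&1 && [ -f disk.img ]; then
    nbdcopy disk.img null:                               # defaults
    nbdcopy -T 1 disk.img null:                          # one copy thread
    nbdcopy --request-size=$request_size disk.img null:  # larger requests
else
    echo "nbdcopy or disk.img not available; commands shown for reference"
fi
```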
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v