On 16.11.2017 15:47, Stefan Hajnoczi wrote:
On Wed, Nov 15, 2017 at 11:52:46AM +0000, Richard W.M. Jones wrote:
> [CC to qemu-devel since I'm obviously doing something wrong here,
> I'm just not sure what.]
>
> I was getting ready to add multiple threads to ‘qemu-img convert’ (the
> longest part of v2v conversions) when I noticed that it had them
> already! (To be fair, this was only added in February of this year, so
> no wonder we didn't notice.)
>
> To enable parallel conversion we would need to use the ‘-W’ option
> (which allows out-of-order writes to the destination) and the
> ‘-m <#num-coroutines>’ option to select the degree of parallelism
> (default 8). The documentation refers to coroutines, but I verified
> from strace that it is using real threads.
The threads you observed are the thread pool that performs the
preadv(2)/pwritev(2) syscalls. The Linux AIO API could be used instead,
and it does not use threads for read and write operations. So these
threads are just an implementation detail: the caller doing the reads
and writes is not multi-threaded but rather a set of coroutines
executing in a single thread.
The qemu-img convert logic runs in coroutines, all executing on a single
main loop thread; see qemu-img.c:convert_do_copy():
    for (i = 0; i < s->num_coroutines; i++) {
        s->co[i] = qemu_coroutine_create(convert_co_do_copy, s);
        s->wait_sector_num[i] = -1;
        qemu_coroutine_enter(s->co[i]);
    }

    while (s->running_coroutines) {
        main_loop_wait(false);
    }
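
To make the division of labor concrete, here is a minimal sketch of the
worker-pool pattern described above (the queue helpers are hypothetical
and completion signalling is elided; this is not QEMU's actual
implementation, which lives in util/thread-pool.c):

#include <pthread.h>
#include <sys/types.h>
#include <sys/uio.h>

/* One queued I/O request handed from the coroutine side to a worker. */
struct io_request {
    int fd;
    struct iovec *iov;
    int iovcnt;
    off_t offset;
    ssize_t result;
};

/* Hypothetical queue helpers: a real pool uses a mutex/condvar queue
 * and notifies the main loop on completion (QEMU uses an EventNotifier). */
extern struct io_request *dequeue_request(void);      /* blocks while idle */
extern void complete_request(struct io_request *req); /* wakes the submitter */

static void *worker_thread(void *opaque)
{
    for (;;) {
        struct io_request *req = dequeue_request();
        /* The blocking syscall runs here, off the main-loop thread. */
        req->result = preadv(req->fd, req->iov, req->iovcnt, req->offset);
        complete_request(req);
    }
    return NULL;
}

/* Spawning a fixed number of workers produces the threads visible in
 * strace, even though the submitting side is single-threaded. */
static void start_worker_pool(int nthreads)
{
    for (int i = 0; i < nthreads; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker_thread, NULL);
        pthread_detach(tid);
    }
}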
> I did some testing to see what effect this has. For this I used a
> large guest image which is approximately a third full of random data
> (the rest being sparsely allocated):
>
> Source format: raw
> Source virtual size: 100 GB
> Source used space: 31 GB
> Target format: qcow2
> Version: qemu-img-2.10.0-7.fc28.x86_64
> Conversion command:
> rm -f /to/target
> time qemu-img convert [-W] [-m ##] -f raw source.img -O qcow2 /to/target
>
> Source and target are regular files on two different disks. The test
> machine is a Xeon E5 with 16 real cores.
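>
> (For anyone reproducing this: one way to build a comparable test image
> is shown below. These commands are illustrative, not necessarily the
> ones used here.
>
>   truncate -s 100G source.img
>   dd if=/dev/urandom of=source.img bs=1M count=31744 conv=notrunc
>
> That gives a 100 GB sparse raw file with roughly 31 GB of random data
> allocated.)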
>
> ----------------------------------------------------------------------
> Non-preallocated output
> (times are in seconds)
>
>                without -W         -W
>   -m 1                153          -
>   -m 4                155         157
>   -m 8 [default]      158         231
>   -m 16 [max]         166         166
> ----------------------------------------------------------------------
>
> The documentation for ‘-W’ notes that this is only recommended for
> preallocated outputs (which the test above does not use). So let's
> try using a preallocated qcow2 output.
>
> Conversion command:
> # the same target file is reused each time
> time qemu-img convert -n [-W] [-m ##] -f raw source.img -O qcow2 /to/target
>
> ----------------------------------------------------------------------
> Preallocated output
> (times are in seconds)
>
>                without -W         -W
>   -m 1                147          -
>   -m 4                146         145
>   -m 8 [default]      146         199
>   -m 16 [max]         147         146
> ----------------------------------------------------------------------
>
> Based on this there seems to be some issue with the ‘-W’ option. I
> even thought I might have it backwards, but checking the code it does
> seem that ‘-W’ enables (rather than disables) out-of-order writes.
> There is also some bizarre interaction between ‘-W’ and ‘-m 8’.
Interesting. Did you perform multiple runs of each setting to verify
that the benchmark results are stable, with little run-to-run variance?
Which command-line did you use to create the preallocated qcow2 file?
Are the source and target files on the same file system and host block
device? The benefit of using multiple coroutines depends on the
performance characteristics of the source and target files.
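
(For reference, qcow2 preallocation is requested at image creation time;
the thread does not say which options were used, but a fully
preallocated target could be created along these lines:

    qemu-img create -f qcow2 -o preallocation=full /to/target 100G

The preallocation option also accepts metadata and falloc, which have
quite different cost profiles for a benchmark like this.)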
I don't think this qemu-img convert mode has been heavily tested, so it
wouldn't be surprising if you encounter unexpected behavior. Hopefully
it can be fixed to get even better performance.
The scenario in which I tested this patch was reading a QCOW2 image from
an NFS server (via libnfs) and writing to a raw iSCSI target (via
libiscsi), with neither being cached on the local host.
AFAIK all writes to the same QCOW2 serialize because of the s->lock that
is held during the write. So it's not surprising that there is no
benefit from multiple coroutines as long as reading from the raw file
involves no delay, which is likely the case here thanks to the OS's
readahead.
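
For context, the lock in question is the per-image coroutine mutex in
the qcow2 driver. Schematically (a condensed sketch of the shape of the
write path in block/qcow2.c, not the literal source, and it only
compiles inside the QEMU tree):

#include "qemu/osdep.h"
#include "block/block_int.h"
#include "qcow2.h"   /* BDRVQcow2State, which embeds the CoMutex */

/* Sketch: every write coroutine takes s->lock before allocating
 * clusters and updating L2 tables, so concurrent writers to the same
 * qcow2 image pass through this section one at a time. */
static void coroutine_fn qcow2_write_sketch(BlockDriverState *bs)
{
    BDRVQcow2State *s = bs->opaque;

    qemu_co_mutex_lock(&s->lock);
    /* cluster allocation and metadata updates happen under the lock */
    qemu_co_mutex_unlock(&s->lock);
}

A raw source file has no such lock, so reads from it never serialize on
one.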
Peter