On Mon, Feb 14, 2022 at 3:01 PM Richard W.M. Jones <rjones@redhat.com> wrote:
On Mon, Feb 14, 2022 at 12:53:01PM +0100, Laszlo Ersek wrote:
> On 02/14/22 10:56, Richard W.M. Jones wrote:
> > This change slowed things down (slightly) for me, although the
> > slowdown is within the margin of error, so it probably made no real
> > difference.
> >
> > Before:
> >
> > $ time ./run virt-v2v -i disk /var/tmp/fedora-35.qcow2 -o rhv-upload -oc
> > https://ovirt4410/ovirt-engine/api -op /tmp/ovirt-passwd -oo rhv-direct
> > -os ovirt-data -on test14 -of raw
> > [ 0.0] Setting up the source: -i disk /var/tmp/fedora-35.qcow2
> > [ 1.0] Opening the source
> > [ 6.5] Inspecting the source
> > [ 10.5] Checking for sufficient free disk space in the guest
> > [ 10.5] Converting Fedora Linux 35 (Thirty Five) to run on KVM
> > virt-v2v: warning: /files/boot/grub2/device.map/hd0 references unknown
> > device "vda". You may have to fix this entry manually after
> > conversion.
> > virt-v2v: This guest has virtio drivers installed.
> > [ 57.0] Mapping filesystem data to avoid copying unused and blank areas
> > [ 59.0] Closing the overlay
> > [ 59.6] Assigning disks to buses
> > [ 59.6] Checking if the guest needs BIOS or UEFI to boot
> > [ 59.6] Setting up the destination: -o rhv-upload -oc
> > https://ovirt4410/ovirt-engine/api -os ovirt-data
> > [ 79.3] Copying disk 1/1
> > █ 100% [****************************************]
> > [ 89.9] Creating output metadata
> > [ 94.0] Finishing off
> >
> > real 1m34.213s
> > user 0m6.585s
> > sys 0m11.880s
> >
> >
> > After:
> >
> > $ time ./run virt-v2v -i disk /var/tmp/fedora-35.qcow2 -o rhv-upload -oc
> > https://ovirt4410/ovirt-engine/api -op /tmp/ovirt-passwd -oo rhv-direct
> > -os ovirt-data -on test15 -of raw
> > [ 0.0] Setting up the source: -i disk /var/tmp/fedora-35.qcow2
> > [ 1.0] Opening the source
> > [ 7.4] Inspecting the source
> > [ 11.7] Checking for sufficient free disk space in the guest
> > [ 11.7] Converting Fedora Linux 35 (Thirty Five) to run on KVM
> > virt-v2v: warning: /files/boot/grub2/device.map/hd0 references unknown
> > device "vda". You may have to fix this entry manually after
> > conversion.
> > virt-v2v: This guest has virtio drivers installed.
> > [ 59.6] Mapping filesystem data to avoid copying unused and blank areas
> > [ 61.5] Closing the overlay
> > [ 62.2] Assigning disks to buses
> > [ 62.2] Checking if the guest needs BIOS or UEFI to boot
> > [ 62.2] Setting up the destination: -o rhv-upload -oc
> > https://ovirt4410/ovirt-engine/api -os ovirt-data
> > [ 81.6] Copying disk 1/1
> > █ 100% [****************************************]
> > [ 91.3] Creating output metadata
> > [ 96.0] Finishing off
> >
> > real 1m36.275s
> > user 0m4.700s
> > sys 0m14.070s
>
> My ACK on Nir's v2 patch basically means that I defer to you on its
> review -- I don't have anything against it, but I understand it's
> a (perhaps temporary) workaround until we find a more sustainable (and
> likely much more complex) solution.
Sure, I don't mind taking this as a temporary solution. The code
itself is perfectly fine. The request size here is essentially an
optimization hint; it doesn't affect the architecture.
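
For example, the same hint can be adjusted when running nbdcopy by
hand (a sketch, assuming a libnbd build whose nbdcopy has the
--request-size option; the NBD socket paths here are made up):

  $ nbdcopy --request-size=4194304 \
      'nbd+unix:///?socket=/tmp/src.sock' \
      'nbd+unix:///?socket=/tmp/dst.sock'

This only changes how much data each NBD request carries; the copy
loop itself is unchanged.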
An architectural problem that affects both nbdkit & nbdcopy is that
NBD commands drive the nbdkit backend and the nbdcopy loop. If we
make the nbdcopy --request-size larger, NBD commands ask for more
data, so nbdkit-vddk-plugin makes larger VixDiskLib_ReadAsynch requests,
which at some point breaks the VMware server. (This is fairly easy to
solve in nbdkit-vddk-plugin or with a filter.)
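
For instance (untested, with hypothetical connection parameters), the
blocksize filter's maxdata limit should split any oversized client
read before VixDiskLib_ReadAsynch is called:

  $ nbdkit --filter=blocksize vddk \
      server=vcenter.example.com user=root password=+/tmp/vddk-passwd \
      thumbprint=xx:xx:xx vm=moref=vm-16 \
      file='[datastore1] Fedora/Fedora.vmdk' \
      maxdata=4M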
But nbdcopy needs to be reworked to make the input and output requests
separate, so that nbdcopy can coalesce and split blocks as it copies.
This is difficult.
Another problem I'm finding (e.g.
https://bugzilla.redhat.com/show_bug.cgi?id=2039255#c9) is that the
performance of the new virt-v2v is extremely specific to the input and
output modes and to the hardware and network configuration, for
reasons that I don't fully understand.
It would be interesting to test this patch in the same environment and
see how it affects the results.
Do we see the slowdown only when using VDDK? Maybe it is related to
the slow extent calls?
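One way to check (untested, with made-up connection details) would be
to time the extent calls directly against the VDDK source, e.g.:

  $ nbdkit -U - vddk \
      server=vcenter.example.com user=root password=+/tmp/vddk-passwd \
      thumbprint=xx:xx:xx vm=moref=vm-16 \
      file='[datastore1] Fedora/Fedora.vmdk' \
      --run 'time nbdinfo --map "$uri"'
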
Nir