On Sat, Jul 3, 2021 at 4:53 PM Richard W.M. Jones <rjones(a)redhat.com> wrote:
> On Sat, Jul 03, 2021 at 04:34:09PM +0300, Nir Soffer wrote:
> > For some reason I did not see this change in the mailing list:
> >
> > https://github.com/libguestfs/virt-v2v/commit/18084f90d9dd9092831cb348703...
> I didn't post it ... However the commit on its own is correct because
> -oa preallocated was rejected with an error. But see below because I
> do intend to add the feature back (unbroken).
> > The commit message claims:
> >
> > > Using -oa preallocated with -o rhv-upload always gave an error. We
> > > should be able to implement this properly in modular virt-v2v, but as
> > > this option did nothing here remove it to simplify things.
> >
> > But I used -oa preallocated and -oa sparse and they worked fine after
> > removing the unhelpful validation and using:
> >
> >     sparse=params["output_sparse"]
> >
> > The code was already broken even before this change with block storage
> > domains (iSCSI/FC) and raw format (the default), since raw-sparse is
> > not supported on block-based storage, but now there is no way to fix
> > the code.
> >
> > We can always use qcow2-sparse - this combination is the most useful
> > one in RHV - but a raw preallocated volume can give better performance
> > or reliability, and raw is still the default format for block storage
> > in RHV.
> >
> > Since RHV does not select the image format and allocation policy for
> > the user, and does not expose the supported combinations or the system
> > defaults via the API, virt-v2v must leave the decision to the user of
> > the program.
> In modular virt-v2v[1] we will be using nbdcopy to copy to a disk
> pipeline. For rhv-upload it will actually make no difference. It will
> work the same as the current code, ie. nbdkit + python +
> rhv-upload-plugin.
>
> However I was actually thinking we could get nbdcopy to use the
> --allocated flag, essentially pushing zeroes (or hopefully
> NBD_CMD_WRITE_ZEROES commands, not actual zeroes) through the pipeline.
> Now that I think about it a bit more, this seems like it's going to be
> somewhat inefficient as far as imageio is concerned.
Pushing actual zeroes will make performance horrible, and it creates
fully allocated images for no reason on the oVirt side. We can mitigate
the allocation issue by enabling zero detection in qemu-nbd on the oVirt
side.
> We can certainly put the -oa flag back again, passing it through
> params["output_sparse"]. How does it fail if we select the wrong
> combination? - can we check it in precheck? Will there ever be a way
> of querying what combinations are supported?
Currently creating the disk fails for raw-sparse on block storage with
an unhelpful HTTP error (maybe 409 Conflict). Since this happens only
after converting the image, it is not great.

If we move disk creation to the precheck step, virt-v2v can fail very
quickly.
However, to create a qcow2 image of the right size, we need to create
the disk after converting the remote disk to the overlay, by running

    qemu-img measure -f qcow2 -O qcow2 --output json overlay.qcow2

and passing the "required" size to the plugin. The plugin will use this
value for the initial_size argument when creating the disk. This is how
the oVirt upload_disk.py example does it:
https://github.com/oVirt/ovirt-engine-sdk/blob/37d284aede10fd55a9a8639561...
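
Roughly, what I have in mind is this minimal sketch (the helper names
and the connection handling are made up; the SDK calls follow the
upload_disk.py example):

    import json
    import subprocess

    import ovirtsdk4.types as types

    def measure_required_size(overlay):
        # Ask qemu-img how much space the converted qcow2 image needs.
        out = subprocess.check_output([
            "qemu-img", "measure", "-f", "qcow2", "-O", "qcow2",
            "--output", "json", overlay])
        return json.loads(out)["required"]

    def create_disk(connection, sd_name, disk_name, provisioned_size, overlay):
        # Create a qcow2-sparse disk, passing the measured size as
        # initial_size so engine allocates enough space up front.
        disks_service = connection.system_service().disks_service()
        return disks_service.add(
            types.Disk(
                name=disk_name,
                format=types.DiskFormat.COW,
                sparse=True,
                provisioned_size=provisioned_size,
                initial_size=measure_required_size(overlay),
                storage_domains=[types.StorageDomain(name=sd_name)]))
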
Another way to handle this issue is to always use
initial_size=provisioned_size (what we have now), and after the upload
finishes, reduce the disk to the optimal size using the reduce action:

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/reduce...

This may cause the import to fail if the storage domain is too full to
create the disk fully allocated. Using "qemu-img measure" will work in
these cases.
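
The reduce call itself would be something like this sketch (modeled on
the reduce_disk.py example above; error handling omitted, and I did not
verify the exact SDK calls or waiting logic):

    import time

    import ovirtsdk4.types as types

    def reduce_disk(connection, disk_id):
        # Ask engine to reduce the disk to its optimal size, then wait
        # until the disk is unlocked again before using it.
        disk_service = (connection.system_service()
                        .disks_service()
                        .disk_service(disk_id))
        disk_service.reduce()
        while disk_service.get().status != types.DiskStatus.OK:
            time.sleep(1)
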
We have an oVirt bug for providing the possible combinations:

https://bugzilla.redhat.com/1829009

I think this is the right way to handle this issue. The precheck step
can use this to validate the arguments or choose good default values
transparently.
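
For example, precheck could start with something like this rough sketch
(the hard-coded rule only covers what we know today; once the bug above
is fixed we could query engine for the supported combinations instead):

    import ovirtsdk4.types as types

    # Combinations we know are invalid today; with bug 1829009 fixed we
    # could ask engine instead of hard-coding this.
    BLOCK_STORAGE = (types.StorageType.ISCSI, types.StorageType.FCP)

    def check_disk_options(storage_type, disk_format, sparse):
        if (storage_type in BLOCK_STORAGE
                and disk_format == types.DiskFormat.RAW
                and sparse):
            raise RuntimeError(
                "raw sparse disks are not supported on block storage, "
                "use -oa preallocated or the qcow2 format")
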
Nir