On Mon, Mar 26, 2018 at 10:59 AM Richard W.M. Jones <rjones@redhat.com> wrote:
On Sun, Mar 25, 2018 at 08:05:14PM +0000, Nir Soffer wrote:
> On Sun, Mar 25, 2018 at 2:41 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> When a vm is running, vdsm monitors the allocated space and asks the SPM
> host to allocate more space when the highest write offset is too high, so
> the disk looks like a thin provisioned disk to the vm. This mechanism is
> not available for storage operations, so they all specify initial_size
> when converting images to qcow2 format.

This is going to cause problems for backup applications too.  In
general there is no way for them to estimate the amount of data to be
transferred.

To back up a chain of images, you need to reconstruct the chain remotely
on the machine doing the restore, since the qcow2 header must have the
correct backing file when uploaded. So I guess backup vendors will build
the chain and upload the real files, in which case this should not be an
issue.
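
For example, rebuilding one link of the chain on the restore side could
look like this (a rough sketch; the file names are hypothetical):

    import subprocess

    # Create the top image against the already-restored parent, so the
    # qcow2 header records the correct backing file before the upload.
    subprocess.check_call([
        "qemu-img", "create",
        "-f", "qcow2",
        "-o", "backing_file=parent.qcow2,backing_fmt=qcow2",
        "top.qcow2",
    ])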

If they want to do something more fancy, like creating an empty qcow2 file
to set up the backing file and streaming the data separately, they can
estimate the required allocation from the virtual size and the number of
64k guest clusters (assuming the standard cluster size).

See how vdsm estimates this since 4.1:
https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/qcow2.py
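
For reference, a minimal sketch of such an estimate (not vdsm's actual
code; it assumes the standard 64k cluster size and uses a coarse bound
for the refcount metadata):

    CLUSTER_SIZE = 64 * 1024  # standard qcow2 cluster size

    def estimate_qcow2_size(virtual_size, guest_data_size):
        # Clusters needed for the guest data being streamed.
        data_clusters = -(-guest_data_size // CLUSTER_SIZE)
        # One 8-byte L2 entry per virtual cluster, packed into
        # one-cluster L2 tables (8192 entries each).
        virtual_clusters = -(-virtual_size // CLUSTER_SIZE)
        l2_tables = -(-virtual_clusters // (CLUSTER_SIZE // 8))
        # Header, L1 table and refcount metadata are small; a few
        # extra clusters are a coarse upper bound for this sketch.
        metadata_clusters = l2_tables + 3
        return (data_clusters + metadata_clusters) * CLUSTER_SIZE

For a 100 GiB disk with 10 GiB of guest data this gives roughly 10 GiB
plus ~13 MiB of metadata.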

> If there is no way to estimate the size and you must allocate the entire
> image, we can add a shrink step in upload finalization, shrinking the
> image to its optimal size. We are already doing this in several flows
> that cannot estimate the final disk size and therefore over-allocate.

I guess we'll need something like that.

I think we can implement this for 4.2.z. We will open a new RFE to track it.
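
The optimal size can be measured after the upload with qemu-img, along
these lines (a sketch only, assuming the disk is on a block volume that
is rounded up to oVirt's 128 MiB LVM extents):

    import json
    import subprocess

    def optimal_size(path, extent=128 * 1024**2):
        # "image-end-offset" is the highest offset in use by the
        # image; round it up to the volume extent size before
        # reducing the volume.
        out = subprocess.check_output(
            ["qemu-img", "check", "--output", "json", path])
        end = json.loads(out)["image-end-offset"]
        return -(-end // extent) * extent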

Nir