On Sun, Jan 30, 2022 at 8:29 PM Richard W.M. Jones <rjones(a)redhat.com> wrote:
On Sun, Jan 30, 2022 at 06:18:03PM +0200, Nir Soffer wrote:
> On Sun, Jan 30, 2022 at 11:10 AM Richard W.M. Jones <rjones(a)redhat.com> wrote:
> >
> > On Sun, Jan 30, 2022 at 12:45:37AM +0200, Nir Soffer wrote:
> > > On Fri, Jan 28, 2022 at 10:37 PM Richard W.M. Jones <rjones(a)redhat.com> wrote:
> > > > + .get_preferred_block_size = nbd_ops_get_preferred_block_size,
> > >
> > > Why preferred block size and not minimum block size? For example, if we
> > > write 256k when the minimum block size is 64k, wouldn't the qemu block
> > > layer handle the write properly, creating 4 compressed clusters?
> >
> > My theory was that if the destination prefers a particular block size,
> > and we're going to all this effort anyway, we might as well use the
> > preference. For the qcow2/compress filter the two values are
> > identical.
> >
> > > When not using a compress filter, qemu-nbd reports a block size of 4k,
> > > and using this value will kill performance.
> >
> > Not sure I understand?
>
> I think this was a mistake - if we use the preferred size only for
> alignment, not for limiting the size of the requests, it should be OK
> to use the preferred block size.
Got it.
Anyhow, let's discuss this in more detail if I ever get a patch series
that works! We probably need to make the various block sizes into a
configuration option.
I think we can avoid more configuration and use something like:

  min_extent_size = max(src.min_block_size, dst.min_block_size)  // or preferred size
  request_size = ROUND_UP(request_size, min_extent_size)
  sparse = ROUND_UP(sparse, dst.min_block_size)