On Tue, Feb 15, 2022 at 06:38:55PM +0200, Nir Soffer wrote:
> On Tue, Feb 15, 2022 at 5:54 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> Pick the nbdcopy --requests parameter to target an implicit buffer
> size of 64M inside nbdcopy. However, don't set nbdcopy --requests < 64.
>
> If request_size == 256K (the default) => requests = 256
> If request_size == 8M => requests = 64 (buffer size 512M)
>
>
> Considering the total bytes buffered makes sense. I did the same in another
> application that only reads from NBD using the libnbd async API. I'm using:
>
> max_requests = 16
> max_bytes = 2M
>
> So if you have small requests (e.g. 4k), you get 16 inflight requests per
> connection, and with 4 connections, 64 inflight requests on the storage side.
>
> But if you have large requests (256k), you get only 8 requests per connection,
> and 32 requests on the storage side.
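
(For clarity, the per-connection limit described above works out to something
like the following OCaml sketch; the names are illustrative, not the actual
code of that application:)

  (* Cap in-flight requests per connection by both a request count and a
     byte budget (the max_requests = 16 and max_bytes = 2M quoted above). *)
  let max_requests = 16
  let max_bytes = 2 * 1024 * 1024

  let inflight_per_connection ~request_size =
    min max_requests (max_bytes / request_size)

  (* 4k requests:   2M/4k = 512,  capped at 16 -> 16 per connection
     256k requests: 2M/256k = 8,  below the cap -> 8 per connection *)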
>
> This was tested in a read-only case on my laptop with a fast NVMe
> (Samsung 970 EVO Plus 1T), with a super fast NVMe on a Dell server,
> and with shared storage (NetApp iSCSI).
>
> With fast NVMe, limiting the maximum buffered bytes to 1M is actually
> ~10% faster, but with shared storage, using more requests is faster.
>
> What you suggest here will result in:
> small requests: 256 requests per connection, 1024 requests on the storage side
> large requests: 64 requests per connection, 256 requests on the storage side.
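
(The storage-side totals above are just the per-connection figure multiplied
by the number of connections, 4 in this comparison; a trivial sketch:)

  (* Requests in flight as seen by the storage, assuming every connection
     keeps its per-connection queue full. *)
  let storage_side ~requests_per_connection ~connections =
    requests_per_connection * connections

  (* small requests: 256 * 4 = 1024;  large requests: 64 * 4 = 256 *)
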
A note here: we're not using multi-conn when converting from VDDK,
because VDDK doesn't behave well:
https://github.com/libguestfs/virt-v2v/commit/bb0e698360470cb4ff5992e8e01a3165f56fe41e
> I don't think any storage can handle such a large number of connections better.
>
> I think we should test --requests 8 first; it may show a nice speedup compared
> to what we see in
> https://bugzilla.redhat.com/show_bug.cgi?id=2039255#c33
>
> Looks like in
> https://bugzilla.redhat.com/show_bug.cgi?id=2039255#c32
> we introduced 2 changes at the same time, which makes it impossible to tell
> the effect of any single change.
I couldn't measure any performance benefit from increasing the number
of requests, but it didn't have any downside either. Ming Xie also ran
a test and didn't see any benefit or loss.
The purpose of the patch (which I didn't explain well) was to ensure
that if we make the request-size larger, we don't blow up nbdcopy
memory usage too much. So the patch aims for a target amount of memory
consumed in nbdcopy buffers (64M), but conservatively never reduces the
number of buffers below the current setting (64).
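
Roughly, the sizing rule amounts to the following sketch (illustrative OCaml,
not the actual virt-v2v code):

  (* Aim for ~64M of implicit buffering inside nbdcopy, but never pass
     --requests a value below the current setting of 64. *)
  let target_buffer_size = 64 * 1024 * 1024
  let min_requests = 64

  let requests_for ~request_size =
    max min_requests (target_buffer_size / request_size)

  (* request_size = 256K -> --requests 256  (buffer ~64M)
     request_size = 8M   -> --requests 64   (buffer ~512M) *)
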
The intent is good, but I think we need to refine the actual sizes; that
can be done later.

Also, this would be better done in nbdcopy instead of virt-v2v, since it
would improve all callers of nbdcopy.
Nir