On Wed, Aug 5, 2020 at 3:47 PM Richard W.M. Jones <rjones@redhat.com> wrote:
Here are some results anyway. The command I'm using is:
$ ./nbdkit -r -U - vddk \
libdir=/path/to/vmware-vix-disklib-distrib \
user=root password='***' \
server='***' thumbprint=aa:bb:cc:... \
vm=moref=3 \
file='[datastore1] Fedora 28/Fedora 28.vmdk' \
--run 'time /var/tmp/threaded-reads $unixsocket'
Source for threaded-reads is attached.
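For anyone reading without the attachment, here is a rough sketch of what
such a client might look like using the libnbd synchronous API. This is
not the attached source; NR_MULTI_CONN, NR_CYCLES and in particular the
64K request size are placeholder assumptions (tests (1) and (2) below use
NR_MULTI_CONN = 1, test (3) uses 8):

/* Sketch of a threaded-reads style client.  Build with something like:
 *   gcc threaded-reads.c -o threaded-reads \
 *       $(pkg-config --cflags --libs libnbd) -lpthread
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <pthread.h>
#include <libnbd.h>

#define NR_MULTI_CONN 8            /* parallel NBD connections */
#define NR_CYCLES     10000        /* synchronous preads per connection */
#define REQUEST_SIZE  (64 * 1024)  /* request size: an assumption, not stated above */

static const char *unixsocket;

static void *
worker (void *arg)
{
  struct nbd_handle *nbd;
  char buf[REQUEST_SIZE];
  int64_t size;
  unsigned i;

  (void) arg;

  /* Each thread opens its own connection to the nbdkit Unix socket. */
  nbd = nbd_create ();
  if (nbd == NULL || nbd_connect_unix (nbd, unixsocket) == -1) {
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }
  size = nbd_get_size (nbd);

  /* Issue NR_CYCLES synchronous reads at aligned pseudo-random offsets. */
  for (i = 0; i < NR_CYCLES; ++i) {
    uint64_t offset = (rand () % (size / REQUEST_SIZE)) * REQUEST_SIZE;
    if (nbd_pread (nbd, buf, sizeof buf, offset, 0) == -1) {
      fprintf (stderr, "%s\n", nbd_get_error ());
      exit (EXIT_FAILURE);
    }
  }

  nbd_close (nbd);
  return NULL;
}

int
main (int argc, char *argv[])
{
  pthread_t threads[NR_MULTI_CONN];
  size_t i;

  if (argc != 2) {
    fprintf (stderr, "usage: %s UNIXSOCKET\n", argv[0]);
    exit (EXIT_FAILURE);
  }
  unixsocket = argv[1];

  for (i = 0; i < NR_MULTI_CONN; ++i)
    pthread_create (&threads[i], NULL, worker, NULL);
  for (i = 0; i < NR_MULTI_CONN; ++i)
    pthread_join (threads[i], NULL);

  return 0;
}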
(1) Existing nbdkit VDDK plugin.
NR_MULTI_CONN = 1
NR_CYCLES = 10000
Note this is making 10,000 pread requests.
real 1m26.103s
user 0m0.283s
sys 0m0.571s
(2) VDDK plugin patched to support SERIALIZE_REQUESTS (see the note after the test results).
NR_MULTI_CONN = 1
NR_CYCLES = 10000
Note this is making 10,000 pread requests.
real 1m26.755s
user 0m0.230s
sys 0m0.539s
(3) VDDK plugin same as in (2).
NR_MULTI_CONN = 8
NR_CYCLES = 10000
Note this is making 80,000 pread requests in total.
real 7m11.729s
user 0m2.891s
sys 0m6.037s
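For context on what changed between (1) and (2) on the plugin side: in
nbdkit C-plugin terms it should amount to the declared thread model,
i.e. roughly this one-line change (a sketch, not the actual patch; I'm
assuming the existing plugin declares SERIALIZE_ALL_REQUESTS):

/* Existing plugin: nbdkit runs only one request at a time across
 * all connections.
 */
#define THREAD_MODEL NBDKIT_THREAD_MODEL_SERIALIZE_ALL_REQUESTS

/* Patched plugin: requests on one connection are still serialized,
 * but separate connections can issue requests in parallel, which is
 * what test (3) exercises with NR_MULTI_CONN = 8.
 */
#define THREAD_MODEL NBDKIT_THREAD_MODEL_SERIALIZE_REQUESTS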
My observations:
Tests (1) and (2) are about the same, within the noise, which is
expected: with a single connection both thread models serialize
requests identically.
Test (3) is making 8 times as many requests as test (1), so I think
it's fair to compare it with 8 x the time taken by test (1) (ie. the
time it would have taken to make 80,000 requests):
Test (1) * 8 = 11m28
Test (3) = 7m11
That's a pretty good result: roughly a 1.6x speedup.
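(Working the numbers: 8 x 86.1s ≈ 689s projected for 80,000 serialized
requests, vs 431.7s measured with 8 connections; 431.7 / 689 ≈ 0.63, so
test (3) takes about 63% of the projected time.)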
What request size is being used? I would test with 1, 2, 4 and 8 MiB reads.
So if we had a client that could actually use multi-conn, this
would be a reasonable win. It seems like there's still a lot of
locking going on somewhere, perhaps inside VDDK or in the server.
It's certainly nowhere near a linear speedup.
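To illustrate the kind of locking that could produce this (purely
hypothetical, I haven't checked what the plugin or VDDK actually do
internally): if every VixDiskLib_Read call were funnelled through a
single global mutex, requests from all eight connections would still
execute one at a time inside VDDK, capping the speedup well below
linear. Something like:

#include <stdint.h>
#include <pthread.h>
#include "vixDiskLib.h"

/* Hypothetical illustration only, not the actual plugin code. */
static pthread_mutex_t vddk_lock = PTHREAD_MUTEX_INITIALIZER;

static int
locked_pread (VixDiskLibHandle h, void *buf, uint32_t count, uint64_t offset)
{
  VixError err;

  /* Only one thread can be inside VDDK at a time, so parallel NBD
   * connections still serialize at this point.
   */
  pthread_mutex_lock (&vddk_lock);
  err = VixDiskLib_Read (h, offset / 512, count / 512, buf);
  pthread_mutex_unlock (&vddk_lock);

  return err == VIX_OK ? 0 : -1;
}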
The patch does at least seem stable. I'll post it in a minute.
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages.
http://libguestfs.org