On Fri, Aug 23, 2019 at 09:30:36AM -0500, Eric Blake wrote:
> I've run several tests to demonstrate why this is useful, as well as
> prove that, because I have multiple interoperable projects, it is worth
> including in the NBD standard.  The original proposal was here:
>
> https://lists.debian.org/nbd/2019/03/msg00004.html
>
> where I stated:
>
> > I will not push this without both:
> > - a positive review (for example, we may decide that burning another
> >   NBD_FLAG_* is undesirable, and that we should instead have some sort
> >   of NBD_OPT_ handshake for determining when the server supports
> >   NBD_CMD_FLAG_FAST_ZERO)
> > - a reference client and server implementation (probably both via qemu,
> >   since it was qemu that raised the problem in the first place)
Is the plan to wait until NBD_CMD_FLAG_FAST_ZERO gets into the NBD
protocol doc before doing the rest?  Also I would like to release both
libnbd 1.0 and nbdkit 1.14 before we introduce any large new features.
Both should be released this week, in fact maybe even today or
tomorrow.
[...]
> First, I had to create a scenario where falling back to writes is
> noticeably slower than performing a zero operation, and where
> pre-zeroing also shows an effect.  My choice: let's test 'qemu-img
> convert' on an image that is half-sparse (every other megabyte is a
> hole) to an in-memory nbd destination.  Then I use a series of nbdkit
> filters to force the destination to behave in various manners:
>
> - log logfile=>(sed ...|uniq -c) (track how many normal/fast zero
>   requests the client makes)
> - nozero $params (fine-tune how zero requests behave - the parameters
>   zeromode and fastzeromode are the real drivers of my various tests)
> - blocksize maxdata=256k (allows large zero requests, but forces large
>   writes into smaller chunks, to magnify the effects of write delays and
>   allow testing to provide obvious results with a smaller image)
> - delay delay-write=20ms delay-zero=5ms (also to magnify the effects on
>   a smaller image, with writes penalized more than zeroing)
> - stats statsfile=/dev/stderr (to track overall time and a decent
>   summary of how much I/O occurred)
> - noextents (forces the entire image to report that it is allocated,
>   which eliminates any testing variability based on whether qemu-img
>   uses that to bypass a zeroing operation [1])
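For reference, the filter stack described above could be assembled roughly like this.  This is a sketch, not Eric's exact command: the memory plugin size, the nozero parameter values, and the elided sed expression are assumptions or placeholders.

```shell
# Sketch of the in-memory destination with the six filters stacked as
# listed above (first --filter is closest to the client).  The size,
# the nozero modes, and the sed expression are illustrative only.
nbdkit --filter=log --filter=nozero --filter=blocksize \
       --filter=delay --filter=stats --filter=noextents \
       memory size=1G \
       logfile=>(sed ... | uniq -c) \
       zeromode=emulate fastzeromode=none \
       maxdata=256k \
       delay-write=20ms delay-zero=5ms \
       statsfile=/dev/stderr
```

Varying zeromode/fastzeromode per run is what drives the different test scenarios; the rest of the stack stays fixed.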
I can't help thinking that a sh plugin might have been simpler ...
> I hope you enjoyed reading this far, and agree with my interpretation
> of the numbers about why this feature is useful!
Yes, it seems reasonable.
The only thought I had is whether the qemu block layer does or should
combine requests in flight so that a write-zero (offset) followed by a
write-data (same offset) would erase the earlier request.  In some
circumstances that might provide a performance improvement without
needing any changes to protocols.
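As a sketch of what that in-flight combining might look like, here is a minimal, hypothetical model (not qemu code): a queued write-zero request is dropped when a later data write fully covers the same extent, so only the data write is ever sent.

```python
# Hypothetical sketch of request coalescing: a later write-data that
# fully covers a still-queued write-zero erases the earlier request.
# This models the idea only; qemu's block layer does not necessarily
# work this way.
from dataclasses import dataclass


@dataclass
class Request:
    op: str      # "zero" or "write"
    offset: int
    length: int


def coalesce(pending, new):
    """Append 'new' to the queue, first dropping any queued zero
    request whose extent is fully covered by a newer data write."""
    if new.op == "write":
        pending = [r for r in pending
                   if not (r.op == "zero"
                           and r.offset >= new.offset
                           and r.offset + r.length
                               <= new.offset + new.length)]
    pending.append(new)
    return pending


queue = []
queue = coalesce(queue, Request("zero", 0, 65536))
queue = coalesce(queue, Request("write", 0, 65536))
# Only the data write remains; the zero over the same extent was erased.
```

Whether this pays off in practice depends on how often both requests are actually in flight at the same time.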
> - NBD should have a way to advertise (probably via NBD_INFO_ during
>   NBD_OPT_GO) if the initial image is known to begin life with all
>   zeroes (if that is the case, qemu-img can skip the extents calls and
>   pre-zeroing pass altogether)
Yes, I really think we should do this one as well.
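A rough client-side sketch of what such an advertisement would enable (the `init_zero` flag name is an assumption, not anything in the current spec):

```python
# Hypothetical sketch: if the server advertised during NBD_OPT_GO that
# the export begins life all-zero, the copy loop can skip the
# pre-zeroing pass entirely.  "init_zero" is an assumed name.
def plan_copy(dest_init_zero: bool):
    steps = []
    if not dest_init_zero:
        # Bulk NBD_CMD_WRITE_ZEROES pass over the whole destination.
        steps.append("pre-zero destination")
    # NBD_CMD_WRITE of the allocated data only.
    steps.append("copy allocated data")
    return steps
```

With the flag set, the extents queries and the pre-zero pass both disappear from the plan.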
Rich.
-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top