While it may be counterintuitive at first, the introduction of
NBD_CMD_WRITE_ZEROES and NBD_CMD_BLOCK_STATUS has caused a performance
regression in qemu [1], when copying a sparse file. When the
destination file must contain the same contents as the source, but it
is not known in advance whether the destination started life with all
zero content, then there are cases where it is faster to request a
bulk zero of the entire device followed by writing only the portions
of the device that are to contain data, as that results in fewer I/O
transactions overall. In fact, there are even situations where
trimming the entire device prior to writing zeroes may be faster than
a bare write zero request [2]. However, if a bulk zero request ever
falls back to the same speed as a normal write, a bulk pre-zeroing
algorithm is actually a pessimization, as it ends up writing portions
of the disk twice.
[1] https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg06389.html
[2] https://github.com/libguestfs/nbdkit/commit/407f8dde
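To make the tradeoff concrete, here is an illustrative C sketch (none of
these helpers exist in qemu or nbdkit; zero_range() and write_data() are
invented names, each standing for one NBD request to the destination):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct extent { uint64_t off, len; bool is_data; };

/* Hypothetical helpers: each call is one NBD request to the destination. */
extern void zero_range(uint64_t off, uint64_t len);
extern void write_data(uint64_t off, uint64_t len);

/* Bulk pre-zero: 1 + <number of data extents> requests; but if the bulk
 * zero silently degrades to a slow write, every data area of the disk is
 * written twice. */
static void copy_with_bulk_zero(const struct extent *map, size_t n,
                                uint64_t dev_size)
{
    zero_range(0, dev_size);
    for (size_t i = 0; i < n; i++)
        if (map[i].is_data)
            write_data(map[i].off, map[i].len);
}

/* Per-extent zeroing: one request per extent, data and holes alike, but
 * no byte of the disk is touched more than once. */
static void copy_per_extent(const struct extent *map, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (map[i].is_data)
            write_data(map[i].off, map[i].len);
        else
            zero_range(map[i].off, map[i].len);
    }
}

The first strategy wins only when the bulk zero really is cheap; otherwise
it is the pessimization described above.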
Hence, it is desirable to have a way for clients to specify that a
particular write zero request is being attempted for a fast wipe, and
get an immediate failure if the zero request would otherwise take the
same time as a write. Conversely, if the client is not performing a
pre-initialization pass, it is still more efficient in terms of
networking traffic to send NBD_CMD_WRITE_ZEROES requests where the
server implements the fallback to the slower write, than it is for the
client to have to perform the fallback to send NBD_CMD_WRITE with a
zeroed buffer.
Add a protocol flag and corresponding transmission advertisement flag
to make it easier for clients to inform the server of their intent. If
the server advertises NBD_FLAG_SEND_FAST_ZERO, then it promises two
things: to perform a fallback to write when the client does not
request NBD_CMD_FLAG_FAST_ZERO (so that the client benefits from the
lower network overhead); and to fail quickly with ENOTSUP if the
client requested the flag but the server cannot write zeroes more
efficiently than a normal write (so that the client is not penalized
with the time of writing data areas of the disk twice).
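A rough sketch of those two promises on the server side (illustrative C
only; the backend_*() and reply_error() names are invented, and the flag
value assumes the bit 4 assignment proposed below):

#include <errno.h>
#include <stdint.h>

#define NBD_CMD_FLAG_FAST_ZERO (1 << 4)   /* proposed command flag, bit 4 */

/* Hypothetical backend hooks; only the NBD constant comes from the spec. */
extern int backend_can_fast_zero(uint64_t off, uint64_t len);
extern int backend_fast_zero(uint64_t off, uint64_t len);
extern int backend_slow_zero(uint64_t off, uint64_t len);
extern int reply_error(int err);

static int handle_write_zeroes(uint16_t flags, uint64_t off, uint64_t len)
{
    if (backend_can_fast_zero(off, len))
        return backend_fast_zero(off, len);

    if (flags & NBD_CMD_FLAG_FAST_ZERO) {
        /* Second promise: fail quickly so the client can abandon its
         * pre-zeroing pass instead of writing data areas twice. */
        return reply_error(ENOTSUP);
    }

    /* First promise: fall back to writing zeroes anyway, which still
     * costs the client less network traffic than sending a zeroed
     * buffer over NBD_CMD_WRITE. */
    return backend_slow_zero(off, len);
}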
I think the issue is not that zeroing is as slow as a normal write, but that it is not fast
enough to make zeroing the entire disk before writing data worthwhile.
For example, on a storage server we had in the past, the BLKZEROOUT rate was
50G/s. On another server, it can run anywhere from 1G/s to 100G/s, depending
on the allocation status of the zeroed range.
Note that the semantics are chosen so that servers should advertise
the new flag whether or not they have fast zeroing (that is, this is
NOT the server advertising that it has fast zeroes, but rather
advertising that the client can get feedback as needed on whether
zeroing is fast). It is also intentional that the new advertisement
includes a new errno value, ENOTSUP, with rules that this error should
not be returned for any pre-existing behaviors, must not happen when
the client does not request a fast zero, and must be returned quickly
if the client requested fast zero but anything other than the error
would not be fast; while leaving it possible for clients to
distinguish other errors like EINVAL if alignment constraints are not
met. Clients should not send the flag unless the server advertised
support, but well-behaved servers should already be reporting EINVAL
to unrecognized flags. If the server does not advertise the new
feature, clients can safely fall back to assuming that writing zeroes
is no faster than normal writes.
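On the client side, those rules might be applied roughly like this
(illustrative C; nbd_zero_request() is an invented helper, and the flag
values assume the bit assignments proposed in the patch):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define NBD_FLAG_SEND_FAST_ZERO (1 << 11)  /* proposed transmission flag, bit 11 */
#define NBD_CMD_FLAG_FAST_ZERO  (1 << 4)   /* proposed command flag, bit 4 */

/* Hypothetical: one NBD_CMD_WRITE_ZEROES request; returns 0 on success,
 * -1 with errno set on failure. */
extern int nbd_zero_request(uint64_t off, uint64_t len, uint16_t flags);

static bool bulk_pre_zero_worked(uint16_t transmission_flags, uint64_t dev_size)
{
    if (!(transmission_flags & NBD_FLAG_SEND_FAST_ZERO))
        return false;   /* no advertisement: assume zeroing is no faster
                         * than writing, so skip the pre-zeroing pass */

    if (nbd_zero_request(0, dev_size, NBD_CMD_FLAG_FAST_ZERO) == 0)
        return true;    /* whole device zeroed cheaply */

    if (errno == ENOTSUP)
        return false;   /* zeroing is write-speed here: skip the pre-pass */

    return false;       /* other errors (e.g. EINVAL on alignment) are real
                         * failures, but also mean no cheap pre-zero */
}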
Note that the Linux fallocate(2) interface may or may not be powerful
enough to easily determine if zeroing will be efficient - in
particular, FALLOC_FL_ZERO_RANGE in isolation does NOT give that
insight; for block devices, it is known that ioctl(BLKZEROOUT) does
NOT have a way for userspace to probe if it is efficient or slow. But
with enough demand, the kernel may add another FALLOC_FL_ flag to use
with FALLOC_FL_ZERO_RANGE, and/or appropriate ioctls with guaranteed
ENOTSUP failures if a fast path cannot be taken. If a server cannot
easily determine if write zeroes will be efficient, it is better off
not advertising NBD_FLAG_SEND_FAST_ZERO.
I think this can work for file-based images. If fallocate() fails, the client
will quickly get ENOTSUP after the first call.
For block devices we don't have any way to know if a fallocate() or BLKZEROOUT
will be fast, so I guess servers will never advertise FAST_ZERO.
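For illustration, the two cases with the plain Linux interfaces look
roughly like this (sketch only, not code from any existing server):

#define _GNU_SOURCE
#include <fcntl.h>              /* fallocate() */
#include <linux/falloc.h>       /* FALLOC_FL_ZERO_RANGE */
#include <linux/fs.h>           /* BLKZEROOUT */
#include <stdint.h>
#include <sys/ioctl.h>

/* File-backed export: FALLOC_FL_ZERO_RANGE fails with EOPNOTSUPP on
 * filesystems without a zeroing fast path, which maps naturally onto the
 * proposed ENOTSUP reply -- though, as noted above, success alone does
 * not prove the operation was fast. */
static int zero_range_file(int fd, off_t off, off_t len)
{
    return fallocate(fd, FALLOC_FL_ZERO_RANGE, off, len);
}

/* Block-device export: BLKZEROOUT either zeroes quickly or silently falls
 * back to writing zeroes, with no way for userspace to tell which -- the
 * reason a server backed by a block device may prefer not to advertise
 * NBD_FLAG_SEND_FAST_ZERO at all. */
static int zero_range_blkdev(int fd, uint64_t off, uint64_t len)
{
    uint64_t range[2] = { off, len };

    return ioctl(fd, BLKZEROOUT, range);
}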
Generally this new flag's usefulness is limited. It will only help qemu-img
convert faster to file-based images.
Do we have performance measurements showing significant speedup when
zeroing the entire image before copying data, compared with zeroing only the
unallocated ranges?
For example, if the best speedup we can get in a real-world scenario is 2%, is it
worth complicating the protocol and using another bit?
Signed-off-by: Eric Blake <eblake@redhat.com>
---
I will not push this without both:
- a positive review (for example, we may decide that burning another
NBD_FLAG_* is undesirable, and that we should instead have some sort
of NBD_OPT_ handshake for determining when the server supports
NBD_CMD_FLAG_FAST_ZERO)
- a reference client and server implementation (probably both via qemu,
since it was qemu that raised the problem in the first place)
doc/proto.md | 44 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 43 insertions(+), 1 deletion(-)
diff --git a/doc/proto.md b/doc/proto.md
index 8aaad96..1107766 100644
--- a/doc/proto.md
+++ b/doc/proto.md
@@ -1059,6 +1059,17 @@ The field has the following format:
which support the command without advertising this bit, and
conversely that this bit does not guarantee that the command will
succeed or have an impact.
+- bit 11, `NBD_FLAG_SEND_FAST_ZERO`: allow clients to detect whether
+ `NBD_CMD_WRITE_ZEROES` is efficient. The server MUST set this
+ transmission flag to 1 if the `NBD_CMD_WRITE_ZEROES` request
+ supports the `NBD_CMD_FLAG_FAST_ZERO` flag, and MUST set this
+ transmission flag to 0 if `NBD_FLAG_SEND_WRITE_ZEROES` is not
+ set. Servers SHOULD NOT set this transmission flag if there is no
+ quick way to determine whether a particular write zeroes request
+ will be efficient, but the lack of an efficient write zero
I think we should use "fast" instead of "efficient". For example, when the kernel
falls back to manual zeroing, it is probably the most efficient way it can be done,
but it is not fast.
+ implementation SHOULD NOT prevent a server from setting this
+ flag. Clients MUST NOT set the `NBD_CMD_FLAG_FAST_ZERO` request flag
+ unless this transmission flag is set.
Clients SHOULD ignore unknown flags.
@@ -1636,6 +1647,12 @@ valid may depend on negotiation during the handshake phase.
MUST NOT send metadata on more than one extent in the reply. Client
implementors should note that using this flag on multiple contiguous
requests is likely to be inefficient.
+- bit 4, `NBD_CMD_FLAG_FAST_ZERO`; valid during
+ `NBD_CMD_WRITE_ZEROES`. If set, but the server cannot perform the
+ write zeroes any faster than it would for an equivalent
+ `NBD_CMD_WRITE`, then the server MUST fail quickly with an error of
+ `ENOTSUP`. The client MUST NOT set this unless the server advertised
+ `NBD_FLAG_SEND_FAST_ZERO`.
##### Structured reply flags
@@ -2004,7 +2021,10 @@ The following request types exist:
reached permanent storage, unless `NBD_CMD_FLAG_FUA` is in use.
A client MUST NOT send a write zeroes request unless
- `NBD_FLAG_SEND_WRITE_ZEROES` was set in the transmission flags field.
+ `NBD_FLAG_SEND_WRITE_ZEROES` was set in the transmission flags
+ field. Additionally, a client MUST NOT send the
+ `NBD_CMD_FLAG_FAST_ZERO` flag unless `NBD_FLAG_SEND_FAST_ZERO` was
+ set in the transmission flags field.
By default, the server MAY use trimming to zero out the area, even
if it did not advertise `NBD_FLAG_SEND_TRIM`; but it MUST ensure
@@ -2014,6 +2034,23 @@ The following request types exist:
same area will not cause fragmentation or cause failure due to
insufficient space.
+ If the server advertised `NBD_FLAG_SEND_FAST_ZERO` but
+ `NBD_CMD_FLAG_FAST_ZERO` is not set, then the server MUST NOT fail
+ with `ENOTSUP`, even if the operation is no faster than a
+ corresponding `NBD_CMD_WRITE`. Conversely, if
+ `NBD_CMD_FLAG_FAST_ZERO` is set, the server MUST fail quickly with
+ `ENOTSUP` unless the request can be serviced more efficiently than
+ a corresponding `NBD_CMD_WRITE`. The server's determination of
+ efficiency MAY depend on whether the request was suitably aligned,
+ on whether the `NBD_CMD_FLAG_NO_HOLE` flag was present, or even on
+ whether a previous `NBD_CMD_TRIM` had been performed on the
+ region. If the server did not advertise
+ `NBD_FLAG_SEND_FAST_ZERO`, then it SHOULD NOT fail with `ENOTSUP`,
+ regardless of the speed of servicing a request, and SHOULD fail
+ with `EINVAL` if the `NBD_CMD_FLAG_FAST_ZERO` flag was set. A
+ server MAY advertise `NBD_FLAG_SEND_FAST_ZERO` whether or not it
+ can perform efficient zeroing.
+
If an error occurs, the server MUST set the appropriate error code
in the error field.
@@ -2114,6 +2151,7 @@ The following error values are defined:
* `EINVAL` (22), Invalid argument.
* `ENOSPC` (28), No space left on device.
* `EOVERFLOW` (75), Value too large.
+* `ENOTSUP` (95), Operation not supported.
* `ESHUTDOWN` (108), Server is in the process of being shut down.
The server SHOULD return `ENOSPC` if it receives a write request
@@ -2125,6 +2163,10 @@ request is not aligned to advertised minimum block sizes. Finally, it
SHOULD return `EPERM` if it receives a write or trim request on a
read-only export.
+The server SHOULD NOT return `ENOTSUP` except as documented in
+response to `NBD_CMD_WRITE_ZEROES` when `NBD_CMD_FLAG_FAST_ZERO` is
+supported.
This makes ENOTSUP less useful. I think it should be allowed to return ENOTSUP
as a response to other commands if needed.
+
The server SHOULD return `EINVAL` if it receives an unknown command.
The server SHOULD return `EINVAL` if it receives an unknown command flag. It
--
2.20.1
I think this makes sense, and should work, but we need more data showing that this is
useful in practice.
Nir