On Fri, Nov 11, 2022 at 08:52:31AM +0100, Laszlo Ersek wrote:
> On 11/10/22 00:26, Eric Blake wrote:
> > Modern qemu tends to advertise a strict 32M payload limit. Yet we
> > default to allowing the user to attempt up to 64M in pwrite. For
> > pread, qemu will reply with EINVAL but keep the connection up; but for
> > pwrite, an overlarge buffer is fatal. It's time to teach libnbd to
> > honor qemu's max buffer by default, while still allowing the user to
> > disable the strict mode setting if they want to try 64M anyways.
> >
> > This also involves advertising a new LIBNBD_SIZE_PAYLOAD via
> > nbd_get_block_size, which is more reliable than asking the user to
> > manually obey LIBNBD_SIZE_MAXIMUM which may not be set or may be
> > larger than 64M.
> > ---
> > +++ b/lib/rw.c
> > @@ -207,8 +207,14 @@ nbd_internal_command_common (struct nbd_handle *h,
> >
> >    switch (type) {
> >    /* Commands which send or receive data are limited to MAX_REQUEST_SIZE. */
> > -  case NBD_CMD_READ:
> >    case NBD_CMD_WRITE:
> > +    if (h->strict & LIBNBD_STRICT_PAYLOAD && count > h->payload_maximum) {
> > +      set_error (ERANGE, "request too large: maximum payload size is %d",
> For pedantry's sake, I think we shouldn't print a uint32_t value with a
> %d format specifier (I understand the value is limited to
> MAX_REQUEST_SIZE, which would be fine to print with %d *if* passed in as
> an "unsigned int"; but for pedantry we may not want to assume that
> uint32_t is actually "unsigned int".) PRIu32 would fit better.
Indeed (would matter more for a platform with 16-bit int, but those
are long-relegated to museum pieces now).
> > +                 h->payload_maximum);
> > +      goto err;
> > +    }
> > +    /* fallthrough */
> > +  case NBD_CMD_READ:
> >      if (count > MAX_REQUEST_SIZE) {
> >        set_error (ERANGE, "request too large: maximum request size is %d",
> >                   MAX_REQUEST_SIZE);
> >
> Otherwise the patch looks OK to me, but I don't understand where it
> really matters.
>
> * If a client program using libnbd connects to an NBD server that
> advertises LIBNBD_SIZE_MAXIMUM, then it is up to the client application
> (not libnbd) to adhere to that maximum, or face the consequences. Is
> that right?
Mostly true. However, we can argue that libnbd should default to
obeying the spec unless the user opts in to intentionally pushing the
boundaries, and the spec says a client should obey an advertised
maximum. So enforcing the maximum at client side is a way of avoiding
undefined behavior for an application using libnbd that is not paying
attention to advertised maximums.
> * If a client program using libnbd connects to an NBD server that does
> *not* advertise LIBNBD_SIZE_MAXIMUM, then the client can (per libnbd's
> own limit) attempt writes up to 64MB. If the NBD server then abruptly
> disconnects, *without having advertised* a lower limit (and IIUC this
> applies to *old* qemu-nbd), then -- per NBD spec / protocol -- the fault
> is with either the client program or libnbd, because the request size
> should not have exceeded 32MB. The question is who is responsible for
> enforcing this protocol-default limit: the client application, or libnbd?
>
> Before the patch, the server might abruptly disconnect; after the patch,
> the client app will get a local error. The request will not succeed
> either way, and the client application will have to be modified (=
> reprogrammed by a human) anyways, to consider such a 32MB default limit
> explicitly.
Indeed, and we've already had patches in the past to things like
nbdcopy to explicitly implement a default 32M buffer size to avoid
tripping up qemu-nbd.
> So what use case / failure scenario is improved by this patch? Is it
> that the client application's programmer gets a friendlier / more
> understandable error message, rather than just the client app being
> silently kicked out by (old) qemu-nbd?
Yes.
> Anyway, the patch looks OK; feel free to extend (or not) the commit
> message and/or to change (or not) the %d format specifier, from my end:
>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Thanks for the close review; will fix and push shortly.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org