On Thu, Sep 01, 2022 at 05:17:14PM +0100, Richard W.M. Jones wrote:
>
> [*]It's easier to skip on server failure than to try and write an
> nbdkit patch to add yet another --config feature probe just to depend
> on new-enough nbdkit to gracefully probe in advance if the server
> should succeed.
>
> - /* FIXME: For now, we reject this client-side, but it is overly strict. */
> + /* Older servers don't permit this, but there is no reliable indicator
> + * of whether nbdkit is new enough, so just skip the rest of the test
> + * if the attempt fails (then read the logs to see that the skip was
> + * indeed caused by the server, and not an accidental client-side bug).
> + */
> In theory you could parse nbdkit --dump-config, although I agree this
> approach is fine too.
It only helps to parse --dump-config if --dump-config has an entry
telling us that nbdkit accepts LIST_META_CONTEXT without
STRUCTURED_REPLY first (and even then, an entry wouldn't help the
window of releases that had the feature but no --dump-config entry,
should we decide to add one).
But I did think of another way to test it:
If we had new APIs:
int64_t nbd_stats_opt_packets_sent(handle);
int64_t nbd_stats_opt_bytes_sent(handle);
int64_t nbd_stats_opt_packets_received(handle);
int64_t nbd_stats_opt_bytes_received(handle);
int64_t nbd_stats_transmission_packets_sent(handle);
int64_t nbd_stats_transmission_bytes_sent(handle);
int64_t nbd_stats_transmission_packets_received(handle);
int64_t nbd_stats_transmission_bytes_received(handle);
that count every packet and byte of NBD_OPT, NBD_REP, NBD_CMD, and
response header traffic in either direction, it becomes easy to tell
when something is squelched client-side by whether the transmission
counts increment.  And it may be interesting to know how many
bytes/packets were involved in the NBD protocol over the life of a
connection.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org