On Thu, Sep 01, 2022 at 09:42:25PM +0100, Richard W.M. Jones wrote:
> On Thu, Sep 01, 2022 at 02:55:45PM -0500, Eric Blake wrote:
> > On Thu, Sep 01, 2022 at 05:17:14PM +0100, Richard W.M. Jones wrote:
> > > >
> > > > [*] It's easier to skip on server failure than to try and write an
> > > > nbdkit patch to add yet another --config feature probe just to depend
> > > > on new-enough nbdkit to gracefully probe in advance if the server
> > > > should succeed.
> >
> > > >
> > > > - /* FIXME: For now, we reject this client-side, but it is overly strict. */
> > > > + /* Older servers don't permit this, but there is no reliable indicator
> > > > + * of whether nbdkit is new enough, so just skip the rest of the test
> > > > + * if the attempt fails (then read the logs to see that the skip was
> > > > + * indeed caused by the server, and not an accidental client-side bug).
> > > > + */
> > >
> > > In theory you could parse nbdkit --dump-config, although I agree this
> > > approach is fine too.
> >
> > It only helps to parse --dump-config if --dump-config has an entry
> > that tells us that nbdkit accepts LIST_META_CONTEXT without
> > STRUCTURED_REPLY first (and that still wouldn't help the window of
> > releases that had the feature but not a --dump-config entry, if we
> > decide to add such an entry).
> I was thinking of checking the version number - wouldn't that be
> sufficient?

Ah, yes. Although I find version-based checks more fragile than
feature-based checks: a version check would have to look for nbdkit
v1.27.3 (or, more simply, v1.28+), yet the feature was also backported
to v1.26.6, which such a check would needlessly skip.
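
For what it's worth, a version gate along those lines might look
something like the sketch below (untested; the helper name is made up,
it assumes the version= line that nbdkit --dump-config prints, and it
hard-codes the 1.26.6 / 1.27.3 / 1.28 cutoffs mentioned above - which
is exactly the part that would go stale):

  #include <stdio.h>
  #include <stdbool.h>

  /* Rough sketch of a version-based probe; a feature-based probe would
   * avoid the hard-coded cutoff table below.
   */
  static bool
  nbdkit_probably_new_enough (void)
  {
    FILE *fp = popen ("nbdkit --dump-config", "r");
    char line[256];
    int maj, min, rel;
    bool ok = false;

    if (fp == NULL)
      return false;
    while (fgets (line, sizeof line, fp) != NULL) {
      if (sscanf (line, "version=%d.%d.%d", &maj, &min, &rel) == 3) {
        ok = maj > 1 ||
             (maj == 1 && (min >= 28 ||
                           (min == 27 && rel >= 3) ||
                           (min == 26 && rel >= 6)));
        break;
      }
    }
    pclose (fp);
    return ok;
  }
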
> > But I did think of another way to test it:
> >
> > If we had new APIs:
> >
> > int64_t nbd_stats_opt_packets_sent(handle);
> > int64_t nbd_stats_opt_bytes_sent(handle);
> > int64_t nbd_stats_opt_packets_received(handle);
> > int64_t nbd_stats_opt_bytes_received(handle);
> > int64_t nbd_stats_transmission_packets_sent(handle);
> > int64_t nbd_stats_transmission_bytes_sent(handle);
> > int64_t nbd_stats_transmission_packets_received(handle);
> > int64_t nbd_stats_transmission_bytes_received(handle);
> >
> > that basically count every outgoing packet and byte of NBD_OPT,
> > NBD_REP, NBD_CMD, and response header in either direction, it becomes
> > easy to track when something is squelched client-side by whether the
> > transmission counts increment. And it may be interesting to know how
> > many bytes/packets were involved in the NBD protocol over the life of
> > a connection.
> How is "packet" defined here - it's difficult to measure packets on
> the wire. Maybe it means commands?

Maybe "packet" isn't the right term, but I'm thinking of each time a
header is sent or received. For example, in nbd_opt_list(), a client
sending just NBD_OPT_LIST (1 packet out if the server is newstyle, 0
packets out if the server is oldstyle) may get back 2 packets from a
newstyle server (an NBD_REP_SERVER listing the default export, and an
NBD_REP_ACK ending the list). Similarly, an NBD_CMD_PREAD (1 packet
out) can result in multiple structured reply packets
(NBD_REPLY_TYPE_OFFSET_DATA, NBD_REPLY_TYPE_NONE+NBD_REPLY_FLAG_DONE).
The idea is that every time we send or receive a magic number, the
packet count increases; so if the count does not increase, the command
must have failed early on the client side.
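
To make that concrete, a test could then use the proposed counters
along these lines (hypothetical sketch: nbd_stats_opt_packets_sent()
is only the proposal quoted above, not an existing API, and the helper
names are invented; nbd_opt_list_meta_context() is the real libnbd
call under discussion). If the counter did not move, libnbd squelched
the request client-side; if it did move, the rejection came from the
server:

  #include <stdio.h>
  #include <stdint.h>
  #include <libnbd.h>

  static int
  ctx_seen (void *opaque, const char *name)
  {
    return 0;                   /* ignore the context names in this sketch */
  }

  /* Assumes the handle was connected with nbd_set_opt_mode(nbd, true),
   * so that option negotiation is still open.
   */
  static void
  who_rejected_it (struct nbd_handle *nbd)
  {
    int64_t before = nbd_stats_opt_packets_sent (nbd);    /* assumed API */

    if (nbd_opt_list_meta_context (nbd, (nbd_context_callback)
                                   { .callback = ctx_seen }) == -1) {
      if (nbd_stats_opt_packets_sent (nbd) == before)     /* assumed API */
        printf ("rejected client-side; nothing hit the wire\n");
      else
        printf ("request reached the server, which rejected it\n");
    }
  }
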
> While adding these new APIs would be fine, it might be an idea to add
> another API to toggle collection on and off (and perhaps one to find
> out if collection is possible). I'm not certain that in every
> situation we'll be able to accurately and/or efficiently collect such
> statistics -- if we use io_uring, or with offloading of some sort(??)

Also interesting. I'm not sure how much overhead the statistics
collection would cause; and even if it's runtime-toggleable, there is
still the cost of the extra conditionals (although we might be able to
use the unlikely() macro to keep them off the hot path for the common
case of stat counters being disabled).
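
If it ever does get implemented, the guard itself can stay cheap -
something like the following sketch, where the struct and field names
are purely illustrative and unlikely() is the usual __builtin_expect
wrapper:

  #include <stdbool.h>
  #include <stdint.h>

  #define unlikely(x) __builtin_expect (!!(x), 0)

  /* Illustrative only: a per-handle counter that stays off the hot
   * path unless stats collection was switched on at runtime.
   */
  struct conn_stats {
    bool enabled;               /* toggled by a hypothetical setter API */
    uint64_t opt_packets_sent;
  };

  static inline void
  note_opt_packet_sent (struct conn_stats *s)
  {
    if (unlikely (s->enabled))
      s->opt_packets_sent++;
  }
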
At this point, I may just put it as a TODO item rather than coding it
up in the short term.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization:  qemu.org | libvirt.org