On Fri, Sep 01, 2023 at 08:35:58AM +0000, Tage Johansson wrote:
> On 8/30/2023 10:11 PM, Eric Blake wrote:
> > CI shows our async handle fails to build on FreeBSD and MacOS (where
> > epoll() is not available as a syscall, and therefore not available as
> > a Rust crate). We can instead accomplish the same level-probing
> > effects by doing a zero-timeout poll with mio (which defers under the
> > hood to epoll on Linux, and kqueue on BSD).
> The problem with mio is that it uses edge triggered notifications.
> According to this page <https://docs.rs/mio/latest/mio/struct.Poll.html>:
> > Draining readiness
> > Once a readiness event is received, the corresponding operation must be
> > performed repeatedly until it returns variant
> > std::io::error::ErrorKind::WouldBlock. Unless this is done, there is no
> > guarantee that another readiness event will be delivered, even if
> > further data is received for the event source.
Would that still be true even if we deregister the interest after our
one-shot poll, and reregister it immediately before-hand? In other
words, even if registration sets up edge triggers for future
notifications, the fact that we are only doing a one-shot query seems
like it would be sufficient to use our one-shot as a level probe
(assuming that the registration starts by learning the current state
of the fd before waiting for any future edges).
I agree that once we get a notification after a .poll(), the normal
assumption is that we must loop and consume all data before calling
.poll again (because the .poll would block until the next edge - but
if we didn't consume to the blocking point, there is no next edge).
But since we are NOT looping, nor doing any consumption work, our
timeout of 0 is trying to behave as an instant level check, and if
adding a reregister/deregister wrapper around the .poll makes it more
reliable (by wiping out all prior events and re-priming the
registration to start with the current level), that may be all the
more we need.
> The motivation behind using epoll in addition to tokio in the first place
> was to check if the fd is readable/writable without necessarily knowing if
> the last read/write operation returned [ErrorKind::WouldBlock]. And it
> doesn't seem that mio fulfills that requirement.
Do we need to expand the libnbd API itself to make it more obvious
whether our state machine completed because it hit an EWOULDBLOCK
scenario? In general, once you kick the state machine (by sending a
new command, or calling one of the two aio_notify calls), the state
machine tries to then process as much data as possible until it blocks
again, rather than stopping after just processing a single
transaction. That is, if libnbd isn't already designed to work with
edge triggers, how much harder would it be to modify it (perhaps by
adding a new API) so that it DOES work nicely with edge triggers?
If we can solve _that_ problem, then the Rust async handle wouldn't
need to use epoll, mio, or anything else; tokio's edge trigger
behavior would already be sufficient on its own.
> I tried using the polling <https://docs.rs/polling/latest/polling/> crate,
> which uses epoll and kqueue under the hood, but for some reason the
> disconnect call fails, at least on my Linux machine. I have no idea what
> the problem is.
> Another idea would be that we create our own abstraction on top of epoll
> and kqueue. But I am unable to test any kqueue implementation; I think I
> can not even compile it on my machine.
If there is already a solution out there that works, let's favor that
instead of reimplementing a portability wrapper ourselves.
> It would of course also be possible to disable the Rust bindings (or at
> least the asynchronous API) on BSD and MacOS, but I don't think that would
> be a good solution.
Yeah, crippling the BSD build is less desirable, even if doing it in
the short term to get green CI is tempting.
You mentioned that the disconnect API was hanging. Is it because we
have not set up tokio to inform us when it detects that a socket has
closed? Admittedly, I haven't reproduced the hang you mentioned yet;
is it something that was easy to reproduce (for example, just remove
all the epoll code without replacement, then run 'make check')? If I
can reproduce the hang, maybe I can learn from strace where things are
getting stuck?
--
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization: qemu.org | libguestfs.org