On 5/21/19 10:09 AM, Eric Blake wrote:
As already noted in our state machine, a client that batches up a
large read followed by large writes, coupled with a server that only
processes commands in order, can result in deadlock (the server won't
read more until we unblock its ability to write out its reply to our
first command; but we aren't willing to read until we are done writing
out our second command). Break the deadlock by teaching the generator
that while we are in the middle of writing a command, we must remain
responsive to read_notify events; if the server has data for us to
read, we should consume that before jumping back into the middle of
our command issue (and consuming a reply can invalidate sbuf, so we
have to drop an assertion in PREPARE_WRITE_PAYLOAD).
---
generator/generator | 20 ++++++++++++++++++--
generator/states-issue-command.c | 25 ++++++++++++++++++++++++-
generator/states-reply.c | 5 ++++-
lib/internal.h | 1 +
4 files changed, 47 insertions(+), 4 deletions(-)
Squash this in, if we think we solved the problem:
diff --git i/generator/generator w/generator/generator
index 23b3cbf..5c84a5d 100755
--- i/generator/generator
+++ w/generator/generator
@@ -620,12 +620,6 @@ and issue_command_state_machine = [
State {
default_state with
name = "START";
- (* XXX There's a possible deadlock here if a server cannot
- * handle multiple requests pipelined on a single connection.
- * We could try to issue a command and block, but reads might
- * be available. It should be possible to break this with
- * another state.
- *)
comment = "Begin issuing a command to the remote server";
external_events = [];
};
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org