On 11/14/2017 01:23 PM, Eric Blake wrote:
This is a minimal implementation of an NBD forwarder; it lets us
convert between oldstyle and newstyle connections (great if a client
expects one style but the real server only provides the other),
or add TLS safety on top of a server without having to rewrite
that server. Right now, the real server is expected to live
on a named Unix socket with no encryption, and the transactions
are serialized rather than interleaved; further enhancements
could be made to also permit TCP servers, TLS encryption, and/or
more efficient transmission.
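(For anyone wanting to play with it, the intended usage is along these
lines, where the socket path and export name are just placeholders for
whatever the real server is actually serving:

  nbdkit nbd socket=/tmp/real-server.sock export=myexport

nbdkit then re-exports that disk over whatever socket nbdkit itself is
listening on.)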
I also envision the possibility of enhancing our testsuite to
use NBD forwarding as a great test of our server.
Some of the design decisions made in this file are geared towards
the future addition of allowing interleaved transmission (for
example, the fact that we track a separate 'struct transmission',
even though only one exists at a time for now; and the fact that
all reading is done through a helper function).
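To illustrate that second point, the sort of shape I have in mind is
roughly this (the field and helper names are just a sketch for the sake
of discussion, not necessarily the exact spellings the patch uses):

  #include <errno.h>
  #include <stdint.h>
  #include <unistd.h>

  /* Sketch: one in-flight request.  Interleaving would later mean
   * keeping a list of these, keyed by the 64-bit NBD handle. */
  struct transmission {
    uint64_t cookie;   /* handle echoed back by the server */
    void *buf;         /* where to store an NBD_CMD_READ payload */
    uint32_t count;    /* expected payload length */
    int err;           /* errno value to hand back to the caller */
  };

  /* Sketch: every read from the server's socket goes through one
   * helper, so that interleaved replies can later be demultiplexed
   * in a single place. */
  static int
  read_full (int fd, void *buf, size_t len)
  {
    while (len > 0) {
      ssize_t r = read (fd, buf, len);
      if (r == -1) {
        if (errno == EINTR)
          continue;
        return -1;
      }
      if (r == 0) {          /* unexpected EOF from the server */
        errno = ESHUTDOWN;
        return -1;
      }
      buf = (char *) buf + r;
      len -= r;
    }
    return 0;
  }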
Signed-off-by: Eric Blake <eblake@redhat.com>
---
+#define nbd_config_help \
+ "socket=<SOCKNAME> (required) The Unix socket to connect to.\n" \
+ "export=<NAME> Export name to connect to (default
"").\n" \
That should be (default \"\"), of course. Will squash in when pushing.
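i.e. after squashing, that line will read:

+ "export=<NAME> Export name to connect to (default \"\").\n" \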
+/* Write zeroes to the file. */
+static int
+nbd_zero (void *handle, uint32_t count, uint64_t offset, int may_trim)
+{
+ struct handle *h = handle;
+ uint32_t cmd = NBD_CMD_WRITE_ZEROES;
+ int c;
+
+ if (!(h->flags & NBD_FLAG_SEND_WRITE_ZEROES)) {
+ /* Trigger a fallback to regular writing */
+ errno = EOPNOTSUPP;
+ return -1;
Potential issue: nbdkit has a max transmission size of 64M, but many
other implementations (hello qemu) limit to 32M, so the NBD spec
mentions 32M as the portable safe size. If the client sends a
write-zeroes request of 128M, the fallback in plugins.c will fragment it
into two 64M transactions with actual buffers, which all the other
plugins have been able to handle; but now that we are talking to
another NBD server that might have a smaller transaction limit, we could
kill our connection instead of succeeding. Similarly, if we have a
client that can send a 64M read request, but nbdkit forwards on to a
server that refuses to reply with that much data, we can get an error
where success would have been possible had we fragmented (but at least
in that case, we're less likely to kill the connection).
We really need to teach nbdkit to use NBD_OPT_GO, which lets servers
advertise their maximum transaction size to the client, and then honor
that maximum; but in the meantime, I think we probably ought to consider
scaling nbdkit's max transmission size down to 32M to match other
implementations (either at the connections.c level for ALL plugins, or
in nbd_write() in this file for JUST the nbd plugin). Will be a
followup patch.
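For the second option, the shape I have in mind is roughly this (the
NBD_MAX_SAFE_REQUEST name and the nbd_request_full() helper are invented
for the sketch; a real patch would reuse whatever request/reply helper
this file already has for its .pwrite callback):

  #define NBD_MAX_SAFE_REQUEST (32 * 1024 * 1024)

  /* Sketch: fragment writes inside the nbd plugin so that the remote
   * server never sees a single request larger than the portable 32M
   * limit, regardless of what nbdkit itself accepted from the client. */
  static int
  nbd_pwrite (void *handle, const void *buf, uint32_t count, uint64_t offset)
  {
    struct handle *h = handle;

    while (count > 0) {
      uint32_t len =
        count < NBD_MAX_SAFE_REQUEST ? count : NBD_MAX_SAFE_REQUEST;

      /* Hypothetical helper: send one NBD_CMD_WRITE and wait for its
       * reply before sending the next fragment. */
      if (nbd_request_full (h, NBD_CMD_WRITE, offset, len, buf, NULL) == -1)
        return -1;
      buf = (const char *) buf + len;
      offset += len;
      count -= len;
    }
    return 0;
  }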
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org