On 4/23/19 4:01 AM, Richard W.M. Jones wrote:
If the plugin .pread method did not fill the buffer with data then the
contents of the heap could be leaked back to the client. The simplest
way to avoid any possibility of this happening is to zero the buffer
before passing it to the .pread method.
I have checked plugins shipped with nbdkit and none of them are
vulnerable in this way. They all do one of these things:
- They fully set the buffer with a single call to something like
memcpy, memset, etc.
- They use a loop like ‘while (count > 0)’ and correctly update count
and the buffer pointer on each iteration (see the sketch after this
list).
- For language plugins (except OCaml), they check that the string
length returned from the language plugin is at least as long as the
requested data, before memcpy-ing the data to the return buffer.
- For OCaml, see the previous commit.
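For illustration, a minimal sketch of that safe loop pattern (not taken
from any actual plugin; it assumes the handle wraps a plain file
descriptor, and a real plugin would call nbdkit_error() before failing):

  #include <errno.h>
  #include <stdint.h>
  #include <unistd.h>

  static int
  example_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
  {
    int fd = *(int *)handle;  /* assumption: handle wraps a file descriptor */
    char *p = buf;

    while (count > 0) {
      ssize_t r = pread (fd, p, count, offset);
      if (r == -1)
        return -1;            /* real plugins call nbdkit_error() first */
      if (r == 0) {
        errno = EIO;          /* unexpected end of file */
        return -1;
      }
      p += r;                 /* advance the buffer pointer ...     */
      count -= r;             /* ... and update the remaining count */
      offset += r;
    }
    return 0;                 /* every byte of buf has been filled  */
  }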
Of course I cannot check plugins which may be supplied by others.
Credit: Eric Blake for finding the bug.
---
server/protocol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
This patch does not document any guarantee. Would it be worth explicitly
mentioning this behavior as part of the API guarantee of .pread, so that
plugins like zero and null can rely on the incoming buffer being
pre-zeroed rather than having to repeat the memset() themselves?
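For instance, with such a documented guarantee, a null-style plugin's
.pread could shrink to a no-op (a hypothetical sketch using the v1 API
signature; the real null plugin currently does the memset() itself):

  /* Hypothetical: relies on the server's pre-zeroed buffer guarantee. */
  static int
  null_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
  {
    /* Nothing to do: buf already contains zeroes from the server's
       calloc(), so returning success serves all-zero data. */
    return 0;
  }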
Should this behavior have an opt-out switch for plugins concerned with
speed? For safety, we want this behavior to be on by default (as we
don't know whether out-of-tree plugins might otherwise leak our heap
data). But if we were to add a new field to include/nbdkit-plugin.h
that a plugin can set when it does NOT want to waste time on the
pre-emptive clear because it promises to fill the entire buffer, then
the blame for leaking data shifts out of nbdkit and onto the plugin
that set the flag but broke its promise. The new field won't be set by
any existing out-of-tree plugins, and newly written plugins will have
the documentation of that particular aspect to help them decide whether
to opt out.
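To make that concrete, here is a hypothetical sketch of such a knob
(the struct and field names are invented; nothing like them exists in
include/nbdkit-plugin.h today):

  #include <stdint.h>
  #include <string.h>

  /* Invented for illustration: a plugin struct carrying the opt-out. */
  struct fake_plugin {
    int (*pread) (void *handle, void *buf, uint32_t count, uint64_t offset);
    int pread_fills_buffer;  /* 1 = plugin promises to fill buf fully */
  };

  /* Server side: skip the pre-emptive clear only when the plugin has
     explicitly made the promise. */
  static void
  prepare_read_buffer (const struct fake_plugin *p, void *buf, uint32_t count)
  {
    if (!p->pread_fills_buffer)
      memset (buf, 0, count);  /* safe default: zeroed buffer */
  }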
diff --git a/server/protocol.c b/server/protocol.c
index 54d8adb..4d67392 100644
--- a/server/protocol.c
+++ b/server/protocol.c
@@ -658,10 +658,10 @@ protocol_recv_request_send_reply (struct connection *conn)
/* Allocate the data buffer used for either read or write requests. */
if (cmd == NBD_CMD_READ || cmd == NBD_CMD_WRITE) {
- buf = malloc (count);
+ buf = calloc (1, count);
Definitely a safety fix for READ, but isn't this a speed penalty for
WRITE? There is a cost to zeroing the buffer (especially for a 32M
request, where the zeroing may be large enough to have performance
impacts due to cache pressure), and since we KNOW that we are going to
immediately overwrite the buffer with incoming data before ever calling
into the plugin's .pwrite, it seems like we should limit the calloc or
memset to just the READ path (a sketch follows the diff below). I don't
know whether the added time spent in calloc() is significant, or
whether it is in the noise compared to other overheads such as the
socket traffic, but it may be worth benchmarking (I know you have your
fio patches that might help us answer that question).
if (buf == NULL) {
out_of_memory:
- perror ("malloc");
+ perror ("calloc");
error = ENOMEM;
if (cmd == NBD_CMD_WRITE &&
skip_over_write_buffer (conn->sockin, count) < 0)
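To make that concrete, a sketch of limiting the clear to the READ path
(a hypothetical helper, not the patch as posted):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  #define NBD_CMD_READ 0  /* from the NBD protocol; nbdkit's protocol.h
                             has the real definition */

  /* Hypothetical: allocate the request buffer, zeroing it only for
     READ, where stale heap contents could otherwise reach the client.
     A WRITE overwrites the whole buffer from the socket before the
     plugin's .pwrite ever sees it. */
  static void *
  alloc_request_buffer (uint16_t cmd, uint32_t count)
  {
    void *buf = malloc (count);
    if (buf == NULL)
      return NULL;
    if (cmd == NBD_CMD_READ)
      memset (buf, 0, count);
    return buf;
  }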
Is there any smarter algorithm we could use? Right now, we are doing a
malloc()/free() per READ/WRITE. And although libc malloc() should in
general try to be efficient on frequent malloc reuse, we can often
still be faster by reusing allocations rather than doing frequent
malloc()/free(). Would it be worth exploring a patch that creates a
per-thread buffer (maybe initially sized at 64k, and widened only as
the client sends larger requests), where we reuse the buffer between
successive calls rather than reallocating? We'd still want to zero the
buffer when we first allocate it for the thread or when we enlarge it,
but we can argue that after that point, a plugin with a .pread that
does a partial job will only leak data that has already been previously
read from or written to that plugin (and no longer risks leaking OUR
heap data). That way we avoid both the malloc() and the memset() in the
common case, for slightly faster behavior, and with no visible
difference in output for a plugin that fully fills the buffer in
.pread.
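A rough sketch of that reuse scheme (all names invented; __thread is
the GCC spelling of thread-local storage):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  static __thread char *t_buf;      /* per-thread request buffer */
  static __thread uint32_t t_size;  /* bytes currently allocated */

  /* Grow-only buffer: zero just the newly exposed tail, since the old
     prefix only holds data the plugin has already seen, so reusing it
     cannot leak fresh heap contents. */
  static void *
  get_request_buffer (uint32_t count)
  {
    if (count > t_size) {
      char *p = realloc (t_buf, count);
      if (p == NULL)
        return NULL;
      memset (p + t_size, 0, count - t_size);
      t_buf = p;
      t_size = count;
    }
    return t_buf;
  }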
[I still hope to someday improve the nbdkit threading to create helper
threads on demand in response to actual parallel client requests,
rather than its current behavior of spawning all parallel threads up
front when the client first connects - that becomes more important if
we are also allocating up to 64M in per-thread buffers, when we don't
need the allocation for a client that only ever sends serialized
requests]
And if we DO switch to per-thread reused buffers, it may STILL make
sense for nbdkit-plugin.h to offer a way for plugins to opt in to or
out of whether .pread is guaranteed a pre-zeroed buffer on all reads.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org