On 06/29/22 13:03, Richard W.M. Jones wrote:
You're not supposed to read or write NBD servers at a granularity less
than the advertised minimum block size. nbdcopy has ignored this
requirement, and this is usually fine because the NBD servers we care
about support 512-byte sector granularity, and never advertise sizes /
extents less granular than sectors (even if it's a bit suboptimal in a
few cases).
However, there is one new case where we do care: when writing to a
compressed qcow2 file, qemu advertises a minimum and preferred block
size of 64K, and it really means it. You cannot write blocks smaller
than this because of the way qcow2 compression is implemented.
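(As an aside, these advertised constraints are easy to check from a
client. A minimal libnbd sketch, assuming a server such as qemu-nbd
on a compressed qcow2 is listening at nbd://localhost; that URI is
just an illustration:)

/* Sketch: print the block size constraints a server advertises.
 * nbd_get_block_size returns 0 if the server does not advertise
 * the given constraint.
 */
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <libnbd.h>

int
main (void)
{
  struct nbd_handle *nbd = nbd_create ();

  if (nbd == NULL ||
      nbd_connect_uri (nbd, "nbd://localhost") == -1) {
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }

  printf ("minimum:   %" PRIi64 "\n",
          nbd_get_block_size (nbd, LIBNBD_SIZE_MINIMUM));
  printf ("preferred: %" PRIi64 "\n",
          nbd_get_block_size (nbd, LIBNBD_SIZE_PREFERRED));
  printf ("maximum:   %" PRIi64 "\n",
          nbd_get_block_size (nbd, LIBNBD_SIZE_MAXIMUM));

  nbd_close (nbd);
  exit (EXIT_SUCCESS);
}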
This commit attempts to do the least work possible to fix this.
The previous multi-thread-copying loop was driven by the extent map
received from the source. I have modified the loop so that it
iterates over request_size blocks. request_size is set from the
command line (--request-size) but will be adjusted upwards if either
the source or destination preferred block size is larger. So this
will always copy blocks which are at least the preferred block size
(except for the very last block of the disk).
Does this mean that, if we have a large hole, we won't skip it on the
source / zero it on the destination in one operation (assuming that's
what we've been doing), but read through it dutifully in request_size
blocks?
If that's the case, I'm totally OK with that; it's just exactly the kind
of "sacrifice" that I felt would be necessary here!
While copying these blocks we consult the source extent map. If it
contains only zero regions covering the whole block (only_zeroes
function) then we can skip straight to zeroing the target
(fill_dst_range_with_zeroes), else we do read + write as before.
OK, so we still subdivide the request_size-sized block. I'm curious (and
will later look) whether we'll rely on a previously fetched extent map
(fetched for the whole block device), or if we get the extent-set that
covers the individual request. In the former (more efficient) case,
we'll effectively have to build an intersection between the request byte
interval and the intervals in the extent map.
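For instance, roughly like this (a sketch using a hypothetical extent
record, not the patch's actual types; a real implementation would keep
a cursor into the list rather than rescanning from the start each
time):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical extent record for the sketch below. */
struct extent { uint64_t offset, length; bool zero; };

/* Return true iff [offset, offset+len) overlaps only zero extents.
 * 'exts' is sorted by offset and covers the whole work range.
 */
static bool
range_is_all_zero (const struct extent *exts, size_t n,
                   uint64_t offset, uint64_t len)
{
  for (size_t j = 0; j < n; ++j) {
    uint64_t start = exts[j].offset;
    uint64_t end = exts[j].offset + exts[j].length; /* exclusive */

    if (end <= offset)           /* entirely before the request */
      continue;
    if (start >= offset + len)   /* entirely after the request */
      break;
    if (!exts[j].zero)           /* overlapping data extent */
      return false;
  }
  return true;
}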
I only modified the multi-thread-copying loop, not the synchronous
loop. That should be updated in the same way later.
One side effect of this change is that it always makes larger requests,
Why?
Considering an NBD server where minimum=preferred=maximum block size,
what is the difference?
(In fact I'd have thought this change causes nbdcopy to use *smaller*
requests than before, exactly because now we'll advance in fixed size
blocks, not in extent-sized jumps!)
even for regions we know are sparse.
A sparse region is specifically where I'd expect this change to slow
down progress (use smaller requests than what would be permitted by a
big hole).
This is clear in the
copy-sparse.sh and copy-sparse-allocated.sh tests which were
previously driven by the 32K sparse map granularity of the source.
What does this mean? Was the test image (well, "test device")
intentionally populated with alternating 32KB "data" and "gap"
extents?
Without changing these tests, they would make 256K reads & writes
Why? What advertises a 256K preferred block size? nbdkit?
(and also read from areas of the disk even though we know they are
sparse).
But that's unavoidable, no? (I mean it's inevitable that we now
investigate (check for zeroes) the internals of these gaps, at request
boundaries, against the extent map; we can still avoid actually reading
those bytes.)
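E.g., with the synchronous libnbd API, the per-block decision could
look like this sketch (reusing the hypothetical range_is_all_zero
above; 'buf' must hold at least 'len' bytes; nbd_zero punches the
range on the destination without transferring any bytes):

/* Sketch: per-block copy decision.  For an all-zero block, zero the
 * destination without reading the source; otherwise read + write.
 */
static int
copy_block (struct nbd_handle *src, struct nbd_handle *dst,
            const struct extent *exts, size_t n,
            uint64_t offset, size_t len, char *buf)
{
  if (range_is_all_zero (exts, n, offset, len))
    return nbd_zero (dst, len, offset, 0);

  if (nbd_pread (src, buf, len, offset, 0) == -1)
    return -1;
  return nbd_pwrite (dst, buf, len, offset, 0);
}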
I adjusted these tests to use --request-size=32768 to force
the existing behaviour.
Note this doesn't attempt to limit the maximum block size when reading
or writing. That is for future work.
This is a partial fix for
https://bugzilla.redhat.com/2047660.
Further changes will be required in virt-v2v.
Link:
https://lists.gnu.org/archive/html/qemu-block/2022-01/threads.html#00729
Link:
https://bugzilla.redhat.com/show_bug.cgi?id=2047660
---
TODO | 4 +-
copy/copy-sparse-allocated.sh | 4 +-
copy/copy-sparse.sh | 7 +-
copy/main.c | 7 ++
copy/multi-thread-copying.c | 143 +++++++++++++++++++++++++---------
5 files changed, 119 insertions(+), 46 deletions(-)
I need to understand the design better here, before I can look at the code.
Thanks
Laszlo
diff --git a/TODO b/TODO
index 7c9c15e245..bc38d7045e 100644
--- a/TODO
+++ b/TODO
@@ -28,7 +28,9 @@ Performance: Chart it over various buffer sizes and threads, as that
Examine other fuzzers:
https://gitlab.com/akihe/radamsa
nbdcopy:
- - Minimum/preferred/maximum block size.
+ - Enforce maximum block size.
+ - Synchronous loop should be adjusted to take into account
+ the NBD preferred block size, as was done for the multi-thread loop.
- Benchmark.
- Better page cache usage, see nbdkit-file-plugin options
fadvise=sequential cache=none.
diff --git a/copy/copy-sparse-allocated.sh b/copy/copy-sparse-allocated.sh
index 203c3b9850..465e347067 100755
--- a/copy/copy-sparse-allocated.sh
+++ b/copy/copy-sparse-allocated.sh
@@ -17,8 +17,6 @@
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
# Adapted from copy-sparse.sh.
-#
-# This test depends on the nbdkit default sparse block size (32K).
. ../tests/functions.sh
@@ -33,7 +31,7 @@ requires nbdkit eval --version
out=copy-sparse-allocated.out
cleanup_fn rm -f $out
-$VG nbdcopy --allocated -- \
+$VG nbdcopy --allocated --request-size=32768 -- \
[ nbdkit --exit-with-parent data data='
1
@1073741823 1
diff --git a/copy/copy-sparse.sh b/copy/copy-sparse.sh
index 1a6da868e9..7912a2120a 100755
--- a/copy/copy-sparse.sh
+++ b/copy/copy-sparse.sh
@@ -16,8 +16,6 @@
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-# This test depends on the nbdkit default sparse block size (32K).
-
. ../tests/functions.sh
set -e
@@ -34,8 +32,9 @@ cleanup_fn rm -f $out
# Copy from a sparse data disk to an nbdkit-eval-plugin instance which
# is logging everything. This allows us to see exactly what nbdcopy
# is writing, to ensure it is writing and zeroing the target as
-# expected.
-$VG nbdcopy -S 0 -- \
+# expected. Force request size to match nbdkit default sparse
+# allocator block size (32K).
+$VG nbdcopy -S 0 --request-size=32768 -- \
[ nbdkit --exit-with-parent data data='
1
@1073741823 1
diff --git a/copy/main.c b/copy/main.c
index 9b551de664..cbfe5b5436 100644
--- a/copy/main.c
+++ b/copy/main.c
@@ -40,6 +40,7 @@
#include "ispowerof2.h"
#include "human-size.h"
+#include "minmax.h"
#include "version.h"
#include "nbdcopy.h"
@@ -379,6 +380,12 @@ main (int argc, char *argv[])
if (threads < connections)
connections = threads;
+ /* request_size must be at least as large as the preferred block
+ * size of the source and destination.
+ */
+ request_size = MAX (request_size, src->preferred);
+ request_size = MAX (request_size, dst->preferred);
+
/* Adapt queue to size to request size if needed. */
if (request_size > queue_size)
queue_size = request_size;
diff --git a/copy/multi-thread-copying.c b/copy/multi-thread-copying.c
index 06cdb8eef6..7194f4fafd 100644
--- a/copy/multi-thread-copying.c
+++ b/copy/multi-thread-copying.c
@@ -166,6 +166,58 @@ decrease_queue_size (struct worker *worker, size_t len)
worker->queue_size -= len;
}
+/* Using the extents map 'exts', check if the region
+ * [offset..offset+len-1] contains only zero extents.
+ *
+ * The invariant for '*i' is always an extent which starts before or
+ * equal to the current offset.
+ */
+static bool
+only_zeroes (const extent_list exts, size_t *i,
+ uint64_t offset, unsigned len)
+{
+ size_t j;
+
+ /* Invariant. */
+ assert (0 <= *i && *i < exts.len);
+ assert (exts.ptr[*i].offset <= offset);
+
+ /* Update the invariant. Search for the last possible extent in the
+ * list which is <= offset.
+ */
+ for (j = *i + 1; j < exts.len; ++j) {
+ if (exts.ptr[j].offset <= offset)
+ *i = j;
+ else
+ break;
+ }
+
+ /* Check invariant again. */
+ assert (0 <= *i && *i < exts.len);
+ assert (exts.ptr[*i].offset <= offset);
+
+ /* Search forward, look for any non-zero extents covering the region. */
+ for (j = *i; j < exts.len; ++j) {
+ uint64_t start, end;
+
+ /* [start..end) of the current extent; end is exclusive. */
+ start = exts.ptr[j].offset;
+ end = exts.ptr[j].offset + exts.ptr[j].length;
+
+ if (end <= offset)
+ continue;
+
+ if (start >= offset + len)
+ break;
+
+ /* Non-zero extent covering this region => test failed. */
+ if (!exts.ptr[j].zero)
+ return false;
+ }
+
+ return true;
+}
+
/* There are 'threads' worker threads, each copying work ranges from
* src to dst until there are no more work ranges.
*/
@@ -177,7 +229,10 @@ worker_thread (void *wp)
extent_list exts = empty_vector;
while (get_next_offset (&offset, &count)) {
+ struct command *command;
size_t i;
+ bool is_zeroing = false;
+ uint64_t zeroing_start = 0;
assert (0 < count && count <= THREAD_WORK_SIZE);
if (extents)
@@ -185,52 +240,64 @@ worker_thread (void *wp)
else
default_get_extents (src, w->index, offset, count, &exts);
- for (i = 0; i < exts.len; ++i) {
- struct command *command;
- size_t len;
+ i = 0; // index into extents array
+ while (count) {
+ const size_t len = MIN (count, request_size);
- if (exts.ptr[i].zero) {
+ if (only_zeroes (exts, &i, offset, len)) {
/* The source is zero so we can proceed directly to skipping,
- * fast zeroing, or writing zeroes at the destination.
+ * fast zeroing, or writing zeroes at the destination. Defer
+ * zeroing so we can send it as a single large command.
*/
- command = create_command (exts.ptr[i].offset, exts.ptr[i].length,
- true, w);
- fill_dst_range_with_zeroes (command);
+ if (!is_zeroing) {
+ is_zeroing = true;
+ zeroing_start = offset;
+ }
}
-
else /* data */ {
- /* As the extent might be larger than permitted for a single
- * command, we may have to split this into multiple read
- * requests.
- */
- while (exts.ptr[i].length > 0) {
- len = exts.ptr[i].length;
- if (len > request_size)
- len = request_size;
-
- command = create_command (exts.ptr[i].offset, len,
- false, w);
-
- wait_for_request_slots (w);
-
- /* NOTE: Must increase the queue size after waiting. */
- increase_queue_size (w, len);
-
- /* Begin the asynch read operation. */
- src->ops->asynch_read (src, command,
- (nbd_completion_callback) {
- .callback = finished_read,
- .user_data = command,
- });
-
- exts.ptr[i].offset += len;
- exts.ptr[i].length -= len;
+ /* If we were in the middle of deferred zeroing, do it now. */
+ if (is_zeroing) {
+ /* Note that offset-zeroing_start can never exceed
+ * THREAD_WORK_SIZE, so there is no danger of overflowing
+ * size_t.
+ */
+ command = create_command (zeroing_start, offset-zeroing_start,
+ true, w);
+ fill_dst_range_with_zeroes (command);
+ is_zeroing = false;
}
+
+ /* Issue the asynchronous read command. */
+ command = create_command (offset, len, false, w);
+
+ wait_for_request_slots (w);
+
+ /* NOTE: Must increase the queue size after waiting. */
+ increase_queue_size (w, len);
+
+ /* Begin the asynch read operation. */
+ src->ops->asynch_read (src, command,
+ (nbd_completion_callback) {
+ .callback = finished_read,
+ .user_data = command,
+ });
}
- offset += count;
- count = 0;
- } /* for extents */
+ offset += len;
+ count -= len;
+ } /* while (count) */
+
+ /* If we were in the middle of deferred zeroing, do it now. */
+ if (is_zeroing) {
+ /* Note that offset-zeroing_start can never exceed
+ * THREAD_WORK_SIZE, so there is no danger of overflowing
+ * size_t.
+ */
+ command = create_command (zeroing_start, offset - zeroing_start,
+ true, w);
+ fill_dst_range_with_zeroes (command);
+ is_zeroing = false;
+ }
}
/* Wait for in flight NBD requests to finish. */