Because the test previously used error rates of 50%, it could
sometimes "fail to fail". This is noticeable if you run the test
repeatedly:
$ while make -C copy check TESTS=copy-nbd-error.sh >& /tmp/log; do echo -n .; done
This now happens more often because the new multi-threaded loop makes
larger requests, resulting in fewer calls to the error filter and thus
a greater chance that a series of 50% coin tosses will all come up
heads during the test.
Fix this by making the test non-stochastic.
Fixes: commit 8d444b41d09a700c7ee6f9182a649f3f2d325abb
---
copy/copy-nbd-error.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/copy/copy-nbd-error.sh b/copy/copy-nbd-error.sh
index 0088807f54..01524a890c 100755
--- a/copy/copy-nbd-error.sh
+++ b/copy/copy-nbd-error.sh
@@ -40,7 +40,7 @@ $VG nbdcopy -- [ nbdkit --exit-with-parent -v --filter=error pattern 5M \
 # Failure to read should be fatal
 echo "Testing read failures on non-sparse source"
 $VG nbdcopy -- [ nbdkit --exit-with-parent -v --filter=error pattern 5M \
- error-pread-rate=0.5 ] null: && fail=1
+ error-pread-rate=1 ] null: && fail=1
 
 # However, reliable block status on a sparse image can avoid the need to read
 echo "Testing read failures on sparse source"
@@ -51,7 +51,7 @@ $VG nbdcopy -- [ nbdkit --exit-with-parent -v --filter=error null 5M \
 echo "Testing write data failures on arbitrary destination"
 $VG nbdcopy -- [ nbdkit --exit-with-parent -v pattern 5M ] \
 [ nbdkit --exit-with-parent -v --filter=error --filter=noextents \
- memory 5M error-pwrite-rate=0.5 ] && fail=1
+ memory 5M error-pwrite-rate=1 ] && fail=1
 
 # However, writing zeroes can bypass the need for normal writes
 echo "Testing write data failures from sparse source"
--
2.37.0.rc2