On 12/28/18 12:45 PM, Richard W.M. Jones wrote:
The same as qemu's copy-on-read flag, this caches read requests.
---
 filters/cache/nbdkit-cache-filter.pod | 11 +++++
 filters/cache/cache.c                 | 37 +++++++++++++--
 tests/Makefile.am                     |  4 +-
 tests/test-cache-on-read.sh           | 66 +++++++++++++++++++++++++++
 4 files changed, 114 insertions(+), 4 deletions(-)
Makes sense.
+=item B<cache-on-read=true>
+
+Cache read requests as well as write requests. Any time a block is
+read from the plugin, it is saved in the cache (if there is sufficient
+space) so the same data can be served more quickly later.
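(For anyone trying this out, a minimal invocation would look something
like

  nbdkit --filter=cache file disk.img cache-on-read=true

where the file plugin and disk.img are just placeholders; every block
the client reads is then also populated into the cache, subject to the
space limit.)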
Is it worth mentioning that this is fine for a client that is expected
to be the only writer to a given server, but that it can result in
stale data if the data behind the server can be modified via other
means?  (In particular, since we don't implement
NBD_FLAG_CAN_MULTI_CONN, we already admit that two clients trying to
write in parallel are not guaranteed to see consistent read results
after a flush; caching read data only makes that more apparent.)
+++ b/tests/test-cache-on-read.sh
What you have here looks okay, but I have a possible idea for an
additional test: use the delay filter to prove that repeated reads of a
region of the disk only suffer a one-time read penalty, rather than a
penalty per read.
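Roughly along these lines (untested; the start_nbdkit/cleanup_fn
helpers and the exact delay filter spelling are assumptions based on
the other scripts in tests/, so treat it as a sketch rather than a
drop-in test):

  # Sketch only: helper names and delay filter parameters are assumed
  # from memory, not verified against the tree.
  source ./functions.sh
  set -e

  sock=$(mktemp -u)
  files="cache-on-read-delay.img cache-on-read-delay.pid $sock"
  rm -f $files
  cleanup_fn rm -f $files

  truncate -s 1M cache-on-read-delay.img

  # Stack the cache filter on top of the delay filter, so that only
  # reads which actually reach the plugin pay the 2 second rdelay.
  start_nbdkit -P cache-on-read-delay.pid -U $sock \
      --filter=cache --filter=delay \
      file cache-on-read-delay.img cache-on-read=true rdelay=2

  # First read of the region misses the cache and is delayed.
  qemu-io -f raw -c 'r 0 64k' "nbd+unix://?socket=$sock"

  # Second read of the same region should be served from the cache and
  # finish well inside the 2 second delay.
  t0=$SECONDS
  qemu-io -f raw -c 'r 0 64k' "nbd+unix://?socket=$sock"
  test $((SECONDS - t0)) -lt 2

That would give a regression test that on-read caching actually
short-circuits the second trip to the plugin, not just that the data
round-trips correctly.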
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization:  qemu.org | libvirt.org