On Mon, Dec 31, 2018 at 04:59:09PM -0600, Eric Blake wrote:
> On 12/28/18 12:45 PM, Richard W.M. Jones wrote:
> > The same as qemu's copyonread flag, this caches read requests.
> > ---
> >  filters/cache/nbdkit-cache-filter.pod | 11 +++++
> >  filters/cache/cache.c                 | 37 +++++++++++++--
> >  tests/Makefile.am                     |  4 +-
> >  tests/test-cache-on-read.sh           | 66 +++++++++++++++++++++++++++
> >  4 files changed, 114 insertions(+), 4 deletions(-)
>
> Makes sense.
>
> > +=item B<cache-on-read=true>
> > +
> > +Cache read requests as well as write requests.  Any time a block is
> > +read from the plugin, it is saved in the cache (if there is sufficient
> > +space) so the same data can be served more quickly later.
>
> Is it worth mentioning that this is fine for a client that is expected
> to be the only writing client of a given server; but that it can result
> in stale data if the server can modify the data via other means?

Yes, this is worth mentioning and I'll add it to the documentation.
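
For anyone following the thread, enabling the flag looks something like
this (a sketch only; the disk name is a placeholder and the option
spelling is the one from the patch):

  # Serve disk.img through the cache filter and also cache reads.
  # Only safe while this nbdkit instance is the sole writer of the
  # disk; reads cached here can go stale if anything else modifies it.
  nbdkit --filter=cache file disk.img cache-on-read=true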

> (In particular, since we don't implement NBD_FLAG_CAN_MULTI_CONN, we
> already admit that two clients trying to write in parallel are not
> guaranteed to see consistent read results after a flush - while
> caching read data only makes that more apparent)

I'm not sure I understand this scenario. Doesn't the cache.c lock
cause the flush to be serialized across all clients?

> > +++ b/tests/test-cache-on-read.sh
>
> What you have here looks okay, but I have a possible idea for an
> additional test: use the delay filter to prove that repeated reads of a
> region of the disk only suffer a one-time read penalty, rather than a
> penalty per read.

I'll add an “XXX” comment to the test so we can improve it in future.
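
For the record, the kind of test Eric suggests might look roughly like
this (an untested sketch, not the test in the patch; it assumes qemu-io
is available, a pre-existing disk.img, and the delay filter's rdelay
parameter, with arbitrary thresholds):

  # The delay filter sits between the cache and the plugin, so every
  # read that actually reaches the plugin costs about 2 seconds.  A
  # repeat read of the same block should only be fast if it was
  # served from the cache.
  nbdkit -U - --filter=cache --filter=delay \
         file disk.img cache-on-read=true rdelay=2 \
         --run '
      t0=$(date +%s)
      qemu-io -r -f raw -c "r 0 512" "$nbd"   # first read, goes to the plugin
      t1=$(date +%s)
      qemu-io -r -f raw -c "r 0 512" "$nbd"   # same block, should hit the cache
      t2=$(date +%s)
      test $(( t1 - t0 )) -ge 2 && test $(( t2 - t1 )) -lt 2
  '
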
Thanks,
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html