On 2/24/21 11:33 AM, Eric Blake wrote:
Implement a TODO item of emulating multi-connection consistency via
multiple plugin flush calls, so that a client can assume a flush on a
single connection is good enough. This also gives us fine-grained
control over whether to advertise the bit, including some setups that
are unsafe but may be useful in timing tests.
Testing is interesting: I used the sh plugin to implement a server
that intentionally keeps a per-connection cache.
Note that this filter assumes that multiple connections will still
see the same data (apart from caching effects); behavior is not
guaranteed when mixing it with more exotic plugins, like info, that
violate that premise.
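As a rough illustration of the failure mode being tested, here is a toy
model in plain C (not the sh plugin itself; every name below is
invented for illustration) of a server that keeps a per-connection
write cache, so writes on one connection stay invisible to reads on
another until the writer flushes:

```c
#include <string.h>

/* Toy model (invented, not nbdkit code): writes land in a
 * connection's private cache and only become visible to other
 * connections once that connection flushes to the shared store. */
#define DISK_SIZE 4

static char shared[DISK_SIZE];   /* data every connection can see */

struct conn {
  char cache[DISK_SIZE];         /* private, possibly unflushed copy */
  int dirty;                     /* nonzero if cache may differ from shared */
};

static void conn_open (struct conn *c)
{
  memcpy (c->cache, shared, DISK_SIZE);
  c->dirty = 0;
}

static void conn_write (struct conn *c, size_t off, char v)
{
  c->cache[off] = v;             /* buffered, not yet published */
  c->dirty = 1;
}

static char conn_read (const struct conn *c, size_t off)
{
  /* A clean connection reads through to shared storage. */
  return c->dirty ? c->cache[off] : shared[off];
}

static void conn_flush (struct conn *c)
{
  if (c->dirty) {
    memcpy (shared, c->cache, DISK_SIZE);  /* publish the writes */
    c->dirty = 0;
  }
}
```

With two open connections, a write on one is invisible to a read on
the other until the writer flushes — exactly the inconsistency that a
server advertising multi-conn must not exhibit.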
---
+static int
+multi_conn_flush (struct nbdkit_next_ops *next_ops, void *nxdata,
+                  void *handle, uint32_t flags, int *err)
+{
+  struct handle *h = handle, *h2;
+  size_t i;
+
+  if (h->mode == EMULATE) {
+    /* Optimize for the common case of a single connection: flush all
+     * writes on other connections, then flush both read and write on
+     * the current connection, then finally flush all other
+     * connections to avoid reads seeing stale data, skipping the
+     * flushes that make no difference according to dirty tracking.
+     */
+    bool updated = h->dirty & WRITE;
+
+    ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+    for (i = 0; i < conns.size; i++) {
+      h2 = conns.ptr[i];
+      if (h == h2)
+        continue;
+      if (updated || (h2->dirty & WRITE)) {
+        if (h2->next_ops->flush (h2->nxdata, flags, err) == -1)
Bummer. This isn't working quite like I hoped, because backend_flush()
(which is what next_ops->flush points to) starts out with GET_CONN,
which picks up the context of the current connection, not the other
connection that I tried to save in my list. I may end up needing to
undo some of the refactoring we did in 91023f269d. But then again, we
eventually want to let filters open up a single background connection
into the plugin regardless of how many frontend connections a client
makes into the filter, so solving the problem for multi-conn will get
us one step closer to solving it for background threading.
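Setting the GET_CONN wrinkle aside, the three-phase scheme the patch
comment describes can be sketched standalone in plain C (all names
below are invented; no nbdkit types or APIs are used):

```c
#include <stddef.h>

/* Standalone sketch of the emulated multi-conn flush: flush pending
 * writes on other connections, flush the current connection, then
 * flush the other connections again so their reads cannot return
 * stale data.  Dirty tracking lets us skip no-op flushes. */
enum { READ = 1, WRITE = 2 };

struct handle {
  unsigned dirty;   /* bitmask of READ/WRITE activity since last flush */
  int flushes;      /* how many flushes reached the plugin */
};

static void plugin_flush (struct handle *h)
{
  h->flushes++;
  h->dirty = 0;
}

static void
emulate_flush (struct handle **conns, size_t n, struct handle *h)
{
  size_t i;
  unsigned updated = h->dirty & WRITE;  /* sampled before we flush */

  /* Phase 1: publish pending writes on the other connections. */
  for (i = 0; i < n; i++)
    if (conns[i] != h && (conns[i]->dirty & WRITE))
      plugin_flush (conns[i]);

  /* Phase 2: flush the current connection (reads and writes). */
  plugin_flush (h);

  /* Phase 3: if we just published new writes, flush the other
   * connections that have read, so they cannot serve stale data. */
  if (updated)
    for (i = 0; i < n; i++)
      if (conns[i] != h && (conns[i]->dirty & READ))
        plugin_flush (conns[i]);
}
```

An idle connection is never flushed, and a connection whose writes
were already pushed in phase 1 is not flushed again in phase 3, since
phase 1 cleared its dirty bits.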
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org