On Tue, Feb 11, 2020 at 11:46:06AM -0600, Eric Blake wrote:
On 2/11/20 11:15 AM, Richard W.M. Jones wrote:
>Since commit 86fdb48c6a5362d66865493d9d2172166f99722e we have stored
>the connection object in thread-local storage.
>
>In this very large, but mostly mechanical change we stop passing the
>connection pointer around everywhere, and instead use the value stored
>in thread-local storage.
>
>This assumes a 1-1 mapping between the connection and the current
>thread which is true in *most* places. Occasionally we still have the
>explicit connection pointer, especially just before we launch a thread
>or call threadlocal_set_conn.
>---
> server/internal.h                    | 191 +++++++++----------
> server/backend.c                     | 160 +++++++++-------
> server/connections.c                 |  68 ++++---
> server/crypto.c                      |  14 +-
> server/filters.c                     | 270 +++++++++++++--------------
> server/locks.c                       |  12 +-
> server/plugins.c                     |  64 +++----
> server/protocol-handshake-newstyle.c | 172 +++++++++--------
> server/protocol-handshake-oldstyle.c |   7 +-
> server/protocol-handshake.c          |  40 ++--
> server/protocol.c                    | 127 ++++++-------
> server/public.c                      |   2 +-
> server/test-public.c                 |   2 +-
> 13 files changed, 574 insertions(+), 555 deletions(-)
Makes sense to me (and not too hard to rebase my NBD_INFO_INIT_STATE
stuff on top of it). Are we sure that there is not going to be a
speed penalty (from frequent access to the thread-local storage,
compared to previous access through a parameter stored in a
register)?
In v2 the new GET_CONN macro still makes a non-inline call to
threadlocal_get_conn ().
I changed threadlocal_get_conn so it is inlined in the header file
internal.h.  This is quite an ugly change because you have to pull a
few private structures into that header file, and in any case it still
makes a non-inline call to pthread_getspecific.
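To make the shape concrete, the accessor under discussion looks roughly
like this (a rough sketch only, not the actual nbdkit code; the struct
layout and the GET_CONN definition here are invented for illustration):

  #include <pthread.h>
  #include <stddef.h>

  struct connection;                    /* opaque here */

  struct threadlocal {                  /* assumed layout */
    struct connection *conn;
    /* ... other per-thread state ... */
  };

  static pthread_key_t threadlocal_key;

  static inline struct connection *
  threadlocal_get_conn (void)
  {
    struct threadlocal *tls = pthread_getspecific (threadlocal_key);

    return tls ? tls->conn : NULL;      /* the call above is still out of line */
  }

  /* Hypothetical convenience macro in the spirit of GET_CONN. */
  #define GET_CONN (threadlocal_get_conn ())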
So I went back to v2 and instead enabled LTO, but sadly something
(probably libtool) breaks LTO.
Anyway, TLS data is basically a dereference through a register:
https://sourceware.org/git/?p=glibc.git;a=blob;f=nptl/pthread_getspecific...
(note that THREAD_SELF is a macro which expands to a constant offset in
the %fs segment), so it should eventually be possible to make this
fast.
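For comparison, compiler-supported TLS avoids the out-of-line call
entirely: with the GCC/Clang __thread extension the load compiles to a
single %fs-relative mov on x86-64.  (Sketch only; this is not what the
patch does today.)

  struct connection;

  static __thread struct connection *current_conn;

  static inline struct connection *
  get_conn_fast (void)
  {
    return current_conn;   /* typically: mov %fs:offset, %rax */
  }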
A few comments:
>+++ b/server/filters.c
>@@ -49,13 +49,13 @@ struct backend_filter {
> struct nbdkit_filter filter;
> };
>-/* Literally a backend, a connection pointer, and the filter's handle.
>+/* Literally a backend and the filter's handle.
>+ *
> * This is the implementation of our handle in .open, and serves as
> * a stable ‘void *nxdata’ in the filter API.
> */
>-struct b_conn {
>+struct b_h {
> struct backend *b;
>- struct connection *conn;
> void *handle;
> };
>@@ -186,22 +186,22 @@ filter_config_complete (struct backend *b)
> static int
> next_preconnect (void *nxdata, int readonly)
> {
>- struct b_conn *b_conn = nxdata;
>- return b_conn->b->preconnect (b_conn->b, b_conn->conn, readonly);
>+ struct b_h *b_h = nxdata;
>+ return b_h->b->preconnect (b_h->b, readonly);
> }
None of the next_*() wrappers use b_h->handle; they all stick to
b_h->b.  I don't think that matters too much, other than...
>@@ -267,101 +266,101 @@ filter_close (struct backend *b, struct connection *conn, void *handle)
> /* The next_functions structure contains pointers to backend
> * functions. However because these functions are all expecting a
>- * backend and a connection, we cannot call them directly, but must
>+ * backend and a handle, we cannot call them directly, but must
> * write some next_* functions that unpack the two parameters from a
>- * single ‘void *nxdata’ struct pointer (‘b_conn’).
>+ * single ‘void *nxdata’ struct pointer (‘b_h’).
...this comment is slightly off. In fact, if ALL we use is b_h->b,
we could probably populate next_ops with backend_* functions rather
than next_* wrapper functions:
This was in fact my original motivation for this patch series, except
I was expecting we'd use struct backend * instead of the function.
As you say in the snipped bit, this should be a separate patch once we
get this one right.
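To sketch the follow-up idea (illustrative only; apart from the names
taken from the diff above, the struct members and backend_preconnect
are assumptions): once the connection parameter is gone, the next_*
wrappers do nothing but unpack b_h->b, so next_ops could point at
backend_* functions (or nxdata could simply be the next struct
backend *) and the wrapper layer disappears.

  struct backend {
    int (*preconnect) (struct backend *b, int readonly);
    /* ... */
  };

  struct b_h {
    struct backend *b;
    void *handle;
  };

  /* Current shape: a wrapper that only unpacks nxdata. */
  static int
  next_preconnect (void *nxdata, int readonly)
  {
    struct b_h *b_h = nxdata;
    return b_h->b->preconnect (b_h->b, readonly);
  }

  /* Possible follow-up shape: call through the backend directly,
   * with nxdata being the struct backend * itself.
   */
  static int
  backend_preconnect (struct backend *b, int readonly)
  {
    return b->preconnect (b, readonly);
  }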
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v