On 3/4/20 1:44 PM, Richard W.M. Jones wrote:
---
docs/nbdkit-service.pod | 1 +
filters/exitlast/nbdkit-exitlast-filter.pod | 1 +
filters/ip/nbdkit-ip-filter.pod | 1 +
filters/limit/nbdkit-limit-filter.pod | 61 +++++++++
filters/rate/nbdkit-rate-filter.pod | 1 +
configure.ac | 2 +
filters/limit/Makefile.am | 66 ++++++++++
tests/Makefile.am | 3 +
filters/limit/limit.c | 129 ++++++++++++++++++++
tests/test-limit.sh | 81 ++++++++++++
TODO | 2 -
11 files changed, 346 insertions(+), 2 deletions(-)
+++ b/filters/limit/nbdkit-limit-filter.pod
@@ -0,0 +1,61 @@
+=head1 NAME
+
+nbdkit-limit-filter - limit number of clients that can connect
Is this the number of clients that can connect in parallel, or the
number of clients that can connect in series?
+
+=head1 SYNOPSIS
+
+ nbdkit --filter=limit PLUGIN [limit=N]
+
+=head1 DESCRIPTION
+
+C<nbdkit-limit-filter> is an nbdkit filter that limits the number of
+clients which can connect concurrently. If more than C<limit=N>
+(default: 1) clients try to connect at the same time, then later
+clients are rejected.
+
+=head1 PARAMETERS
+
+=over 4
+
+=item B<limit=>N
+
+Limit the number of concurrent clients to C<N>. This parameter is
+optional. If not specified then the limit defaults to 1. You can
+also set this to 0 to make the number of clients unlimited (i.e.
+disable the filter).
+
To some extent, you can also disable parallel connections with
nbdkit-noparallel-filter set to serialize=connections (but in that
mode, the second client blocks waiting for its turn to connect, rather
than the server dropping it immediately). It's probably worth
cross-linking the two man pages to highlight this difference in behavior.
+=back
+
+=head1 SEE ALSO
+
+L<nbdkit(1)>,
+L<nbdkit-exitlast-filter(1)>,
+L<nbdkit-ip-filter(1)>,
+L<nbdkit-noparallel-filter(1)>,
+L<nbdkit-rate-filter(1)>,
+L<nbdkit-filter(3)>,
+L<nbdkit-plugin(3)>.
Hmm - you did link to noparallel here, but noparallel doesn't link back.
+++ b/filters/limit/limit.c
+
+/* Counts client connections. */
+static unsigned connections;
+static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+
+/* Client limit (0 = filter is disabled). */
+static unsigned limit = 1;
+
+static int
+limit_config (nbdkit_next_config *next, nbdkit_backend *nxdata,
+ const char *key, const char *value)
+{
+ if (strcmp (key, "limit") == 0) {
+ if (nbdkit_parse_unsigned ("limit", value, &limit) == -1)
+ return -1;
+ return 0;
+ }
+
+ return next (nxdata, key, value);
+}
+
+static void
+too_many_clients_error (void)
+{
+ nbdkit_error ("limit: too many clients connected, connection rejected");
+}
+
+/* We limit connections in the preconnect stage (in particular before
+ * any heavyweight NBD or TLS negotiation has been done). However,
+ * we count connections in the open/close calls since clients can
+ * drop out between preconnect and open.
+ */
Seems reasonable.
+static int
+limit_preconnect (nbdkit_next_preconnect *next, nbdkit_backend *nxdata,
+ int readonly)
+{
+ if (next (nxdata, readonly) == -1)
+ return -1;
+
+ ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+
+ if (limit > 0 && connections >= limit) {
+ too_many_clients_error ();
+ return -1;
+ }
+
+ return 0;
+}
+
+static void *
+limit_open (nbdkit_next_open *next, nbdkit_backend *nxdata,
+ int readonly)
+{
+ if (next (nxdata, readonly) == -1)
+ return NULL;
+
+ ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+
+  /* We have to check again because otherwise clients could bypass
+   * the limit by artificially slowing down the NBD negotiation.
+   */
+ if (limit > 0 && connections >= limit) {
+ too_many_clients_error ();
+ return NULL;
+ }
+
+ connections++;
+ return NBDKIT_HANDLE_NOT_NEEDED;
+}
+
+static void
+limit_close (void *handle)
+{
+ ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+ connections--;
+}
Overall looks good. As you say, we may want to enhance nbdkit with a
way to enforce that a client complete the handshake within a given
time limit rather than dragging its feet, but that is separate from
this filter.
ACK.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org