Extending nbdkit to support listing exports
by Richard W.M. Jones
Hi Eric, Nir.
$SUBJECT - it's complicated! Imagine a scenario where we have
extended the file plugin so you can point it at a directory and it
will use the client-supplied export name to select a file to serve.
Also we have extended the tar filter so that you can map tar file
components into export names. You may be able to see where this is
going ...
Now we point the file plugin at a directory of tar files, and layer
the tar filter on top, and then what happens? The client can only
supply a single export name, so how can it specify component X of tar
file Y?
      tar filter                     file plugin
      tar-exportname=on     --->     exportname=on
                                     directory=/my-files  --->  directory of tar files
It gets worse! At the moment filters and plugins can read the export
name sent by the client by calling the global function
nbdkit_export_name(). I'm not so worried about filters, but plugins
must be able to continue to use this global function. In the scenario
above both the tar filter and the file plugin would see the same
string (if we did nothing), and it would only make sense to one or the
other but never to both.
Listing exports is also complicated: Should we list only the nearest
filter's exports? Or the plugin's exports? Cartesian product of both
somehow?
That's the background, but how do we solve it?
----------------------------------------------
I'd like to say first of all that I believe export names are already a
dumpster fire of potential insecurity, and that we shouldn't have the
core server attempt to parse them. Solutions involving splitting the
export names into components (like tar-component/file-name) should not
be considered: they are too risky, and they run into arbitrary limits
(what escape character to use? what is the maximum length?). Also,
changing the NBD protocol is not under consideration.
I think we need to consider these aspects separately:
(a) How filters and plugins get the client exportname.
(b) How filters and plugins respond when the client issues NBD_OPT_LIST.
(c) How the server, filters and plugins respond to NBD_OPT_INFO.
(a) Client exportname
---------------------
The client sends the export name of the export it wants to access with
NBD_OPT_EXPORT_NAME or the newer NBD_OPT_GO. This is an opaque string
(not a filename, a pathname, etc). Currently filters and plugins can
fetch this using nbdkit_export_name(), but filters cannot modify the
export name before it reaches the plugin.
My proposal is that we extend the .open() callback with an extra
export_name parameter:
void *filter_open (int readonly, const char *export_name);
For plugins I have already proposed this for inclusion in API V3. We
already do this in nbdkit-sh-plugin. For backwards compatibility with
existing plugins we'd make nbdkit_export_name() return the export name
passed down by the last filter in the chain, and deprecate this
function in V3.
For filters the next_open field would take the export_name and this
would allow filters to modify the export name.
nbdkit-tar-filter would take an explicit tar-export-name=<new>
parameter to pass a different export name down to the underlying
plugin. If the tar filter was implementing its own export name
processing then this parameter would either be required or would
default to "". (If the tar filter was not implementing export name
functionality it would pass it down unchanged).
(b) Listing exports with NBD_OPT_LIST
-------------------------------------
I believe we need a new .list_exports() callback for filters and
plugins to handle this case. I'm not completely clear on the API, but
it could be something as simple as:
const char **list_exports (void);
returning a NULL-terminated list of strings.
Is it conceivable we will need additional fields in future?
Plugins which do not implement this would be assumed to return a
single export "" (or perhaps the -e parameter??)
Filters could ignore or replace exports by optionally implementing
this callback. I wouldn't recommend that filters modify export names
however.
(c) Handling NBD_OPT_INFO
-------------------------
I believe our current implementation of this option (which is
definitely not being robustly tested!) is something like this:
- NBD_OPT_INFO received from client (containing the export name)
- open a backend handle
- query backend for flags
- reply to client
- close backend handle
- process the next option
Assuming this is actually correct, we can probably make sure that it
passes the export name to backend_open, and everything should just
work. In fact this would happen as a side effect of fixing (a) above,
so I don't believe there's anything to do here except to add some
test coverage.
----------------------------------------------------------------------
Thoughts welcomed as always,
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
[PATCH nbdkit RFC 0/2] curl: Implement authorization scripts.
by Richard W.M. Jones
This is an RFC only, at the very least it lacks tests.
This implements a rather complex new feature in nbdkit-curl-plugin
allowing you to specify an external shell script that can be used to
fetch an authorization token for services which require a token or
cookie for access, especially if that token must be renewed
periodically. The motivation can be seen in the changes to the docs
in patch 2.
Rich.
nbdkit / exposing disk images in containers
by Richard W.M. Jones
KubeVirt is a custom resource (a kind of plugin) for Kubernetes which
adds support for running virtual machines. As part of this they have
the same problems as everyone else of how to import large disk images
into the system for pets, templates, etc.
As part of the project they've defined a format for embedding a disk
image into a container (unclear why? perhaps so these can be
distributed using the existing container registry systems?):
https://github.com/kubevirt/containerized-data-importer/blob/master/doc/i...
An example of such a disk-in-a-container is here:
https://hub.docker.com/r/kubevirt/fedora-cloud-container-disk-demo
We've been asked if we can help with tools to efficiently import these
disk images, and I have suggested a few things with nbdkit and have
written a couple of filters (tar, gzip) to support this.
This email is my thoughts on further development work in this area.
----------------------------------------------------------------------
(1) Accessing the disk image directly from the Docker Hub.
When you get down to it, what this actually is:
* There is a disk image in qcow2 format.
* It is embedded as "./disk/downloaded" in a gzip-compressed tar
file. (This is a container with a single layer).
* This tarball is uploaded to (in this case) the Docker Hub and can
be accessed over a URL. The URL can be constructed using a few
json requests.
* The URL is served by nginx and this supports HTTP range requests.
I encapsulated all of this in the attached script. This is an
existence proof that it is possible to access the image with nbdkit.
One problem is that the auth token only lasts for a limited time
(seems to be 5 minutes in my test), and it doesn't automatically renew
as you download the layer, so if the download takes longer than 5
minutes you'll suddenly get unrecoverable authorization failures.
There seem to be two possible ways to solve this:
(a) Write a new nbdkit-container-plugin which does the authorization
(essentially hiding most of the details in the attached script
from the user). It could deal with renewing the key as
required.
(b) Modify nbdkit-curl-plugin so the user could provide a script for
renewing authorization. This would expose the full gory details
to the end user, but on the other hand might be useful in other
situations that require authorization.
(2) nbdkit-tar-filter exportname and listing files.
This has already been covered by an email from Nir Soffer, so I'll
simply link to that:
https://lists.gnu.org/archive/html/qemu-discuss/2020-06/msg00058.html
It basically requires a fairly simple change to nbdkit-tar-filter to
map the tar filenames into export names, and a deeper change to nbdkit
core server to allow listing all export names. The end result would
be that an NBD client could query the list of files [ie exports] in
the tarball and choose one to download.
(3) gzip & tar require full downloads - why not “docker/podman save/export”?
Stepping back to get the bigger picture: Because the OCI standard uses
gzip for compression (https://stackoverflow.com/a/9213826), and
because the tar index is interspersed with the tar data, you always
need to download the whole container layer before you can access the
disk image inside. Currently nbdkit-gzip-filter hides this from the
end user, but it's still downloading the whole thing to a temporary
file. There's no way round that unless OCI can be persuaded to use a
better format.
But docker/podman already has a way to export container layers,
ie. the save and export commands. These also have the advantage that
the downloaded layers are cached between runs. So why aren't we
using that?
In this world, nbdkit-container-plugin would simply use docker/podman
save (or export?) to grab the container as a tar file, and we would
use the tar filter as above to expose the contents as an NBD endpoint
for further consumption. IOW:
nbdkit container docker.io/kubevirt/fedora-cloud-container-disk-demo \
--filter=tar tar-entry=./downloaded/disk
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
[RFC nbdkit PATCH] server: Allow --run with --vsock
by Eric Blake
Now that kernels support vsock loopback, we have no reason to forbid
--vsock with --run.
Signed-off-by: Eric Blake <eblake(a)redhat.com>
---
I'm hoping to commit something like this, but right now, the testsuite
is failing more times than it succeeds, with:
+ port=1082294412
+ nbdkit --vsock --port 1082294412 memory 1M --run 'sleep 1; nbdsh -u "$uri" -c "$check"'
...
libnbd: debug: nbdsh: nbd_connect_uri: enter: uri="nbd+vsock://1:1082294412"
libnbd: debug: nbdsh: nbd_connect_uri: event CmdConnectSockAddr: START -> CONNECT.START
libnbd: debug: nbdsh: nbd_connect_uri: poll start: events=4
libnbd: debug: nbdsh: nbd_connect_uri: poll end: r=1 revents=c
libnbd: debug: nbdsh: nbd_connect_uri: event NotifyWrite: CONNECT.START -> CONNECT.CONNECTING
libnbd: debug: nbdsh: nbd_connect_uri: transition: CONNECT.CONNECTING -> DEAD
libnbd: debug: nbdsh: nbd_connect_uri: leave: error="nbd_connect_uri: connect: Connection reset by peer"
but when I try the same from the command line in isolation:
$ ./nbdkit --vsock --port=12345 memory 1m --run 'nbdsh -u $uri -c "print(h.get_size())"'
1048576
I wonder if there is some sort of race where an existing vsock on a
different port is interfering (since the test opens up a background
server on one port before the one-liner attempt on a different port).
docs/nbdkit-captive.pod | 3 ++-
server/internal.h | 1 +
server/captive.c | 46 +++++++++++++++++++++--------------------
server/main.c | 3 +--
tests/test-vsock.sh | 21 +++++++++++++------
5 files changed, 43 insertions(+), 31 deletions(-)
diff --git a/docs/nbdkit-captive.pod b/docs/nbdkit-captive.pod
index 09628367..7f942f95 100644
--- a/docs/nbdkit-captive.pod
+++ b/docs/nbdkit-captive.pod
@@ -54,7 +54,8 @@ option to set the preferred export name, this is included in the URI.
An older URL that refers to the nbdkit port or socket in a manner
specific to certain tools. This form does not include an export name,
-even if B<-e> was used.
+even if B<-e> was used. This variable is empty when using vsock
+loopback (when the B<--vsock> option is used, only C<$uri> is valid).
Note there is some magic here, since qemu and guestfish URLs have a
different format, so nbdkit tries to guess which you are running. If
diff --git a/server/internal.h b/server/internal.h
index 68c53366..1dd84ccb 100644
--- a/server/internal.h
+++ b/server/internal.h
@@ -129,6 +129,7 @@ extern bool tls_verify_peer;
extern char *unixsocket;
extern const char *user, *group;
extern bool verbose;
+extern bool vsock;
extern int saved_stdin;
extern int saved_stdout;
diff --git a/server/captive.c b/server/captive.c
index f8107604..a5b227c4 100644
--- a/server/captive.c
+++ b/server/captive.c
@@ -72,7 +72,7 @@ run_command (void)
/* Construct $uri. */
fprintf (fp, "uri=");
if (port) {
- fprintf (fp, "nbd://localhost:");
+ fprintf (fp, vsock ? "nbd+vsock://1:" : "nbd://localhost:");
shell_quote (port, fp);
if (strcmp (exportname, "") != 0) {
putc ('/', fp);
@@ -99,29 +99,31 @@ run_command (void)
* different syntax, so try to guess which one we need.
*/
fprintf (fp, "nbd=");
- if (strstr (run, "guestfish")) {
- if (port) {
- fprintf (fp, "nbd://localhost:");
- shell_quote (port, fp);
+ if (!vsock) {
+ if (strstr (run, "guestfish")) {
+ if (port) {
+ fprintf (fp, "nbd://localhost:");
+ shell_quote (port, fp);
+ }
+ else if (unixsocket) {
+ fprintf (fp, "nbd://\\?socket=");
+ shell_quote (unixsocket, fp);
+ }
+ else
+ abort ();
}
- else if (unixsocket) {
- fprintf (fp, "nbd://\\?socket=");
- shell_quote (unixsocket, fp);
+ else /* qemu */ {
+ if (port) {
+ fprintf (fp, "nbd:localhost:");
+ shell_quote (port, fp);
+ }
+ else if (unixsocket) {
+ fprintf (fp, "nbd:unix:");
+ shell_quote (unixsocket, fp);
+ }
+ else
+ abort ();
}
- else
- abort ();
- }
- else /* qemu */ {
- if (port) {
- fprintf (fp, "nbd:localhost:");
- shell_quote (port, fp);
- }
- else if (unixsocket) {
- fprintf (fp, "nbd:unix:");
- shell_quote (unixsocket, fp);
- }
- else
- abort ();
}
putc ('\n', fp);
diff --git a/server/main.c b/server/main.c
index c432f5bd..dfa81a85 100644
--- a/server/main.c
+++ b/server/main.c
@@ -545,8 +545,7 @@ main (int argc, char *argv[])
(listen_stdin && run) ||
(listen_stdin && dump_plugin) ||
(vsock && unixsocket) ||
- (vsock && listen_stdin) ||
- (vsock && run)) {
+ (vsock && listen_stdin)) {
fprintf (stderr,
"%s: --dump-plugin, -p, --run, -s, -U or --vsock options "
"cannot be used in this combination\n",
diff --git a/tests/test-vsock.sh b/tests/test-vsock.sh
index 54115e78..13c1c29d 100755
--- a/tests/test-vsock.sh
+++ b/tests/test-vsock.sh
@@ -57,16 +57,25 @@ files="vsock.pid"
rm -f $files
cleanup_fn rm -f $files
-# Run nbdkit.
-start_nbdkit -P vsock.pid --vsock --port $port memory 1M
-
-export port
-nbdsh -c - <<'EOF'
+# An nbdsh script for connecting to vsock
+export connect='
import os
# 1 = VMADDR_CID_LOCAL
h.connect_vsock (1, int (os.environ["port"]))
+'
+export check='
size = h.get_size ()
assert size == 1048576
-EOF
+'
+
+# Run nbdkit.
+start_nbdkit -P vsock.pid --vsock --port $port memory 1M
+
+export port
+nbdsh -c "$connect" -c "$check"
+
+# Repeat on a different port, testing interaction with --run
+port=$((port + 1))
+nbdkit --vsock --port $port memory 1M --run 'nbdsh -u "$uri" -c "$check"'
--
2.27.0
[PATCH nbdkit] New filter: gzip
by Richard W.M. Jones
Turn the existing nbdkit-gzip-plugin into a filter so it can be
applied on top of files or other sources:
nbdkit file --filter=gzip file.gz
nbdkit curl --filter=gzip https://example.com/disk.gz
Because the gzip format is not block-based and thus not seekable,
this filter caches the whole uncompressed file in a hidden temporary
file. This is required in order to implement .get_size. See this
link for a more detailed explanation:
https://stackoverflow.com/a/9213826
This commit deprecates nbdkit-gzip-plugin and suggests removal in
nbdkit 1.26.
---
filters/gzip/nbdkit-gzip-filter.pod | 85 +++++++
filters/tar/nbdkit-tar-filter.pod | 7 +-
plugins/gzip/nbdkit-gzip-plugin.pod | 9 +
configure.ac | 10 +-
filters/gzip/Makefile.am | 75 ++++++
tests/Makefile.am | 34 +--
filters/gzip/gzip.c | 347 ++++++++++++++++++++++++++++
tests/test-gzip.c | 4 +-
TODO | 2 -
9 files changed, 547 insertions(+), 26 deletions(-)
diff --git a/filters/gzip/nbdkit-gzip-filter.pod b/filters/gzip/nbdkit-gzip-filter.pod
new file mode 100644
index 00000000..da0cf626
--- /dev/null
+++ b/filters/gzip/nbdkit-gzip-filter.pod
@@ -0,0 +1,85 @@
+=head1 NAME
+
+nbdkit-gzip-filter - decompress a .gz file
+
+=head1 SYNOPSIS
+
+ nbdkit file --filter=gzip FILENAME.gz
+
+=head1 DESCRIPTION
+
+C<nbdkit-gzip-filter> is a filter for L<nbdkit(1)> which transparently
+decompresses a gzip-compressed file. You can place this filter on top
+of L<nbdkit-file-plugin(1)> to decompress a local F<.gz> file, or on
+top of other plugins such as L<nbdkit-curl-plugin(1)>:
+
+ nbdkit curl --filter=gzip https://example.com/disk.gz
+
+With L<nbdkit-tar-filter(1)> it can be used to extract files from a
+compressed tar file:
+
+ nbdkit curl --filter=tar --filter=gzip \
+ https://example.com/file.tar.gz tar-entry=disk.img
+
+The filter only allows read-only connections.
+
+B<Note> that gzip files are not very good for random access in large
+files because seeking to a position in the gzip file involves
+decompressing all data before that point in the file. A more
+practical method to compress large disk images is to use the L<xz(1)>
+format and L<nbdkit-xz-filter(1)>.
+
+To allow seeking this filter has to keep the contents of the complete
+uncompressed file, which it does in a hidden temporary file under
+C<$TMPDIR>.
+
+=head1 PARAMETERS
+
+There are no parameters specific to this filter.
+
+=head1 ENVIRONMENT VARIABLES
+
+=over 4
+
+=item C<TMPDIR>
+
+Because the gzip format is not seekable, this filter has to store the
+complete uncompressed contents of the file in a temporary file located
+in F</var/tmp> by default. You can override this location by setting
+the C<TMPDIR> environment variable before starting nbdkit.
+
+=back
+
+=head1 FILES
+
+=over 4
+
+=item F<$filterdir/nbdkit-gzip-filter.so>
+
+The filter.
+
+Use C<nbdkit --dump-config> to find the location of C<$filterdir>.
+
+=back
+
+=head1 VERSION
+
+C<nbdkit-gzip-filter> first appeared in nbdkit 1.22. It is derived
+from C<nbdkit-gzip-plugin> which first appeared in nbdkit 1.0.
+
+=head1 SEE ALSO
+
+L<nbdkit-curl-plugin(1)>,
+L<nbdkit-file-plugin(1)>,
+L<nbdkit-tar-filter(1)>,
+L<nbdkit-xz-filter(1)>,
+L<nbdkit(1)>,
+L<nbdkit-plugin(3)>.
+
+=head1 AUTHORS
+
+Richard W.M. Jones
+
+=head1 COPYRIGHT
+
+Copyright (C) 2013-2020 Red Hat Inc.
diff --git a/filters/tar/nbdkit-tar-filter.pod b/filters/tar/nbdkit-tar-filter.pod
index 56d4cab1..0f0734c3 100644
--- a/filters/tar/nbdkit-tar-filter.pod
+++ b/filters/tar/nbdkit-tar-filter.pod
@@ -42,11 +42,13 @@ server use:
nbdkit -r curl https://example.com/file.tar \
--filter=tar tar-entry=disk.img
-=head2 Open an xz-compressed tar file (read-only)
+=head2 Open a gzip-compressed tar file (read-only)
This filter cannot handle compressed tar files itself, but you can
-combine it with L<nbdkit-xz-filter(1)>:
+combine it with L<nbdkit-gzip-filter(1)> or L<nbdkit-xz-filter(1)>:
+ nbdkit file filename.tar.gz \
+ --filter=tar tar-entry=disk.img --filter=gzip
nbdkit file filename.tar.xz \
--filter=tar tar-entry=disk.img --filter=xz
@@ -100,6 +102,7 @@ from C<nbdkit-tar-plugin> which first appeared in nbdkit 1.2.
L<nbdkit(1)>,
L<nbdkit-curl-plugin(1)>,
L<nbdkit-file-plugin(1)>,
+L<nbdkit-gzip-filter(1)>,
L<nbdkit-offset-filter(1)>,
L<nbdkit-plugin(3)>,
L<nbdkit-ssh-plugin(1)>,
diff --git a/plugins/gzip/nbdkit-gzip-plugin.pod b/plugins/gzip/nbdkit-gzip-plugin.pod
index 1b090125..4cd91ede 100644
--- a/plugins/gzip/nbdkit-gzip-plugin.pod
+++ b/plugins/gzip/nbdkit-gzip-plugin.pod
@@ -6,6 +6,15 @@ nbdkit-gzip-plugin - nbdkit gzip plugin
nbdkit gzip [file=]FILENAME.gz
+=head1 DEPRECATED
+
+B<The gzip plugin is deprecated in S<nbdkit E<ge> 1.22.17> and will be
+removed in S<nbdkit 1.26>>. It has been replaced with a filter with
+the same functionality, see L<nbdkit-gzip-filter(1)>. You can use the
+filter like this:
+
+ nbdkit file --filter=gzip FILENAME.gz
+
=head1 DESCRIPTION
C<nbdkit-gzip-plugin> is a file serving plugin for L<nbdkit(1)>.
diff --git a/configure.ac b/configure.ac
index b51b67b6..3c1f2e11 100644
--- a/configure.ac
+++ b/configure.ac
@@ -105,6 +105,7 @@ filters="\
ext2 \
extentlist \
fua \
+ gzip \
ip \
limit \
log \
@@ -899,10 +900,10 @@ AS_IF([test "$with_libvirt" != "no"],[
])
AM_CONDITIONAL([HAVE_LIBVIRT],[test "x$LIBVIRT_LIBS" != "x"])
-dnl Check for zlib (only if you want to compile the gzip plugin).
+dnl Check for zlib (only if you want to compile the gzip filter).
AC_ARG_WITH([zlib],
[AS_HELP_STRING([--without-zlib],
- [disable gzip plugin @<:@default=check@:>@])],
+ [disable gzip filter @<:@default=check@:>@])],
[],
[with_zlib=check])
AS_IF([test "$with_zlib" != "no"],[
@@ -911,7 +912,7 @@ AS_IF([test "$with_zlib" != "no"],[
AC_SUBST([ZLIB_LIBS])
AC_DEFINE([HAVE_ZLIB],[1],[zlib found at compile time.])
],
- [AC_MSG_WARN([zlib >= 1.2.3.5 not found, gzip plugin will be disabled])])
+ [AC_MSG_WARN([zlib >= 1.2.3.5 not found, gzip filter will be disabled])])
])
AM_CONDITIONAL([HAVE_ZLIB],[test "x$ZLIB_LIBS" != "x"])
@@ -1144,6 +1145,7 @@ AC_CONFIG_FILES([Makefile
filters/ext2/Makefile
filters/extentlist/Makefile
filters/fua/Makefile
+ filters/gzip/Makefile
filters/ip/Makefile
filters/limit/Makefile
filters/log/Makefile
@@ -1253,6 +1255,8 @@ echo "Optional filters:"
echo
feature "ext2 ................................... " \
test "x$HAVE_EXT2_TRUE" = "x"
+feature "gzip ................................... " \
+ test "x$HAVE_ZLIB_TRUE" = "x"
feature "xz ..................................... " \
test "x$HAVE_LIBLZMA_TRUE" = "x"
diff --git a/filters/gzip/Makefile.am b/filters/gzip/Makefile.am
new file mode 100644
index 00000000..a329fab8
--- /dev/null
+++ b/filters/gzip/Makefile.am
@@ -0,0 +1,75 @@
+# nbdkit
+# Copyright (C) 2019-2020 Red Hat Inc.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# * Neither the name of Red Hat nor the names of its contributors may be
+# used to endorse or promote products derived from this software without
+# specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY RED HAT AND CONTRIBUTORS ''AS IS'' AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL RED HAT OR
+# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+# USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+# OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+# SUCH DAMAGE.
+
+include $(top_srcdir)/common-rules.mk
+
+EXTRA_DIST = nbdkit-gzip-filter.pod
+
+if HAVE_ZLIB
+
+filter_LTLIBRARIES = nbdkit-gzip-filter.la
+
+nbdkit_gzip_filter_la_SOURCES = \
+ gzip.c \
+ $(top_srcdir)/include/nbdkit-filter.h \
+ $(NULL)
+
+nbdkit_gzip_filter_la_CPPFLAGS = \
+ -I$(top_srcdir)/include \
+ -I$(top_srcdir)/common/include \
+ -I$(top_srcdir)/common/utils \
+ $(NULL)
+nbdkit_gzip_filter_la_CFLAGS = \
+ $(WARNINGS_CFLAGS) \
+ $(ZLIB_CFLAGS) \
+ $(NULL)
+nbdkit_gzip_filter_la_LIBADD = \
+ $(top_builddir)/common/utils/libutils.la \
+ $(ZLIB_LIBS) \
+ $(NULL)
+nbdkit_gzip_filter_la_LDFLAGS = \
+ -module -avoid-version -shared $(SHARED_LDFLAGS) \
+ -Wl,--version-script=$(top_srcdir)/filters/filters.syms \
+ $(NULL)
+
+if HAVE_POD
+
+man_MANS = nbdkit-gzip-filter.1
+CLEANFILES += $(man_MANS)
+
+nbdkit-gzip-filter.1: nbdkit-gzip-filter.pod
+ $(PODWRAPPER) --section=1 --man $@ \
+ --html $(top_builddir)/html/$@.html \
+ $<
+
+endif HAVE_POD
+
+endif
diff --git a/tests/Makefile.am b/tests/Makefile.am
index 2b5737b8..77f21d79 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -557,23 +557,6 @@ EXTRA_DIST += test-floppy.sh
TESTS += test-full.sh
EXTRA_DIST += test-full.sh
-# gzip plugin test.
-if HAVE_MKE2FS_WITH_D
-if HAVE_ZLIB
-LIBGUESTFS_TESTS += test-gzip
-check_DATA += disk.gz
-CLEANFILES += disk.gz
-
-test_gzip_SOURCES = test-gzip.c test.h
-test_gzip_CFLAGS = $(WARNINGS_CFLAGS) $(LIBGUESTFS_CFLAGS)
-test_gzip_LDADD = libtest.la $(LIBGUESTFS_LIBS)
-
-disk.gz: disk
- rm -f $@
- gzip -9 -c disk > $@
-endif HAVE_ZLIB
-endif HAVE_MKE2FS_WITH_D
-
# info plugin test.
TESTS += \
test-info-address.sh \
@@ -1253,6 +1236,23 @@ EXTRA_DIST += test-extentlist.sh
TESTS += test-fua.sh
EXTRA_DIST += test-fua.sh
+# gzip filter test.
+if HAVE_MKE2FS_WITH_D
+if HAVE_ZLIB
+LIBGUESTFS_TESTS += test-gzip
+check_DATA += disk.gz
+CLEANFILES += disk.gz
+
+test_gzip_SOURCES = test-gzip.c test.h
+test_gzip_CFLAGS = $(WARNINGS_CFLAGS) $(LIBGUESTFS_CFLAGS)
+test_gzip_LDADD = libtest.la $(LIBGUESTFS_LIBS)
+
+disk.gz: disk
+ rm -f $@
+ gzip -9 -c disk > $@
+endif HAVE_ZLIB
+endif HAVE_MKE2FS_WITH_D
+
# ip filter test.
TESTS += test-ip-filter.sh
EXTRA_DIST += test-ip-filter.sh
diff --git a/filters/gzip/gzip.c b/filters/gzip/gzip.c
new file mode 100644
index 00000000..582652cd
--- /dev/null
+++ b/filters/gzip/gzip.c
@@ -0,0 +1,347 @@
+/* nbdkit
+ * Copyright (C) 2018-2020 Red Hat Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * * Neither the name of Red Hat nor the names of its contributors may be
+ * used to endorse or promote products derived from this software without
+ * specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY RED HAT AND CONTRIBUTORS ''AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL RED HAT OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <string.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <assert.h>
+#include <pthread.h>
+
+#include <zlib.h>
+
+#include <nbdkit-filter.h>
+
+#include "cleanup.h"
+#include "minmax.h"
+
+/* The first thread to call gzip_prepare has to uncompress the whole
+ * plugin to the temporary file. This lock prevents concurrent
+ * access.
+ */
+static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+
+/* Temporary file storing the uncompressed data. */
+static int fd = -1;
+
+/* Size of uncompressed data. */
+static int64_t size = -1;
+
+static void
+gzip_unload (void)
+{
+ if (fd >= 0)
+ close (fd);
+}
+
+static int
+gzip_thread_model (void)
+{
+ return NBDKIT_THREAD_MODEL_PARALLEL;
+}
+
+static void *
+gzip_open (nbdkit_next_open *next, nbdkit_backend *nxdata, int readonly)
+{
+ /* Always pass readonly=1 to the underlying plugin. */
+ if (next (nxdata, 1) == -1)
+ return NULL;
+
+ return NBDKIT_HANDLE_NOT_NEEDED;
+}
+
+/* Convert a zlib error (always negative) to an nbdkit error message,
+ * and return errno correctly.
+ */
+static void
+zerror (const char *op, const z_stream *strm, int zerr)
+{
+ if (zerr == Z_MEM_ERROR) {
+ errno = ENOMEM;
+ nbdkit_error ("gzip: %s: %m", op);
+ }
+ else {
+ errno = EIO;
+ if (strm->msg)
+ nbdkit_error ("gzip: %s: %s", op, strm->msg);
+ else
+ nbdkit_error ("gzip: %s: unknown error: %d", op, zerr);
+ }
+}
+
+/* Write a whole buffer to the temporary file or fail. */
+static int
+xwrite (const void *buf, size_t count)
+{
+ ssize_t r;
+
+ while (count > 0) {
+ r = write (fd, buf, count);
+ if (r == -1) {
+ nbdkit_error ("write: %m");
+ return -1;
+ }
+ buf += r;
+ count -= r;
+ }
+
+ return 0;
+}
+
+/* The first thread to call gzip_prepare uncompresses the whole plugin. */
+static int
+do_uncompress (struct nbdkit_next_ops *next_ops, void *nxdata)
+{
+ int64_t compressed_size;
+ z_stream strm;
+ int zerr;
+ const char *tmpdir;
+ size_t len;
+ char *template;
+ CLEANUP_FREE char *in_block = NULL, *out_block = NULL;
+
+ /* This was the same buffer size as used in the old plugin. As far
+ * as I know it was chosen at random.
+ */
+ const size_t block_size = 128 * 1024;
+
+ assert (size == -1);
+
+ /* Get the size of the underlying plugin. */
+ compressed_size = next_ops->get_size (nxdata);
+ if (compressed_size == -1)
+ return -1;
+
+ /* Create the temporary file. */
+ tmpdir = getenv ("TMPDIR");
+ if (!tmpdir)
+ tmpdir = LARGE_TMPDIR;
+
+ len = strlen (tmpdir) + 8;
+ template = alloca (len);
+ snprintf (template, len, "%s/XXXXXX", tmpdir);
+
+#ifdef HAVE_MKOSTEMP
+ fd = mkostemp (template, O_CLOEXEC);
+#else
+ /* This is only invoked serially with the lock held, so this is safe. */
+ fd = mkstemp (template);
+ if (fd >= 0) {
+ fd = set_cloexec (fd);
+ if (fd < 0) {
+ int e = errno;
+ unlink (template);
+ errno = e;
+ }
+ }
+#endif
+ if (fd == -1) {
+ nbdkit_error ("mkostemp: %s: %m", tmpdir);
+ return -1;
+ }
+
+ unlink (template);
+
+ /* Uncompress the whole plugin. This is REQUIRED in order to
+ * implement gzip_get_size. See: https://stackoverflow.com/a/9213826
+ *
+ * For use of inflateInit2 on gzip streams see:
+ * https://stackoverflow.com/a/1838702
+ */
+ memset (&strm, 0, sizeof strm);
+ zerr = inflateInit2 (&strm, 16+MAX_WBITS);
+ if (zerr != Z_OK) {
+ zerror ("inflateInit2", &strm, zerr);
+ return -1;
+ }
+
+ in_block = malloc (block_size);
+ if (!in_block) {
+ nbdkit_error ("malloc: %m");
+ return -1;
+ }
+ out_block = malloc (block_size);
+ if (!out_block) {
+ nbdkit_error ("malloc: %m");
+ return -1;
+ }
+
+ for (;;) {
+ /* Do we need to read more from the plugin? */
+ if (strm.avail_in == 0 && strm.total_in < compressed_size) {
+ size_t n = MIN (block_size, compressed_size - strm.total_in);
+ int err = 0;
+
+ if (next_ops->pread (nxdata, in_block, (uint32_t) n, strm.total_in,
+ 0, &err) == -1) {
+ errno = err;
+ return -1;
+ }
+
+ strm.next_in = (void *) in_block;
+ strm.avail_in = n;
+ }
+
+ /* Inflate the next chunk of input. */
+ strm.next_out = (void *) out_block;
+ strm.avail_out = block_size;
+ zerr = inflate (&strm, Z_SYNC_FLUSH);
+ if (zerr < 0) {
+ zerror ("inflate", &strm, zerr);
+ return -1;
+ }
+
+ /* Write the output to the file. */
+ if (xwrite (out_block, (char *) strm.next_out - out_block) == -1)
+ return -1;
+
+ if (zerr == Z_STREAM_END)
+ break;
+ }
+
+ /* Set the size to the total uncompressed size. */
+ size = strm.total_out;
+ nbdkit_debug ("gzip: uncompressed size: %" PRIi64, size);
+
+ zerr = inflateEnd (&strm);
+ if (zerr != Z_OK) {
+ zerror ("inflateEnd", &strm, zerr);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+gzip_prepare (struct nbdkit_next_ops *next_ops, void *nxdata, void *handle,
+ int readonly)
+{
+ ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&lock);
+
+ if (size >= 0)
+ return 0;
+ return do_uncompress (next_ops, nxdata);
+}
+
+/* Whatever the plugin says, this filter makes it read-only. */
+static int
+gzip_can_write (struct nbdkit_next_ops *next_ops, void *nxdata,
+ void *handle)
+{
+ return 0;
+}
+
+/* Similar to above, whatever the plugin says, extents are not
+ * supported.
+ */
+static int
+gzip_can_extents (struct nbdkit_next_ops *next_ops, void *nxdata,
+ void *handle)
+{
+ return 0;
+}
+
+/* We are already operating as a cache regardless of the plugin's
+ * underlying .can_cache, but it's easiest to just rely on nbdkit's
+ * behavior of calling .pread for caching.
+ */
+static int
+gzip_can_cache (struct nbdkit_next_ops *next_ops, void *nxdata,
+ void *handle)
+{
+ return NBDKIT_CACHE_EMULATE;
+}
+
+/* Get the file size. */
+static int64_t
+gzip_get_size (struct nbdkit_next_ops *next_ops, void *nxdata,
+ void *handle)
+{
+ /* This must be true because gzip_prepare must have been called. */
+ assert (size >= 0);
+
+ /* We must call underlying get_size even though we don't use the
+ * result, because it caches the plugin size in server/backend.c.
+ */
+ if (next_ops->get_size (nxdata) == -1)
+ return -1;
+
+ return size;
+}
+
+/* Read data from the temporary file. */
+static int
+gzip_pread (struct nbdkit_next_ops *next_ops, void *nxdata,
+ void *handle, void *buf, uint32_t count, uint64_t offset,
+ uint32_t flags, int *err)
+{
+ /* This must be true because gzip_prepare must have been called. */
+ assert (fd >= 0);
+
+ while (count > 0) {
+ ssize_t r = pread (fd, buf, count, offset);
+ if (r == -1) {
+ nbdkit_error ("pread: %m");
+ return -1;
+ }
+ if (r == 0) {
+ nbdkit_error ("pread: unexpected end of file");
+ return -1;
+ }
+ buf += r;
+ count -= r;
+ offset += r;
+ }
+
+ return 0;
+}
+
+static struct nbdkit_filter filter = {
+ .name = "gzip",
+ .longname = "nbdkit gzip filter",
+ .unload = gzip_unload,
+ .thread_model = gzip_thread_model,
+ .open = gzip_open,
+ .prepare = gzip_prepare,
+ .can_write = gzip_can_write,
+ .can_extents = gzip_can_extents,
+ .can_cache = gzip_can_cache,
+ .get_size = gzip_get_size,
+ .pread = gzip_pread,
+};
+
+NBDKIT_REGISTER_FILTER(filter)
diff --git a/tests/test-gzip.c b/tests/test-gzip.c
index 9e1229e1..969d6d0e 100644
--- a/tests/test-gzip.c
+++ b/tests/test-gzip.c
@@ -1,5 +1,5 @@
/* nbdkit
- * Copyright (C) 2013 Red Hat Inc.
+ * Copyright (C) 2013-2020 Red Hat Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
@@ -50,7 +50,7 @@ main (int argc, char *argv[])
int r;
char *data;
- if (test_start_nbdkit ("gzip", "-r", "file=disk.gz", NULL) == -1)
+ if (test_start_nbdkit ("file", "--filter=gzip", "disk.gz", NULL) == -1)
exit (EXIT_FAILURE);
g = guestfs_create ();
diff --git a/TODO b/TODO
index 28bcc952..addf8025 100644
--- a/TODO
+++ b/TODO
@@ -169,8 +169,6 @@ Rust:
Suggestions for filters
-----------------------
-* gzip plugin should really be a filter
-
* LUKS encrypt/decrypt filter, bonus points if compatible with qemu
LUKS-encrypted disk images
--
2.27.0
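The windowBits trick referenced above (inflateInit2 with 16+MAX_WBITS so that inflate accepts a gzip header and trailer) can be illustrated in Python, whose zlib module exposes the same parameter. This is a sketch for illustration only, not part of the patch:

```python
import gzip
import zlib

# Adding 16 to the maximum window size (15) tells inflate to expect
# gzip framing, exactly like inflateInit2 (&strm, 16 + MAX_WBITS) in C.
data = b"hello, nbdkit" * 100
compressed = gzip.compress(data)

d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
out = d.decompress(compressed) + d.flush()
assert out == data
```

Passing a plain MAX_WBITS (15) here would fail with a "incorrect header check" error, which is why the filter must ask for gzip decoding explicitly.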
4 years, 4 months
[PATCH] RFC: rhv-upload-plugin: Use imageio client
by Nir Soffer
We can now use ImageioClient to communicate with the ovirt-imageio server
on an oVirt host.
Using the client greatly simplifies the plugin, and enables new features
like transparent proxy support. The client will use transfer_url if
possible, or fall back to proxy_url.
Since the client implements the buffer protocol, move to version 2 of
the API for more efficient pread().
Another advantage is that the client is maintained by oVirt, so fixes are
delivered quickly in oVirt, without depending on the RHEL release schedule.
This is not ready yet; we have several issues:
- The client does not support "http", so the tests will fail for now. This
is good, since we should test with a real imageio server. I will work on
better tests later.
- Need to require ovirt-imageio-client, providing the client library.
- params['rhv_direct'] is ignored; we always try direct transfer now.
- ImageioClient is not released yet. The patches are here:
https://gerrit.ovirt.org/q/topic:client+is:open+project:ovirt-imageio
- Not tested yet.
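For reference, the call pattern the plugin now relies on (read/write filling a caller-supplied buffer-protocol object, which is what makes the v2 pread() API efficient) can be sketched with a hypothetical in-memory stand-in. FakeClient below is invented for illustration and is not part of ovirt-imageio:

```python
class FakeClient:
    """Hypothetical in-memory stand-in for ImageioClient (illustration only)."""

    def __init__(self, data):
        self._data = bytearray(data)

    def size(self):
        return len(self._data)

    def read(self, offset, buf):
        # Fills the caller-supplied buffer in place (buffer protocol),
        # avoiding an extra copy compared with returning bytes.
        buf[:] = self._data[offset:offset + len(buf)]

    def write(self, offset, buf):
        self._data[offset:offset + len(buf)] = buf

    def zero(self, offset, count):
        self._data[offset:offset + count] = bytes(count)

    def flush(self):
        pass

# The plugin's pread/pwrite/zero callbacks reduce to these calls.
client = FakeClient(b"\0" * 64)
client.write(8, b"abcd")
buf = bytearray(4)
client.read(8, buf)
assert bytes(buf) == b"abcd"
client.zero(8, 4)
client.read(8, buf)
assert bytes(buf) == b"\0\0\0\0"
```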
---
v2v/rhv-upload-plugin.py | 359 +++++++--------------------------------
1 file changed, 61 insertions(+), 298 deletions(-)
diff --git a/v2v/rhv-upload-plugin.py b/v2v/rhv-upload-plugin.py
index 8c11012b..172da602 100644
--- a/v2v/rhv-upload-plugin.py
+++ b/v2v/rhv-upload-plugin.py
@@ -26,12 +26,17 @@ import ssl
import sys
import time
+from ovirt_imageio.client import ImageioClient
+
from http.client import HTTPSConnection, HTTPConnection
from urllib.parse import urlparse
import ovirtsdk4 as sdk
import ovirtsdk4.types as types
+# Version 2 supports the buffer protocol, improving performance.
+API_VERSION = 2
+
# Timeout to wait for oVirt disks to change status, or the transfer
# object to finish initializing [seconds].
timeout = 5 * 60
@@ -114,67 +119,61 @@ def open(readonly):
transfer = create_transfer(connection, disk, host)
try:
- destination_url = parse_transfer_url(transfer)
- http = create_http(destination_url)
- options = get_options(http, destination_url)
- http = optimize_http(http, host, options)
+ # ImageioClient uses transfer_url if possible, and proxy_url if not.
+ # TODO: How to handle params['rhv_direct']?
+ client = ImageioClient(
+ transfer.transfer_url,
+ cafile=params['rhv_cafile'],
+ secure=not params['insecure'],
+ proxy_url=transfer.proxy_url)
except:
cancel_transfer(connection, transfer)
raise
- debug("imageio features: flush=%(can_flush)r trim=%(can_trim)r "
- "zero=%(can_zero)r unix_socket=%(unix_socket)r"
- % options)
-
# Save everything we need to make requests in the handle.
return {
- 'can_flush': options['can_flush'],
- 'can_trim': options['can_trim'],
- 'can_zero': options['can_zero'],
- 'needs_auth': options['needs_auth'],
+ 'client': client,
'connection': connection,
'disk_id': disk.id,
'transfer': transfer,
'failed': False,
- 'highestwrite': 0,
- 'http': http,
- 'path': destination_url.path,
}
-@failing
def can_trim(h):
- return h['can_trim']
+ # Imageio does not support trim. Image sparseness is controlled on the server
+ # side. If the transfer ticket["sparse"] is True, zeroing deallocates space.
+ # Otherwise zeroing allocates space.
+ return False
-@failing
def can_flush(h):
- return h['can_flush']
+ # Imageio client can always flush.
+ return True
+
+
+def can_zero(h):
+ # Imageio client can always zero. If the server does not support zero, the client
+ # emulates it by streaming zeroes to server.
+ return True
@failing
def get_size(h):
- return params['disk_size']
+ client = h['client']
+ try:
+ return client.size()
+ except Exception as e:
+ request_failed("cannot get size", e)
# Any unexpected HTTP response status from the server will end up calling this
# function which logs the full error, and raises a RuntimeError exception.
-def request_failed(r, msg):
- status = r.status
- reason = r.reason
- try:
- body = r.read()
- except EnvironmentError as e:
- body = "(Unable to read response body: %s)" % e
-
- # Log the full error if we're verbose.
+def request_failed(msg, reason):
+ error = "%s: %s" % (msg, reason)
debug("unexpected response from imageio server:")
- debug(msg)
- debug("%d: %s" % (status, reason))
- debug(body)
-
- # Only a short error is included in the exception.
- raise RuntimeError("%s: %d %s: %r" % (msg, status, reason, body[:200]))
+ debug(error)
+ raise RuntimeError(error)
# For documentation see:
@@ -184,168 +183,46 @@ def request_failed(r, msg):
@failing
-def pread(h, count, offset):
- http = h['http']
- transfer = h['transfer']
-
- headers = {"Range": "bytes=%d-%d" % (offset, offset + count - 1)}
- if h['needs_auth']:
- headers["Authorization"] = transfer.signed_ticket
-
- http.request("GET", h['path'], headers=headers)
-
- r = http.getresponse()
- # 206 = HTTP Partial Content.
- if r.status != 206:
- request_failed(r,
- "could not read sector offset %d size %d" %
- (offset, count))
-
- return r.read()
-
-
-@failing
-def pwrite(h, buf, offset):
- http = h['http']
- transfer = h['transfer']
-
- count = len(buf)
- h['highestwrite'] = max(h['highestwrite'], offset + count)
-
- http.putrequest("PUT", h['path'] + "?flush=n")
- if h['needs_auth']:
- http.putheader("Authorization", transfer.signed_ticket)
- # The oVirt server only uses the first part of the range, and the
- # content-length.
- http.putheader("Content-Range", "bytes %d-%d/*" % (offset, offset + count - 1))
- http.putheader("Content-Length", str(count))
- http.endheaders()
-
+def pread(h, buf, offset, flags):
+ client = h['client']
try:
- http.send(buf)
- except BrokenPipeError:
- pass
-
- r = http.getresponse()
- if r.status != 200:
- request_failed(r,
- "could not write sector offset %d size %d" %
- (offset, count))
-
- r.read()
+ client.read(offset, buf)
+ except Exception as e:
+ request_failed(
+ "could not read offset %d size %d" % (offset, len(buf)), e)
@failing
-def zero(h, count, offset, may_trim):
- http = h['http']
-
- # Unlike the trim and flush calls, there is no 'can_zero' method
- # so nbdkit could call this even if the server doesn't support
- # zeroing. If this is the case we must emulate.
- if not h['can_zero']:
- emulate_zero(h, count, offset)
- return
-
- # Construct the JSON request for zeroing.
- buf = json.dumps({'op': "zero",
- 'offset': offset,
- 'size': count,
- 'flush': False}).encode()
-
- headers = {"Content-Type": "application/json",
- "Content-Length": str(len(buf))}
-
- http.request("PATCH", h['path'], body=buf, headers=headers)
-
- r = http.getresponse()
- if r.status != 200:
- request_failed(r,
- "could not zero sector offset %d size %d" %
- (offset, count))
-
- r.read()
-
-
-def emulate_zero(h, count, offset):
- http = h['http']
- transfer = h['transfer']
-
- # qemu-img convert starts by trying to zero/trim the whole device.
- # Since we've just created a new disk it's safe to ignore these
- # requests as long as they are smaller than the highest write seen.
- # After that we must emulate them with writes.
- if offset + count < h['highestwrite']:
- http.putrequest("PUT", h['path'])
- if h['needs_auth']:
- http.putheader("Authorization", transfer.signed_ticket)
- http.putheader("Content-Range",
- "bytes %d-%d/*" % (offset, offset + count - 1))
- http.putheader("Content-Length", str(count))
- http.endheaders()
-
- try:
- buf = bytearray(128 * 1024)
- while count > len(buf):
- http.send(buf)
- count -= len(buf)
- http.send(memoryview(buf)[:count])
- except BrokenPipeError:
- pass
-
- r = http.getresponse()
- if r.status != 200:
- request_failed(r,
- "could not write zeroes offset %d size %d" %
- (offset, count))
-
- r.read()
+def pwrite(h, buf, offset, flags):
+ client = h['client']
+ try:
+ client.write(offset, buf)
+ except Exception as e:
+ request_failed(
+ "could not write offset %d size %d" % (offset, len(buf)), e)
@failing
-def trim(h, count, offset):
- http = h['http']
-
- # Construct the JSON request for trimming.
- buf = json.dumps({'op': "trim",
- 'offset': offset,
- 'size': count,
- 'flush': False}).encode()
-
- headers = {"Content-Type": "application/json",
- "Content-Length": str(len(buf))}
-
- http.request("PATCH", h['path'], body=buf, headers=headers)
-
- r = http.getresponse()
- if r.status != 200:
- request_failed(r,
- "could not trim sector offset %d size %d" %
- (offset, count))
-
- r.read()
+def zero(h, count, offset, flags):
+ client = h['client']
+ try:
+ client.zero(offset, count)
+ except Exception as e:
+ request_failed(
+ "could not zero offset %d size %d" % (offset, count), e)
@failing
def flush(h):
- http = h['http']
-
- # Construct the JSON request for flushing.
- buf = json.dumps({'op': "flush"}).encode()
-
- headers = {"Content-Type": "application/json",
- "Content-Length": str(len(buf))}
-
- http.request("PATCH", h['path'], body=buf, headers=headers)
-
- r = http.getresponse()
- if r.status != 200:
- request_failed(r, "could not flush")
-
- r.read()
+ client = h['client']
+ try:
+ client.flush()
+ except Exception as e:
+ request_failed("could not flush", e)
def close(h):
- http = h['http']
+ client = h['client']
connection = h['connection']
transfer = h['transfer']
disk_id = h['disk_id']
@@ -356,7 +233,7 @@ def close(h):
# plugin exits.
sys.stderr.flush()
- http.close()
+ client.close()
# If the connection failed earlier ensure we cancel the transfer. Canceling
# the transfer will delete the disk.
@@ -382,24 +259,6 @@ def close(h):
connection.close()
-# Modify http.client.HTTPConnection to work over a Unix domain socket.
-# Derived from uhttplib written by Erik van Zijst under an MIT license.
-# (https://pypi.org/project/uhttplib/)
-# Ported to Python 3 by Irit Goihman.
-
-
-class UnixHTTPConnection(HTTPConnection):
- def __init__(self, path, timeout=socket._GLOBAL_DEFAULT_TIMEOUT):
- self.path = path
- HTTPConnection.__init__(self, "localhost", timeout=timeout)
-
- def connect(self):
- self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
- if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
- self.sock.settimeout(timeout)
- self.sock.connect(self.path)
-
-
# oVirt SDK operations
@@ -659,99 +518,3 @@ def transfer_supports_format():
"""
sig = inspect.signature(types.ImageTransfer)
return "format" in sig.parameters
-
-
-# oVirt imageio operations
-
-
-def parse_transfer_url(transfer):
- """
- Returns a parsed transfer url, preferring direct transfer if possible.
- """
- if params['rhv_direct']:
- if transfer.transfer_url is None:
- raise RuntimeError("direct upload to host not supported, "
- "requires ovirt-engine >= 4.2 and only works "
- "when virt-v2v is run within the oVirt/RHV "
- "environment, eg. on an oVirt node.")
- return urlparse(transfer.transfer_url)
- else:
- return urlparse(transfer.proxy_url)
-
-
-def create_http(url):
- """
- Create http connection for transfer url.
-
- Returns HTTPConnection.
- """
- if url.scheme == "https":
- context = \
- ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH,
- cafile=params['rhv_cafile'])
- if params['insecure']:
- context.check_hostname = False
- context.verify_mode = ssl.CERT_NONE
-
- return HTTPSConnection(url.hostname, url.port, context=context)
- elif url.scheme == "http":
- return HTTPConnection(url.hostname, url.port)
- else:
- raise RuntimeError("unknown URL scheme (%s)" % url.scheme)
-
-
-def get_options(http, url):
- """
- Send OPTIONS request to imageio server and return options dict.
- """
- http.request("OPTIONS", url.path)
- r = http.getresponse()
- data = r.read()
-
- if r.status == 200:
- j = json.loads(data)
- features = j["features"]
- return {
- # New imageio never used authentication.
- "needs_auth": False,
- "can_flush": "flush" in features,
- "can_trim": "trim" in features,
- "can_zero": "zero" in features,
- "unix_socket": j.get('unix_socket'),
- }
-
- elif r.status == 405 or r.status == 204:
- # Old imageio servers returned either 405 Method Not Allowed or
- # 204 No Content (with an empty body).
- return {
- # Authentication was required only when using old imageio proxy.
- # Can be removed when dropping support for oVirt < 4.2.
- "needs_auth": not params['rhv_direct'],
- "can_flush": False,
- "can_trim": False,
- "can_zero": False,
- "unix_socket": None,
- }
- else:
- raise RuntimeError("could not use OPTIONS request: %d: %s" %
- (r.status, r.reason))
-
-
-def optimize_http(http, host, options):
- """
- Return an optimized http connection using unix socket if we are connected
- to imageio server on the local host and it features a unix socket.
- """
- unix_socket = options['unix_socket']
-
- if host is not None and unix_socket is not None:
- try:
- http = UnixHTTPConnection(unix_socket)
- except Exception as e:
- # Very unlikely failure, but we can recover by using the https
- # connection.
- debug("cannot create unix socket connection, using https: %s" % e)
- else:
- debug("optimizing connection using unix socket %r" % unix_socket)
-
- return http
--
2.25.4
[PATCH nbdkit] tar as a filter.
by Richard W.M. Jones
For review only; this needs some cleanup and more tests.
My eyes are going cross-eyed looking at the calculate_offset_of_entry
function, so time to take a break ...
Rich.
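The arithmetic a function like calculate_offset_of_entry has to get right (walking 512-byte header blocks and rounding each member's data up to a block boundary) can be cross-checked with Python's tarfile module, which records where each member's data starts. A sketch for illustration, not the filter's actual code:

```python
import io
import tarfile

# Build a small tar in memory with one member.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"some file contents"
    info = tarfile.TarInfo("component.img")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# tarfile exposes the computed data offset as TarInfo.offset_data;
# the tar filter must derive the same number from the raw stream.
raw = buf.getvalue()
with tarfile.open(fileobj=io.BytesIO(raw)) as tar:
    member = tar.getmember("component.img")
    offset = member.offset_data
    assert raw[offset:offset + member.size] == data
    assert offset % 512 == 0  # data always starts on a block boundary
```

For the first member the data offset is 512, i.e. immediately after one header block; each subsequent member starts after the previous member's data padded up to a multiple of 512.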
Building virt-v2v - Error: guestfs_config.cmi: is not a compiled interface for this version of OCaml
by Nir Soffer
I did not touch virt-v2v for a while, and I cannot build it now.
There are no instructions in the README or under docs, so I tried the common
stuff:
$ ./autogen.sh
...
Next you should type 'make' to build the package,
...
$ make
make all-recursive
make[1]: Entering directory '/home/nsoffer/src/virt-v2v'
Making all in common/mlstdutils
make[2]: Entering directory '/home/nsoffer/src/virt-v2v/common/mlstdutils'
OCAMLOPT guestfs_config.cmx
File "guestfs_config.ml", line 1:
Error: guestfs_config.cmi
is not a compiled interface for this version of OCaml.
It seems to be for an older version of OCaml.
make[2]: *** [Makefile:2321: guestfs_config.cmx] Error 2
make[2]: Leaving directory '/home/nsoffer/src/virt-v2v/common/mlstdutils'
make[1]: *** [Makefile:1842: all-recursive] Error 1
make[1]: Leaving directory '/home/nsoffer/src/virt-v2v'
make: *** [Makefile:1760: all] Error 2
I'm using Fedora 31.
Nir