[cross-project PATCH v2] NBD 64-bit extensions
by Eric Blake
This is a cover letter for a set of multi-project patch series all
designed to implement 64-bit operations in NBD, and demonstrate
interoperability of the new extension between projects.
v1 of the project was attempted nearly a year ago:
https://lists.nongnu.org/archive/html/qemu-devel/2021-12/msg00453.html
Since then, I've landed a number of preliminary cleanups in libnbd to
make things easier to test, and incorporated review comments from v1,
including:
- no orthogonality between simple/structured replies and 64-bit mode:
once extended headers are negotiated, all transmission traffic (both
from client to server and server to client) uses just one header
size
- add support for the client to pass a payload on commands to the
server, and demonstrate it by further implementing a way to pass a
flag with NBD_CMD_BLOCK_STATUS that says the client is passing a
payload to request status of just a subset of the negotiated
contexts, rather than all possible contexts that were earlier
negotiated during NBD_OPT_SET_META_CONTEXT
- tweaks to the header layouts: tweak block status to provide 64-bit
flags values (although "base:allocation" will remain usable with
just 32-bit flags); reply with offset rather than padding and with
fields rearranged for maximal sharing between client and server
layouts
- wordsmithing of the proposed NBD protocol changes; things are split
into several smaller patches, so we can choose how much to take
- more unit tests added in qemu and libnbd
- the series ends with RFC patches on whether to support 64-bit hole
responses to NBD_CMD_READ, even though our current limits say
servers don't have to accept more than a 32M read request and
therefore will never have that large a hole to report
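For illustration, the single header size mentioned above might be
packed as follows. This is only a sketch: the field order and the
magic constant follow the extension as proposed around the time of
this series, and could differ from whatever is finally standardized.

```python
import struct

# Proposed value at the time of this series; may differ in the final spec.
NBD_EXTENDED_REQUEST_MAGIC = 0x21e41c71

def pack_ext_request(flags, cmd_type, cookie, offset, length):
    """Pack one 32-byte extended request header (sketch).

    Once extended headers are negotiated, every request from client to
    server uses this one layout; note the length field is 64 bits,
    which is the point of the extension.  All fields are big-endian,
    as everywhere else in NBD.
    """
    return struct.pack('>IHHQQQ', NBD_EXTENDED_REQUEST_MAGIC,
                       flags, cmd_type, cookie, offset, length)
```

The corresponding extended reply header shares the cookie/offset/length
field positions, which is what the "maximal sharing between client and
server layouts" bullet above refers to.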
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
[PATCH libnbd] generator: Pass LISTEN_FDNAMES=nbd with systemd socket activation
by Richard W.M. Jones
systemd allows sockets passed through socket activation to be named
with the protocol they require. We only ever pass one socket, so name
it. This environment variable is currently ignored by qemu-nbd and
nbdkit, but might be used by qemu-storage-daemon:
https://lists.nongnu.org/archive/html/qemu-devel/2023-01/msg06114.html
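For context, the consumer side of the protocol looks roughly like
this. A sketch only, not qemu-nbd or nbdkit code; it also omits the
LISTEN_PID check a real consumer should perform.

```python
# This is baked into the systemd socket activation API.
FIRST_SOCKET_ACTIVATION_FD = 3

def socket_activation_fds(environ):
    """Map activated fd numbers to their names (sketch).

    LISTEN_FDS gives the count of fds starting at fd 3, and the
    optional LISTEN_FDNAMES gives a colon-separated list of names,
    one per fd.
    """
    nfds = int(environ.get('LISTEN_FDS', 0))
    names = environ.get('LISTEN_FDNAMES', '')
    names = names.split(':') if names else ['unknown'] * nfds
    return {FIRST_SOCKET_ACTIVATION_FD + i: names[i] for i in range(nfds)}
```

With the patch below, a server spawned by libnbd would see
`{3: 'nbd'}`.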
---
generator/states-connect-socket-activation.c | 41 +++++++++++---------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/generator/states-connect-socket-activation.c b/generator/states-connect-socket-activation.c
index 9a83834915..22f06d4fd3 100644
--- a/generator/states-connect-socket-activation.c
+++ b/generator/states-connect-socket-activation.c
@@ -34,16 +34,18 @@
/* This is baked into the systemd socket activation API. */
#define FIRST_SOCKET_ACTIVATION_FD 3
-/* == strlen ("LISTEN_PID=") | strlen ("LISTEN_FDS=") */
-#define PREFIX_LENGTH 11
-
extern char **environ;
/* Prepare environment for calling execvp when doing systemd socket
* activation. Takes the current environment and copies it. Removes
- * any existing LISTEN_PID or LISTEN_FDS and replaces them with new
- * variables. env[0] is "LISTEN_PID=..." which is filled in by
- * CONNECT_SA.START, and env[1] is "LISTEN_FDS=1".
+ * any existing LISTEN_PID, LISTEN_FDS or LISTEN_FDNAMES, and replaces
+ * them with new variables.
+ *
+ * env[0] is "LISTEN_PID=..." which is filled in by CONNECT_SA.START
+ *
+ * env[1] is "LISTEN_FDS=1"
+ *
+ * env[2] is "LISTEN_FDNAMES=nbd"
*/
static int
prepare_socket_activation_environment (string_vector *env)
@@ -53,26 +55,29 @@ prepare_socket_activation_environment (string_vector *env)
assert (env->len == 0);
- /* Reserve slots env[0] and env[1]. */
+ /* Reserve slots env[0]..env[2] */
+ if (string_vector_reserve (env, 3) == -1)
+ goto err;
p = strdup ("LISTEN_PID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
if (p == NULL)
goto err;
- if (string_vector_append (env, p) == -1) {
- free (p);
- goto err;
- }
+ string_vector_append (env, p);
p = strdup ("LISTEN_FDS=1");
if (p == NULL)
goto err;
- if (string_vector_append (env, p) == -1) {
- free (p);
+ string_vector_append (env, p);
+ p = strdup ("LISTEN_FDNAMES=nbd");
+ if (p == NULL)
goto err;
- }
+ string_vector_append (env, p);
- /* Append the current environment, but remove LISTEN_PID, LISTEN_FDS. */
+ /* Append the current environment, but remove the special
+ * environment variables.
+ */
for (i = 0; environ[i] != NULL; ++i) {
- if (strncmp (environ[i], "LISTEN_PID=", PREFIX_LENGTH) != 0 &&
- strncmp (environ[i], "LISTEN_FDS=", PREFIX_LENGTH) != 0) {
+ if (strncmp (environ[i], "LISTEN_PID=", 11) != 0 &&
+ strncmp (environ[i], "LISTEN_FDS=", 11) != 0 &&
+ strncmp (environ[i], "LISTEN_FDNAMES=", 15) != 0) {
char *copy = strdup (environ[i]);
if (copy == NULL)
goto err;
@@ -194,7 +199,7 @@ CONNECT_SA.START:
char buf[32];
const char *v =
nbd_internal_fork_safe_itoa ((long) getpid (), buf, sizeof buf);
- strcpy (&env.ptr[0][PREFIX_LENGTH], v);
+ strcpy (&env.ptr[0][strlen ("LISTEN_PID=")], v);
/* Restore SIGPIPE back to SIG_DFL. */
signal (SIGPIPE, SIG_DFL);
--
2.39.0
[p2v PATCH 00/11] Expose virt-v2v's "-oo"; re-enable openstack
by Laszlo Ersek
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1792141
Let the user pass "-oo" options from the kernel cmdline and from the GUI
to virt-v2v. This is primarily useful with the OpenStack output mode,
so reenable that mode.
Cc: Alban Lecorps <alban.lecorps(a)ubisoft.com>
Laszlo
Alban Lecorps (1):
Introduce "p2v.output.misc" for passing "-oo" options to virt-v2v
Laszlo Ersek (10):
test-virt-p2v-cmdline: turn option list into a shell array
guestfs-utils: import guestfs_int_join_strings()
gui: factor out entry_text_dup()
gui: factor out tgl_btn_is_act()
gui: flatten get_phys_topo_from_conv_dlg()
gui: wrap overlong function declaration lines
gui: wrap string literals
gui: expose "p2v.output.misc" (-oo)
Reenable the OpenStack output mode
make-kickstart: add URLs for RHEL-9
conversion.c | 9 ++
generate-p2v-config.pl | 10 ++
gui.c | 168 ++++++++++++++++----
libguestfs/guestfs-utils.c | 35 ++++
libguestfs/guestfs-utils.h | 1 +
ssh.c | 5 +-
test-virt-p2v-cmdline.sh | 24 ++-
virt-p2v-make-kickstart.in | 13 ++
virt-p2v.pod | 2 +
9 files changed, 230 insertions(+), 37 deletions(-)
[libnbd PATCH v2 0/3] Improve nbdsh -u handling
by Eric Blake
v1 was here:
https://listman.redhat.com/archives/libguestfs/2022-October/030216.html
Since then, I've incorporated changes based on Rich's feedback:
- swap order of patches 2 and 3
- less change in patch 1 (including no unsafe eval(%s) for --uri)
- in patch 2, include -c in list of snippets to store, and use dict of
lambdas to map back to the desired action
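The "dict of lambdas" idea can be sketched roughly as follows; the
names here are purely illustrative, not the actual nbdsh internals.
Snippets are stored in command-line order as (kind, value) pairs and
then replayed through a dispatch table, which is what lets -u and -c
interleave:

```python
# Illustrative sketch only; names do not match the real nbdsh code.
def run_snippets(snippets):
    """Replay stored (kind, value) snippets in command-line order."""
    actions = []
    dispatch = {
        'uri': lambda arg: actions.append(('connect', arg)),   # from -u
        'command': lambda arg: actions.append(('exec', arg)),  # from -c
    }
    for kind, arg in snippets:
        dispatch[kind](arg)
    return actions
```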
Eric Blake (3):
nbdsh: Refactor handling of -c
nbdsh: Allow -u interleaved with -c
nbdsh: Improve --help and initial banner contents.
python/nbdsh.py | 142 +++++++++++++++++++++++++++------------------
sh/test-context.sh | 26 ++++-----
sh/test-error.sh | 37 +++++++-----
3 files changed, 119 insertions(+), 86 deletions(-)
--
2.38.1
[nbdkit PATCH 0/2] retry: add support for retrying .open
by Eric Blake
In https://bugzilla.redhat.com/show_bug.cgi?id=1841820, it was pointed
out that because the retry filter does not retry .open, a failure to
establish the initial ssh connection (such as in a vmx+ssh v2v
conversion) cannot be retried at all. A year ago this was an inherent
limitation of our retry implementation; but in the meantime, my work
to allow filters to open independent backends has made it feasible to
implement.
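For reference, the general shape of what the filter does for other
callbacks, and now for .open, is a bounded retry loop with optional
exponential backoff. This is a sketch in Python, not the filter's C
code; the parameter names loosely mirror the filter's retries,
retry-delay and retry-exponential options but are not its real API.

```python
import time

def retry(fn, retries=5, initial_delay=2, exponential=True,
          sleep=time.sleep):
    """Call fn(), retrying on failure (sketch of the retry pattern).

    Makes up to retries+1 attempts, sleeping between attempts and
    optionally doubling the delay each time.
    """
    delay = initial_delay
    while True:
        try:
            return fn()
        except Exception:
            if retries == 0:
                raise
            retries -= 1
            sleep(delay)
            if exponential:
                delay *= 2
```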
Eric Blake (2):
retry: Add in retry support during .open
retry: Test previous patch
tests/Makefile.am | 2 +
server/backend.c | 7 ++-
filters/retry/retry.c | 98 ++++++++++++++++++++++++----------------
tests/test-retry-open.sh | 85 ++++++++++++++++++++++++++++++++++
4 files changed, 150 insertions(+), 42 deletions(-)
create mode 100755 tests/test-retry-open.sh
--
2.39.1
[PATCH v2v v2] -o rhv-upload: Improve error message for invalid or missing -os parameter
by Richard W.M. Jones
For -o rhv-upload, the -os parameter specifies the storage domain.
Because the RHV API allows globs when searching for a domain, a
parameter like -os 'data*' confused the Python code: the search could
glob-match the name of a storage domain, but the later exact-match
lookup of the domain we found would fail.
The result was a confusing error in the precheck script:
IndexError: list index out of range
This fix validates the output storage parameter before trying to use
it. Since valid storage domain names cannot contain glob characters
or spaces, it avoids the problems above and improves the error message
that the user sees:
$ virt-v2v [...] -o rhv-upload -os ''
...
RuntimeError: The storage domain (-os) parameter ‘’ is not valid
virt-v2v: error: failed server prechecks, see earlier errors
$ virt-v2v [...] -o rhv-upload -os 'data*'
...
RuntimeError: The storage domain (-os) parameter ‘data*’ is not valid
virt-v2v: error: failed server prechecks, see earlier errors
Although the IndexError should no longer happen, I also added a
try...except around that code to improve the error message in case it
still happens.
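The validation in the patch amounts to a simple whitelist check; a
minimal standalone illustration of the same regex:

```python
import re

def valid_storage_domain(name):
    # Same whitelist as the precheck script: RHV storage domain names
    # may only contain alphanumerics, '_' and '-'.  This also
    # guarantees that no glob characters or spaces reach the
    # storage.name search query.
    return re.match(r'^[-a-zA-Z0-9_]+$', name) is not None
```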
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1986386
Reported-by: Junqin Zhou
Thanks: Nir Soffer
---
output/rhv-upload-precheck.py | 27 +++++++++++++++++++++------
1 file changed, 21 insertions(+), 6 deletions(-)
diff --git a/output/rhv-upload-precheck.py b/output/rhv-upload-precheck.py
index 1dc1b8498a..ba125611ba 100644
--- a/output/rhv-upload-precheck.py
+++ b/output/rhv-upload-precheck.py
@@ -18,6 +18,7 @@
import json
import logging
+import re
import sys
from urllib.parse import urlparse
@@ -46,6 +47,15 @@ output_password = output_password.rstrip()
parsed = urlparse(params['output_conn'])
username = parsed.username or "admin@internal"
+# Check the storage domain name is valid
+# (https://bugzilla.redhat.com/show_bug.cgi?id=1986386#c1)
+# Also this means it cannot contain spaces or glob symbols, so
+# the search below is valid.
+output_storage = params['output_storage']
+if not re.match('^[-a-zA-Z0-9_]+$', output_storage):
+ raise RuntimeError("The storage domain (-os) parameter ‘%s’ is not valid" %
+ output_storage)
+
# Connect to the server.
connection = sdk.Connection(
url=params['output_conn'],
@@ -60,28 +70,33 @@ system_service = connection.system_service()
# Check whether there is a datacenter for the specified storage.
data_centers = system_service.data_centers_service().list(
- search='storage.name=%s' % params['output_storage'],
+ search='storage.name=%s' % output_storage,
case_sensitive=True,
)
if len(data_centers) == 0:
storage_domains = system_service.storage_domains_service().list(
- search='name=%s' % params['output_storage'],
+ search='name=%s' % output_storage,
case_sensitive=True,
)
if len(storage_domains) == 0:
# The storage domain does not even exist.
raise RuntimeError("The storage domain ‘%s’ does not exist" %
- (params['output_storage']))
+ output_storage)
# The storage domain is not attached to a datacenter
# (shouldn't happen, would fail on disk creation).
raise RuntimeError("The storage domain ‘%s’ is not attached to a DC" %
- (params['output_storage']))
+ output_storage)
datacenter = data_centers[0]
# Get the storage domain.
storage_domains = connection.follow_link(datacenter.storage_domains)
-storage_domain = [sd for sd in storage_domains if sd.name == params['output_storage']][0]
+try:
+ storage_domain = [sd for sd in storage_domains
+ if sd.name == output_storage][0]
+except IndexError:
+ raise RuntimeError("The storage domain ‘%s’ does not exist" %
+ output_storage)
# Get the cluster.
clusters = connection.follow_link(datacenter.clusters)
@@ -90,7 +105,7 @@ if len(clusters) == 0:
raise RuntimeError("The cluster ‘%s’ is not part of the DC ‘%s’, "
"where the storage domain ‘%s’ is" %
(params['rhv_cluster'], datacenter.name,
- params['output_storage']))
+ output_storage))
cluster = clusters[0]
cpu = cluster.cpu
if cpu.architecture == types.Architecture.UNDEFINED:
--
2.39.0
Proposal for tools to inspect drivers and inject Windows virtio drivers
by Richard W.M. Jones
We've had requests from products that use libguestfs & virt-v2v to
provide additional tooling for:
(a) Inspecting a virtual machine disk image to find out what virtual
devices it needs (ie. what drivers are installed in the guest).
(b) Taking a Windows disk image and injecting virtio drivers from the
virtio-win suite.
Virt-v2v does both operations as a part of importing guests from
VMware to KVM, but it doesn't expose these as separate operations.
- - -
For (a), you might run the tool against a disk image and it would
display various facts (similar to virt-inspector
https://libguestfs.org/virt-inspector.1.html):
$ virt-drivers -a linux.img
<operatingsystems>
<operatingsystem>
<root>/dev/sda2</root>
<boot_drivers>
[list of drivers from the initramfs in a format TBD]
</boot_drivers>
<drivers>
[list of kernel modules]
</drivers>
<boot_loader>
[extra stuff about the bootloader configuration,
list of kernels, default kernel, grub1 or grub2,
config file, ...]
</boot_loader>
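Consumers could then process this output with any XML parser. A
sketch against a hypothetical well-formed instance of the format
above (element names taken from the sketch; the real schema is TBD):

```python
import xml.etree.ElementTree as ET

# Hypothetical instance of the proposed virt-drivers output.
sample = """\
<operatingsystems>
  <operatingsystem>
    <root>/dev/sda2</root>
    <drivers>
      <driver>virtio_blk</driver>
      <driver>virtio_net</driver>
    </drivers>
  </operatingsystem>
</operatingsystems>"""

root = ET.fromstring(sample)
drivers = [d.text
           for d in root.findall('./operatingsystem/drivers/driver')]
```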
I propose a completely new tool added to guestfs-tools to do this,
which will basically pull the current kernel module and grub analysis
code from virt-v2v into a new library in libguestfs-common.
For Windows we actually don't do this in virt-v2v at the moment, so
that would need to be completely new code, likely parsing the
DriverDatabase from the Windows registry.
- - -
For (b), you could specify the location of the Windows disk image and
the virtio-win path/ISO to have it attempt to install the drivers in
the disk image:
$ virt-customize -a windows.img \
--inject-virtio-win /usr/share/virtio-win/virtio-win-1.2.3.iso
This would largely involve taking the current virtio-win code from
virt-v2v, turning it into a library, and then adding a new module into
libguestfs-common/mlcustomize. (It would be a good time to refactor
this code.)
- - -
Good? Bad? Let me know what you think ...
I think one large danger is that injecting virtio-win drivers into
existing Windows images is a very invasive operation with a large
potential to go wrong. It would be better to work with the tools that
create these images so that they're able to inject virtio-win drivers
at initial creation. (Or "Inbox" the drivers with Microsoft, but
there may be issues there).
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
nbdkit - Flexible, fast NBD server with plugins
https://gitlab.com/nbdkit/nbdkit