RFC for NBD protocol extension: extended headers
by Eric Blake
In response to this mail, I will be cross-posting a series of patches
to multiple projects as a proof-of-concept implementation and request
for comments on a new NBD protocol extension, called
NBD_OPT_EXTENDED_HEADERS. With this in place, it will be possible for
clients to request 64-bit zero, trim, cache, and block status
operations when supported by the server.
Not yet complete: an implementation of this in nbdkit. I also plan to
tweak libnbd's 'nbdinfo --map' and 'nbdcopy' to take advantage of the
larger block status results. Also, with 64-bit commands, we may want
to make it easier for servers to advertise an actual maximum size they
are willing to accept for the commands in question (for example, a
server may be happy with a full 64-bit block status, but still want to
limit non-fast zero and cache to a smaller limit to avoid denial of
service).
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
[PATCH v3] spec: Clarify BLOCK_STATUS reply details
by Eric Blake
Our docs were inconsistent on whether a NBD_REPLY_TYPE_BLOCK_STATUS
reply chunk can exceed the client's requested length, and silent on
whether the lengths must be consistent when multiple contexts were
negotiated. Clarify this to match existing practice as implemented in
qemu-nbd. Clean up some nearby grammatical errors while at it.
---
Another round of rewording attempts, based on feedback from Rich on
v2. I went ahead and pushed patch 1 and 2 of the v2 series, as they
were less controversial.
doc/proto.md | 42 ++++++++++++++++++++++++++++--------------
1 file changed, 28 insertions(+), 14 deletions(-)
diff --git a/doc/proto.md b/doc/proto.md
index 8a817d2..bacccfa 100644
--- a/doc/proto.md
+++ b/doc/proto.md
@@ -882,15 +882,25 @@ The procedure works as follows:
server supports.
- During transmission, a client can then indicate interest in metadata
for a given region by way of the `NBD_CMD_BLOCK_STATUS` command,
- where *offset* and *length* indicate the area of interest. The
- server MUST then respond with the requested information, for all
- contexts which were selected during negotiation. For every metadata
- context, the server sends one set of extent chunks, where the sizes
- of the extents MUST be less than or equal to the length as specified
- in the request. Each extent comes with a *flags* field, the
- semantics of which are defined by the metadata context.
-- A server MUST reply to `NBD_CMD_BLOCK_STATUS` with a structured
- reply of type `NBD_REPLY_TYPE_BLOCK_STATUS`.
+ where *offset* and *length* indicate the area of interest. On
+ success, the server MUST respond with one structured reply chunk of
+ type `NBD_REPLY_TYPE_BLOCK_STATUS` per metadata context selected
+ during negotiation, where each reply chunk is a list of one or more
+ consecutive extents for that context. Each extent comes with a
+ *flags* field, the semantics of which are defined by the metadata
+ context.
+
+The client's requested *length* is only a hint to the server, so the
+cumulative extent length contained in a chunk of the server's reply
+may be shorter or longer than the original request. When more than one
+metadata context was negotiated, the reply chunks for the different
+contexts of a single block status request need not have the same
+number of extents or cumulative extent length.
+
+In the request, the client may use the `NBD_CMD_FLAG_REQ_ONE` command
+flag to further constrain the server's reply so that each chunk
+contains exactly one extent whose length does not exceed the client's
+original *length*.
A client MUST NOT use `NBD_CMD_BLOCK_STATUS` unless it selected a
nonzero number of metadata contexts during negotiation, and used the
@@ -1778,8 +1788,8 @@ MUST initiate a hard disconnect.
*length* MUST be 4 + (a positive integer multiple of 8). This reply
represents a series of consecutive block descriptors where the sum
of the length fields within the descriptors is subject to further
- constraints documented below. This chunk type MUST appear
- exactly once per metadata ID in a structured reply.
+ constraints documented below. A successful block status request MUST
+ have exactly one status chunk per negotiated metadata context ID.
The payload starts with:
@@ -1801,15 +1811,19 @@ MUST initiate a hard disconnect.
*length* of the final extent MAY result in a sum larger than the
original requested length, if the server has that information anyway
as a side effect of reporting the status of the requested region.
+ When multiple metadata contexts are negotiated, the reply chunks for
+ the different contexts need not have the same number of extents or
+ cumulative extent length.
Even if the client did not use the `NBD_CMD_FLAG_REQ_ONE` flag in
its request, the server MAY return fewer descriptors in the reply
than would be required to fully specify the whole range of requested
information to the client, if looking up the information would be
too resource-intensive for the server, so long as at least one
- extent is returned. Servers should however be aware that most
- clients implementations will then simply ask for the next extent
- instead.
+ extent is returned. Servers should however be aware that most
+ client implementations will likely follow up with a request for
+ extent information at the first offset not covered by a
+ reduced-length reply.
All error chunk types have bit 15 set, and begin with the same
*error*, *message length*, and optional *message* fields as
--
2.35.1
[PATCH v3] New API: guestfs_device_name returning the drive name
by Richard W.M. Jones
For each drive added, return the name. For example calling this with
index 0 will return the string "/dev/sda". I called it
guestfs_device_name (not drive_name) for consistency with the existing
guestfs_device_index function.
You don't really need to call this function. You can follow the
advice here:
https://libguestfs.org/guestfs.3.html#block-device-naming
and assume that drives are added with predictable names like
"/dev/sda", "/dev/sdb", etc.
However it's useful to expose the internal guestfs_int_drive_name
function, especially since handling names beyond index 26 is tricky
(https://rwmj.wordpress.com/2011/01/09/how-are-linux-drives-named-beyond-d...)
Fixes: https://github.com/libguestfs/libguestfs/issues/80
---
generator/actions_core.ml | 24 +++++++++++++++++++++++-
lib/drives.c | 15 +++++++++++++++
2 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/generator/actions_core.ml b/generator/actions_core.ml
index ce9ee39cca..dc12fdc33e 100644
--- a/generator/actions_core.ml
+++ b/generator/actions_core.ml
@@ -737,7 +737,29 @@ returns the index of the device in the list of devices.
Index numbers start from 0. The named device must exist,
for example as a string returned from C<guestfs_list_devices>.
-See also C<guestfs_list_devices>, C<guestfs_part_to_dev>." };
+See also C<guestfs_list_devices>, C<guestfs_part_to_dev>,
+C<guestfs_device_name>." };
+
+ { defaults with
+ name = "device_name"; added = (1, 49, 1);
+ style = RString (RPlainString, "name"), [Int "index"], [];
+ tests = [
+ InitEmpty, Always, TestResult (
+ [["device_name"; "0"]], "STREQ (ret, \"/dev/sda\")"), [];
+ InitEmpty, Always, TestResult (
+ [["device_name"; "1"]], "STREQ (ret, \"/dev/sdb\")"), [];
+ InitEmpty, Always, TestLastFail (
+ [["device_name"; "99"]]), []
+ ];
+ shortdesc = "convert device index to name";
+ longdesc = "\
+This function takes a device index and returns the device
+name. For example index C<0> will return the string C</dev/sda>.
+
+The drive index must have been added to the handle.
+
+See also C<guestfs_list_devices>, C<guestfs_part_to_dev>,
+C<guestfs_device_index>." };
{ defaults with
name = "shutdown"; added = (1, 19, 16);
diff --git a/lib/drives.c b/lib/drives.c
index fd95308d2d..a6179fc367 100644
--- a/lib/drives.c
+++ b/lib/drives.c
@@ -31,6 +31,7 @@
#include <netdb.h>
#include <arpa/inet.h>
#include <assert.h>
+#include <errno.h>
#include <libintl.h>
#include "c-ctype.h"
@@ -1084,3 +1085,17 @@ guestfs_impl_device_index (guestfs_h *g, const char *device)
error (g, _("%s: device not found"), device);
return r;
}
+
+char *
+guestfs_impl_device_name (guestfs_h *g, int index)
+{
+ char drive_name[64];
+
+ if (index < 0 || index >= g->nr_drives) {
+ guestfs_int_error_errno (g, EINVAL, _("drive index out of range"));
+ return NULL;
+ }
+
+ guestfs_int_drive_name (index, drive_name);
+ return safe_asprintf (g, "/dev/sd%s", drive_name);
+}
--
2.35.1
[PATCH 0/2] lib: Add API for reading the "name" field for a drive
by Richard W.M. Jones
Patch 1 seems uncontroversial.
Patch 2 is tricky. This is related to the following RFE:
https://github.com/libguestfs/libguestfs/issues/80
I initially believed that the reporter wanted to just associate some
general data per drive, and wanted to reuse the (unused) name field
for this. That's what this patch implements.
However, I have since come to believe that he's in fact just trying to
get the name of the drive that was just added. There's no need for any API
for that, you can just assume drives are called (from the guestfs API
point of view) /dev/sda, /dev/sdb etc.
Adding patch 2 probably confuses things, since I guess most people
would assume that an API called guestfs_get_drive_name would return
the drive name (/dev/sda) not some string that you have to add.
Rich.
golang examples fail to find libnbd
by info@maximka.de
I downloaded libnbd from GitHub and compiled it successfully, but for some reason the Go executable created by `make` fails to find libnbd:
✔ ~/git/github/libguestfs/libnbd [master|✔]
21:24 $ ./golang/examples/get_size/get_size
./golang/examples/get_size/get_size: /lib/x86_64-linux-gnu/libnbd.so.0: version `LIBNBD_1.12' not found (required by ./golang/examples/get_size/get_size)
./golang/examples/get_size/get_size: /lib/x86_64-linux-gnu/libnbd.so.0: version `LIBNBD_1.8' not found (required by ./golang/examples/get_size/get_size)
./golang/examples/get_size/get_size: /lib/x86_64-linux-gnu/libnbd.so.0: version `LIBNBD_1.6' not found (required by ./golang/examples/get_size/get_size)
./golang/examples/get_size/get_size: /lib/x86_64-linux-gnu/libnbd.so.0: version `LIBNBD_1.4' not found (required by ./golang/examples/get_size/get_size)
Could anyone explain why Go compiles the executable without any issue, but it fails to run?
I'd appreciate any feedback!
Thanks,
Alexei
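For what it's worth, the symptom (missing `LIBNBD_1.x` version nodes)
usually means the dynamic linker is resolving the distro's older
libnbd.so.0 from /lib/x86_64-linux-gnu instead of the freshly built
one. Assuming the default libtool layout (lib/.libs in the source
tree; adjust if your build differs), one way to check and to run
against the in-tree library:

```shell
# Path of the example binary from the report above.
bin=./golang/examples/get_size/get_size

if [ -e "$bin" ]; then
  # Which libnbd.so.0 does the binary resolve at run time?
  ldd "$bin" | grep libnbd

  # Run against the just-built library instead of the system copy.
  LD_LIBRARY_PATH="$PWD/lib/.libs" "$bin"
fi
```

If memory serves, the libnbd build tree also provides a ./run wrapper
script that sets this up for you.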
[p2v PATCH] virt-p2v-make-kickstart: add packages for making the P2V ISO UEFI-bootable
by Laszlo Ersek
Including the shim-x64 and grub2-efi-x64-cdboot packages causes
livecd-creator to automatically build a UEFI-bootable CD.
I didn't modify the dependencies for distros other than the RH family,
because:
- I checked Debian briefly for any package providing a file ending with
"BOOTX64.EFI", and the only hit was irrelevant (it was a systemd file).
- Kickstart is RH-specific anyway.
Tested with an actual conversion:
- Built the ISO with the following livecd-creator and dnf fixes included:
- https://github.com/rpm-software-management/dnf/pull/1825
- https://github.com/livecd-tools/livecd-tools/pull/227
- Booted the P2V ISO in UEFI mode on QEMU, against a previously installed
(UEFI) RHEL-7.9 guest's disk.
- Converted the guest with the help of a virt-v2v conversion server VM,
using the QEMU output module.
- Successfully booted the converted guest.
Thanks: Neal Gompa <ngompa13(a)gmail.com>
Cc: Neal Gompa <ngompa13(a)gmail.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2038105
Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
---
dependencies.m4 | 4 ++++
virt-p2v-make-kickstart.pod | 14 +++++++++++++-
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/dependencies.m4 b/dependencies.m4
index fc3715e4d3c7..8a88b61d3120 100644
--- a/dependencies.m4
+++ b/dependencies.m4
@@ -59,6 +59,10 @@ ifelse(REDHAT,1,
dnl RHBZ#1157679
@hardware-support
+
+ dnl UEFI Boot (RHBZ#2038105)
+ shim-x64
+ grub2-efi-x64-cdboot
)
ifelse(DEBIAN,1,
diff --git a/virt-p2v-make-kickstart.pod b/virt-p2v-make-kickstart.pod
index c5e23d59222a..eda0c737c2e8 100644
--- a/virt-p2v-make-kickstart.pod
+++ b/virt-p2v-make-kickstart.pod
@@ -147,7 +147,7 @@ RHEL 6-based virt-p2v 0.9 they can boot on any hardware.
=head2 TESTING THE P2V ISO USING QEMU
-You can use qemu to test-boot the P2V ISO:
+You can use qemu to test-boot the P2V ISO (BIOS mode):
qemu-kvm -m 1024 -hda /tmp/guest.img -cdrom /tmp/livecd-p2v.iso -boot d
@@ -155,6 +155,18 @@ Note that C<-hda> is the (virtual) system that you want to convert
(for test purposes). It could be any guest type supported by
L<virt-v2v(1)>, including Windows or Red Hat Enterprise Linux.
+For UEFI:
+
+ qemu-kvm -m 1024 -M q35 \
+ -drive if=pflash,format=raw,file=/usr/share/OVMF/OVMF_CODE.fd,readonly=on \
+ -drive if=pflash,format=raw,file=/usr/share/OVMF/OVMF_VARS.fd,snapshot=on \
+ \
+ -drive if=none,format=raw,file=/tmp/guest.img,media=disk,id=guest-disk \
+ -device ide-hd,drive=guest-disk,bus=ide.0 \
+ \
+ -drive if=none,format=raw,file=/tmp/livecd-p2v.iso,media=cdrom,id=p2v-cdrom \
+ -device ide-cd,drive=p2v-cdrom,bus=ide.1,bootindex=1
+
=head2 TESTING PXE SUPPORT USING QEMU
=over 4
--
2.19.1.3.g30247aa5d201
[PATCH v2v 0/2] output: Add new -o kubevirt mode
by Richard W.M. Jones
This is very bare-bones at the moment. It only has minimal
documentation and has no tests at all.
Nevertheless, this adds a new -o kubevirt mode, so you can import
guests into Kubevirt, a system which adds virtualization support to
Kubernetes[0]. An example of the generated YAML metadata can be found at the
end of this cover email. Upstream examples of metadata to compare it
to can be found in [1].
I wasn't able to test this yet since my Kubernetes instance died and
no one knows how to fix it ...
I only bothered to map out the basic hardware and disks, there are
many to-dos which will require reading the Kubevirt source code to
finish.
Generating YAML is an adventure. The format is full of nasty
beartraps. What I'm doing is probably mostly safe, but I wouldn't be
surprised if there are security holes.
Rich.
---
# generated by virt-v2v 2.1.1local,libvirt
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: fedora-35
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: disk-0
    resources:
      requests:
        memory: 2048Mi
    cpu:
      cores: 1
    features:
      acpi: {}
      apic: {}
      pae: {}
  volumes:
  - hostDisk:
      path: /var/tmp//fedora-35-sda
      type: Disk
    name: disk-0
  terminationGracePeriodSeconds: 0
[0] https://github.com/kubevirt/kubevirt
[1] https://github.com/kubevirt/kubevirt/tree/main/examples
[PATCH] appliance: don't read extfs signature from QCOW2 image directly
by Andrey Drobyshev
If the appliance is a QCOW2 image, the function get_root_uuid_with_file()
fails to read the ext filesystem signature (0x53EF at offset 0x438) from it.
This results in the following error:
libguestfs: error: /usr/lib64/guestfs/appliance/root: appliance is not
an extfs filesystem
The error itself is harmless, but misleading. So let's skip retrieving
the signature and UUID in case the image contains a QCOW2 header. It's
safe because in this case we'll retrieve them later from the RAW image
dumped from that QCOW2 by "qemu-img dd".
Signed-off-by: Andrey Drobyshev <andrey.drobyshev(a)virtuozzo.com>
---
lib/appliance-kcmdline.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/lib/appliance-kcmdline.c b/lib/appliance-kcmdline.c
index 8b78655eb..092d37329 100644
--- a/lib/appliance-kcmdline.c
+++ b/lib/appliance-kcmdline.c
@@ -65,7 +65,7 @@
static char *
get_root_uuid_with_file (guestfs_h *g, const char *appliance)
{
- unsigned char magic[2], uuid[16];
+ unsigned char magic[4], uuid[16];
char *ret;
int fd;
@@ -74,6 +74,10 @@ get_root_uuid_with_file (guestfs_h *g, const char *appliance)
perrorf (g, _("open: %s"), appliance);
return NULL;
}
+  if (read (fd, magic, 4) != 4 || !strncmp ((char *) magic, "QFI\xfb", 4)) {
+    /* No point looking for extfs signature in QCOW2 directly. */
+    close (fd);
+    return NULL;
+  }
if (lseek (fd, 0x438, SEEK_SET) != 0x438) {
magic_error:
error (g, _("%s: cannot read extfs magic in superblock"), appliance);
--
2.35.1