RFC for NBD protocol extension: extended headers
by Eric Blake
In response to this mail, I will be cross-posting a series of patches
to multiple projects as a proof-of-concept implementation and request
for comments on a new NBD protocol extension, called
NBD_OPT_EXTENDED_HEADERS. With this in place, it will be possible for
clients to request 64-bit zero, trim, cache, and block status
operations when supported by the server.
Not yet complete: an implementation of this in nbdkit. I also plan to
tweak libnbd's 'nbdinfo --map' and 'nbdcopy' to take advantage of the
larger block status results. Also, with 64-bit commands, we may want to
make it easier for servers to advertise the actual maximum size they
are willing to accept for the commands in question (for example, a
server may be happy with a full 64-bit block status, but still want to
cap non-fast zero and cache at a smaller size to avoid denial of
service).
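As a rough illustration of what the extension buys clients (a sketch
only; the chunk cap and helper function below are made up for the
example, not part of the proposal): without 64-bit request lengths, a
large zero or trim has to be split into requests that fit the current
32-bit length field, possibly capped further by a server-advertised
maximum.

# Hypothetical sketch: splitting a large zero/trim into chunks that fit
# a 32-bit length field, as clients must do without extended headers.
def split_request(offset, length, max_chunk=(1 << 32) - (1 << 20)):
    # max_chunk is an assumed per-command cap: something below 2^32,
    # possibly lowered further by a server-advertised maximum.
    while length > 0:
        n = min(length, max_chunk)
        yield offset, n
        offset += n
        length -= n

# With extended headers a client could send a single 5 TiB trim;
# without them it needs well over a thousand ~4 GiB requests.
chunks = list(split_request(0, 5 * 2**40))
print(len(chunks))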
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
[PATCH 0/5] Fix rhv-upload output
by Nir Soffer
Fix problems in the new rhv-upload implementation:
- The plugin does not flush to all connections in flush() (see the
  sketch after this list).
- The plugin does not close all connections in cleanup().
- Idle connections are closed by the imageio server, and we don't have
  a safe way to recover.
- virt-v2v tries to get the disk allocation using the imageio output,
  but the imageio output does not support extents. Even if it did, the
  call is made after the transfer has been finalized, so it does not
  have access to the storage.
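Here is a minimal sketch of the intended flush/close behavior for the
connection pool (the pool layout, the PATCH flush body and the helper
names are assumptions for illustration, not the plugin's actual code):

# Sketch: make flush() and cleanup() touch *every* pooled connection.
import json
import queue

pool = queue.Queue()   # filled elsewhere with http.client connections

def _all_connections():
    conns = []
    while not pool.empty():
        conns.append(pool.get())
    return conns

def flush_all(path):
    body = json.dumps({"op": "flush"}).encode()
    headers = {"Content-Type": "application/json",
               "Content-Length": str(len(body))}
    conns = _all_connections()
    try:
        for http in conns:
            http.request("PATCH", path, body=body, headers=headers)
            r = http.getresponse()
            r.read()
            if r.status != 200:
                raise RuntimeError("flush failed: %d %s"
                                   % (r.status, r.reason))
    finally:
        for http in conns:
            pool.put(http)

def close_all():
    for http in _all_connections():
        http.close()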
Problems not fixed yet:
- Image transfer is finalized *before* closing the connection to imageio - this
  always times out with RHV < 4.4.9, and succeeds by mistake with RHV 4.4.9
  due to a regression that will be fixed in 4.4.10. This will be a non-issue in
  the next RHV version[1]. To support older RHV versions, virt-v2v must finalize
  the image transfer *after* closing the output.
Tested on RHEL 8.6 with upstream nbdkit and libnbd.
[1] https://github.com/oVirt/ovirt-imageio/pull/15
Fixes https://bugzilla.redhat.com/2032324
Nir Soffer (5):
output/rhv-upload-plugin: Fix flush and close
v2v/lib/util.ml: Get disk allocation from input
output/rhv-upload-plugin: Extract send_flush() helper
output/rhv-upload-plugin: Track http last request time
output/rhv-upload-plugin: Keep connections alive
lib/utils.ml | 2 +-
output/rhv-upload-plugin.py | 151 ++++++++++++++++++++++++++----------
2 files changed, 113 insertions(+), 40 deletions(-)
--
2.33.1
Re: [Libguestfs] libnbd | Failed pipeline for master | 70da51e5
by Richard W.M. Jones
Hi Martin,
On Thu, Sep 02, 2021 at 04:05:56PM +0000, GitLab wrote:
> GitLab
> ✖ Pipeline #364204388 has failed!
>
> Project nbdkit / libnbd
> Branch ● master
> Commit ● 70da51e5
> interop: Link interop-nbd-server-tls with -lgnu...
> Commit Author ● Richard W.M. Jones
>
> Pipeline #364204388 triggered by ● Richard W.M. Jones
> had 1 failed job.
> Failed jobs
> ✖ builds x64-opensuse-tumbleweed
> GitLab
This is failing on a new test I added, but it's failing because of how
a particular package is built in OpenSUSE.
The new test is:
https://gitlab.com/nbdkit/libnbd/-/commit/c833fa1226092fd51b1211fa195a2a3...
which tries to test libnbd client with TLS enabled against nbd-server.
nbd-server in OpenSUSE gives this error:
Error: inetd mode requires syslog
Exiting.
which means it was compiled without --enable-syslog.
I notice that the related test is skipped:
SKIP: interop-nbd-server
========================
Test skipped based on ci/skipped_tests file
SKIP interop-nbd-server (exit status: 77)
The format of ci/skipped_tests is pretty odd. Is this patch OK?
diff --git a/ci/skipped_tests b/ci/skipped_tests
index e2de9330..c494b9eb 100644
--- a/ci/skipped_tests
+++ b/ci/skipped_tests
@@ -1,9 +1,9 @@
# Old nbd-server and built without syslog support, tests deadlock, old qemu-img version
-^Ubuntu-18\.04$;interop/interop-nbd-server interop/list-exports-nbd-server interop/structured-read.sh
-^openSUSE Leap-15;interop/interop-nbd-server interop/list-exports-nbd-server
+^Ubuntu-18\.04$;interop/interop-nbd-server interop/interop-nbd-server-tls interop/list-exports-nbd-server interop/structured-read.sh
+^openSUSE Leap-15;interop/interop-nbd-server interop/interop-nbd-server-tls interop/list-exports-nbd-server
# Similar for Tumbleweed, except tests do not deadlock, only limit to version 2021* for now
-^openSUSE Tumbleweed-2021;interop/interop-nbd-server interop/list-exports-nbd-server
+^openSUSE Tumbleweed-2021;interop/interop-nbd-server interop/interop-nbd-server-tls interop/list-exports-nbd-server
# Debian 10 has weird golang issues (old golang anyway) and old qemu-img
^Debian GNU/Linux-10;golang/run-tests.sh interop/structured-read.sh
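For what it's worth, here is a rough sketch of how the format seems to
be interpreted (one "<OS-name regex>;<space-separated tests>" entry per
non-comment line); the exact matching semantics and how the OS string
is built are my assumptions, not taken from the ci scripts:

# Sketch: parse ci/skipped_tests and decide which tests to skip.
import re

def skipped_tests(path, os_name):
    skipped = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            pattern, tests = line.split(";", 1)
            if re.search(pattern, os_name):
                skipped.update(tests.split())
    return skipped

# e.g. skipped_tests("ci/skipped_tests", "openSUSE Tumbleweed-20210902")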
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
[PATCH 0/3] resolve conflict between manual and libvirt-assigned PCI addresses
by Laszlo Ersek
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2034160
The first patch extends our current <qemu:commandline> hack, moving the
virtio-net-pci device to slot 0x1E, where it is very unlikely to
conflict with any libvirt-assigned PCI address.
The second patch is only refactoring.
The third patch resolves the conflict on libvirt >= 3.8.0 in a better
way (suggested by Rich): such libvirt versions permit SLIRP network
address and prefix specifications right in the <interface> element.
Therefore we can have libvirt manage the virtio-net device for us,
similarly to virtio-rng, virtio-scsi, and virtio-serial. The (updated)
<qemu:commandline> hack is preserved for libvirt < 3.8.0.
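As a rough illustration of what that ends up looking like, here is a
small Python/ElementTree sketch; the element and attribute names, and
the 10.0.2.0/24 values, are my assumptions about the libvirt schema
rather than something taken from the patches:

# Sketch: build a user-mode <interface> carrying the SLIRP network
# address and prefix directly, instead of a <qemu:commandline> hack.
import xml.etree.ElementTree as ET

iface = ET.Element("interface", type="user")
ET.SubElement(iface, "model", type="virtio")
ET.SubElement(iface, "ip", family="ipv4",
              address="10.0.2.0", prefix="24")
print(ET.tostring(iface, encoding="unicode"))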
Gruesomely meticulously tested. (See the Notes sections on the patches.)
Sanity-tested both back-ends *without* the "--network" switch as well.
Thanks,
Laszlo
Laszlo Ersek (3):
launch-libvirt: place our virtio-net-pci device in slot 0x1e
lib: extract NETWORK_ADDRESS and NETWORK_PREFIX as macros
launch-libvirt: add virtio-net via the standard <interface> element
lib/guestfs-internal.h | 18 ++++++++++++++++++
lib/launch-direct.c | 2 +-
lib/launch-libvirt.c | 34 ++++++++++++++++++++++++++++++----
3 files changed, 49 insertions(+), 5 deletions(-)
base-commit: 4af6d68e2d8b856d91fa5527216ea3db04556086
--
2.19.1.3.g30247aa5d201
[PATCH nbdkit 1/2] tests/test-python-plugin.py: Allow test to use large disks
by Richard W.M. Jones
The Python test harness uses a plugin which always creates a fully
allocated disk backed by an in-memory bytearray. This prevents us
from testing very large disks (since we can easily run out of
memory).
Add a feature allowing large all-zero disks to be tested. The disk is
not allocated and non-zero writes will fail, but everything else
works.
---
tests/test-python-plugin.py | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/tests/test-python-plugin.py b/tests/test-python-plugin.py
index 70d545db..a30b7f64 100644
--- a/tests/test-python-plugin.py
+++ b/tests/test-python-plugin.py
@@ -51,13 +51,17 @@ def config_complete():
def open(readonly):
+ if cfg.get('no_disk', False):
+ disk = None
+ else:
+ disk = bytearray(cfg.get('size', 0))
return {
- 'disk': bytearray(cfg.get('size', 0))
+ 'disk': disk
}
def get_size(h):
- return len(h['disk'])
+ return cfg.get('size', 0)
def is_rotational(h):
@@ -123,6 +127,7 @@ def pwrite(h, buf, offset, flags):
actual_fua = bool(flags & nbdkit.FLAG_FUA)
assert expect_fua == actual_fua
end = offset + len(buf)
+ assert h['disk'] is not None
h['disk'][offset:end] = buf
@@ -134,7 +139,8 @@ def trim(h, count, offset, flags):
expect_fua = cfg.get('trim_expect_fua', False)
actual_fua = bool(flags & nbdkit.FLAG_FUA)
assert expect_fua == actual_fua
- h['disk'][offset:offset+count] = bytearray(count)
+ if h['disk'] is not None:
+ h['disk'][offset:offset+count] = bytearray(count)
def zero(h, count, offset, flags):
@@ -147,7 +153,8 @@ def zero(h, count, offset, flags):
expect_fast_zero = cfg.get('zero_expect_fast_zero', False)
actual_fast_zero = bool(flags & nbdkit.FLAG_FAST_ZERO)
assert expect_fast_zero == actual_fast_zero
- h['disk'][offset:offset+count] = bytearray(count)
+ if h['disk'] is not None:
+ h['disk'][offset:offset+count] = bytearray(count)
def cache(h, count, offset, flags):
--
2.32.0
[v2v PATCH] output_rhv: restrict block status collection to the old RHV output
by Laszlo Ersek
Nir reports that, despite the comment we removed in commit a2a4f7a09996,
we generally cannot access the output NBD servers in the finalization
stage. In particular, in the "rhv_upload_finalize" function
[output/output_rhv_upload.ml], the output disks are disconnected before
"create_ovf" is called.
Consequently, the "get_disk_allocated" call in the "create_ovf" ->
"add_disks" -> "get_disk_allocated" chain fails.
Rich suggests that we explicitly restrict the "get_disk_allocated" call
with a new optional boolean parameter to the one output plugin that really
needs it, namely the old RHV one.
Accordingly, revert the VDSM test case to its state at (a2a4f7a09996^).
Cc: Nir Soffer <nsoffer(a)redhat.com>
Fixes: a2a4f7a09996a5e66d027d0d9692e083eb0a8128
Reported-by: Nir Soffer <nsoffer(a)redhat.com>
Suggested-by: Richard W.M. Jones <rjones(a)redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2034240
Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
---
lib/create_ovf.mli | 3 +-
lib/create_ovf.ml | 35 +++++++++++++---------
output/output_rhv.ml | 4 +--
tests/test-v2v-o-vdsm-options.ovf.expected | 4 +--
4 files changed, 27 insertions(+), 19 deletions(-)
diff --git a/lib/create_ovf.mli b/lib/create_ovf.mli
index 0d1cc5a9311a..d6d4e62eeb86 100644
--- a/lib/create_ovf.mli
+++ b/lib/create_ovf.mli
@@ -46,7 +46,8 @@ val ovf_flavour_to_string : ovf_flavour -> string
val create_ovf : Types.source -> Types.inspect ->
Types.target_meta -> int64 list ->
Types.output_allocation -> string -> string -> string list ->
- string list -> string -> string -> ovf_flavour -> DOM.doc
+ string list -> ?need_actual_sizes:bool -> string -> string ->
+ ovf_flavour -> DOM.doc
(** Create the OVF file.
Actually a {!DOM} document is created, not a file. It can be written
diff --git a/lib/create_ovf.ml b/lib/create_ovf.ml
index dbac3405989b..678d29942abe 100644
--- a/lib/create_ovf.ml
+++ b/lib/create_ovf.ml
@@ -531,7 +531,8 @@ let rec create_ovf source inspect
{ output_name; guestcaps; target_firmware; target_nics }
sizes
output_alloc output_format
- sd_uuid image_uuids vol_uuids dir vm_uuid ovf_flavour =
+ sd_uuid image_uuids vol_uuids ?(need_actual_sizes = false) dir
+ vm_uuid ovf_flavour =
assert (List.length sizes = List.length vol_uuids);
let memsize_mb = source.s_memory /^ 1024L /^ 1024L in
@@ -745,7 +746,7 @@ let rec create_ovf source inspect
(* Add disks to the OVF XML. *)
add_disks sizes guestcaps output_alloc output_format
- sd_uuid image_uuids vol_uuids dir ovf_flavour ovf;
+ sd_uuid image_uuids vol_uuids need_actual_sizes dir ovf_flavour ovf;
(* Old virt-v2v ignored removable media. XXX *)
@@ -791,7 +792,7 @@ and get_flavoured_section ovf ovirt_path rhv_path rhv_path_attr = function
(* This modifies the OVF DOM, adding a section for each disk. *)
and add_disks sizes guestcaps output_alloc output_format
- sd_uuid image_uuids vol_uuids dir ovf_flavour ovf =
+ sd_uuid image_uuids vol_uuids need_actual_sizes dir ovf_flavour ovf =
let references =
let nodes = path_to_nodes ovf ["ovf:Envelope"; "References"] in
match nodes with
@@ -839,7 +840,12 @@ and add_disks sizes guestcaps output_alloc output_format
b /^ 1073741824L
in
let size_gb = bytes_to_gb size in
- let actual_size = get_disk_allocated ~dir ~disknr:i in
+ let actual_size =
+ if need_actual_sizes then
+ get_disk_allocated ~dir ~disknr:i
+ else
+ None
+ in
let format_for_rhv =
match output_format with
@@ -891,16 +897,17 @@ and add_disks sizes guestcaps output_alloc output_format
"ovf:disk-type", "System"; (* RHBZ#744538 *)
"ovf:boot", if is_bootable_drive then "True" else "False";
] in
- (* Ovirt-engine considers the "ovf:actual_size" attribute mandatory. If
- * we don't know the actual size, we must create the attribute with
- * empty contents.
- *)
- List.push_back attrs
- ("ovf:actual_size",
- match actual_size with
- | None -> ""
- | Some actual_size -> Int64.to_string (bytes_to_gb actual_size)
- );
+ if (need_actual_sizes) then
+ (* Ovirt-engine considers the "ovf:actual_size" attribute mandatory.
+ * If we don't know the actual size, we must create the attribute
+ * with empty contents.
+ *)
+ List.push_back attrs
+ ("ovf:actual_size",
+ match actual_size with
+ | None -> ""
+ | Some actual_size -> Int64.to_string (bytes_to_gb actual_size)
+ );
e "Disk" !attrs [] in
append_child disk disk_section;
diff --git a/output/output_rhv.ml b/output/output_rhv.ml
index b902a7ee4619..6a67b7aa152b 100644
--- a/output/output_rhv.ml
+++ b/output/output_rhv.ml
@@ -183,8 +183,8 @@ and rhv_finalize dir source inspect target_meta
(* Create the metadata. *)
let ovf =
Create_ovf.create_ovf source inspect target_meta sizes
- output_alloc output_format esd_uuid image_uuids vol_uuids dir vm_uuid
- Create_ovf.RHVExportStorageDomain in
+ output_alloc output_format esd_uuid image_uuids vol_uuids
+ ~need_actual_sizes:true dir vm_uuid Create_ovf.RHVExportStorageDomain in
(* Write it to the metadata file. *)
let dir = esd_mp // esd_uuid // "master" // "vms" // vm_uuid in
diff --git a/tests/test-v2v-o-vdsm-options.ovf.expected b/tests/test-v2v-o-vdsm-options.ovf.expected
index bd5b5e7d38ec..23ca180f4c2f 100644
--- a/tests/test-v2v-o-vdsm-options.ovf.expected
+++ b/tests/test-v2v-o-vdsm-options.ovf.expected
@@ -2,7 +2,7 @@
<ovf:Envelope xmlns:rasd='http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationS...' xmlns:vssd='http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettin...' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:ovf='http://schemas.dmtf.org/ovf/envelope/1/' xmlns:ovirt='http://www.ovirt.org/ovf' ovf:version='0.9'>
<!-- generated by virt-v2v -->
<References>
- <File ovf:href='VOL' ovf:id='VOL' ovf:description='generated by virt-v2v' ovf:size='#SIZE#'/>
+ <File ovf:href='VOL' ovf:id='VOL' ovf:description='generated by virt-v2v'/>
</References>
<NetworkSection>
<Info>List of networks</Info>
@@ -10,7 +10,7 @@
</NetworkSection>
<DiskSection>
<Info>List of Virtual Disks</Info>
- <Disk ovf:diskId='IMAGE' ovf:size='1' ovf:capacity='536870912' ovf:fileRef='VOL' ovf:parentRef='' ovf:vm_snapshot_id='#UUID#' ovf:volume-format='COW' ovf:volume-type='Sparse' ovf:format='http://en.wikipedia.org/wiki/Byte' ovf:disk-interface='VirtIO' ovf:disk-type='System' ovf:boot='True' ovf:actual_size='1'/>
+ <Disk ovf:diskId='IMAGE' ovf:size='1' ovf:capacity='536870912' ovf:fileRef='VOL' ovf:parentRef='' ovf:vm_snapshot_id='#UUID#' ovf:volume-format='COW' ovf:volume-type='Sparse' ovf:format='http://en.wikipedia.org/wiki/Byte' ovf:disk-interface='VirtIO' ovf:disk-type='System' ovf:boot='True'/>
</DiskSection>
<VirtualSystem ovf:id='VM'>
<Name>windows</Name>
--
2.19.1.3.g30247aa5d201
[PATCH nbdkit] plugins/python: Fix extents() count format string
by Nir Soffer
The plugin used "i" (int32) instead of "I" (uint32) for the count, so
when the client asked for 4294966784 bytes, the Python plugin got -512.
nbdkit: python.0: debug: python: extents count=4294966784 offset=0 req_one=0
...
nbdkit: python.0: debug: extents: count=-512 offset=0 flags=0
With this fix I can get extents from rhv-upload-plugin using nbdinfo.
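Concretely, the wraparound is just a 32-bit reinterpretation; a quick
Python illustration (not part of the patch) of the arithmetic behind
the log lines above:

# Reinterpreting the unsigned 32-bit count as a signed 32-bit value,
# which is effectively what the "i" format character did:
import struct

count = 4294966784                       # what the client asked for
as_int32 = struct.unpack("<i", struct.pack("<I", count))[0]
print(as_int32)                          # -512, matching the debug log
assert as_int32 == count - 2**32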
---
plugins/python/plugin.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/plugins/python/plugin.c b/plugins/python/plugin.c
index f85512b4..366619f9 100644
--- a/plugins/python/plugin.c
+++ b/plugins/python/plugin.c
@@ -957,21 +957,21 @@ py_extents (void *handle, uint32_t count, uint64_t offset,
ACQUIRE_PYTHON_GIL_FOR_CURRENT_SCOPE;
struct handle *h = handle;
PyObject *fn;
PyObject *r;
PyObject *iter, *t;
size_t size;
if (callback_defined ("extents", &fn)) {
PyErr_Clear ();
- r = PyObject_CallFunction (fn, "OiLI", h->py_h, count, offset, flags);
+ r = PyObject_CallFunction (fn, "OILI", h->py_h, count, offset, flags);
Py_DECREF (fn);
if (check_python_failure ("extents") == -1)
return -1;
iter = PyObject_GetIter (r);
if (iter == NULL) {
nbdkit_error ("extents method did not return "
"something which is iterable");
Py_DECREF (r);
return -1;
--
2.33.1
[PATCH] output/rhv-upload-plugin: Support extents
by Nir Soffer
Add extents support using imageio /extents API:
https://ovirt.github.io/ovirt-imageio/images.html#extents
Imageio does not yet support partial extents, so we cache the entire
image's extents and search for the requested range. The search could
probably be made faster with binary search, but it seems good enough as
is (a rough sketch of the bisect idea appears below).
Calling pwrite() and zero() invalidates the cached extents.
This can be useful for checking if an image is zero before copy, or
calculating the allocated size after the copy.
There is one caveat - imageio reports holes using qemu:allocation-depth,
not using base:allocation. So qcow2 images report holes for unallocated
areas, while raw images never report holes. To get the actual allocation
in RHV, the oVirt API should be used.
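The binary search mentioned above could look roughly like this (a
sketch only; it assumes the cached list is sorted by "start" and covers
the image contiguously, and the helper name is made up):

# Sketch: use bisect to jump to the first cached extent that can
# overlap the requested offset, instead of scanning from the start.
import bisect

def first_overlapping(starts, offset):
    # starts: sorted list of ext["start"] values, built once when
    # cached_extents is (re)filled.
    i = bisect.bisect_right(starts, offset) - 1
    return max(i, 0)

# extents() would then iterate cached_extents[first_overlapping(...):]
# and keep the existing clipping logic for the first and last extent.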
Testing is complicated:
1. Start the ovirt-imageio server
2. Add a ticket for the test image using an nbd socket
3. Start qemu-nbd serving the nbd socket
4. Run nbdkit with rhv-upload-plugin and an imageio https URL
I tested various images by getting extents with nbdinfo and qemu-img
map, and by copying the image from rhv-upload-plugin with qemu-img
convert.
To get extents using nbdinfo, we need a fix in the nbdkit python plugin:
https://listman.redhat.com/archives/libguestfs/2021-December/msg00196.html
---
output/rhv-upload-plugin.py | 79 +++++++++++++++++++++++++++++++++++++
1 file changed, 79 insertions(+)
diff --git a/output/rhv-upload-plugin.py b/output/rhv-upload-plugin.py
index 1cb837dd..4f0c02a5 100644
--- a/output/rhv-upload-plugin.py
+++ b/output/rhv-upload-plugin.py
@@ -42,20 +42,25 @@ url = None
cafile = None
insecure = False
is_ovirt_host = False
# List of options read from imageio server.
options = None
# Pool of HTTP connections.
pool = None
+# Cached extents for entire image. The imageio server does not support yet
+# getting partial extents, and getting all extents may be expensive, so we
+# cache the entire image extents.
+cached_extents = None
+
# Parse parameters.
def config(key, value):
global cafile, url, is_ovirt_host, insecure, size
if key == "cafile":
cafile = value
elif key == "insecure":
insecure = value.lower() in ['true', '1']
elif key == "is_ovirt_host":
@@ -123,20 +128,24 @@ def can_fua(h):
return options['can_flush']
def can_multi_conn(h):
# We can always handle multiple connections, and the number of NBD
# connections is independent of the number of HTTP clients in the
# pool.
return True
+def can_extents(h):
+ return options["can_extents"]
+
+
def get_size(h):
return size
# Any unexpected HTTP response status from the server will end up calling this
# function which logs the full error, and raises a RuntimeError exception.
def request_failed(r, msg):
status = r.status
reason = r.reason
try:
@@ -184,48 +193,54 @@ def pread(h, buf, offset, flags):
while got < count:
n = r.readinto(view[got:])
if n == 0:
request_failed(r,
"short read offset %d size %d got %d" %
(offset, count, got))
got += n
def pwrite(h, buf, offset, flags):
+ global cached_extents
+
count = len(buf)
flush = "y" if (options['can_flush'] and (flags & nbdkit.FLAG_FUA)) else "n"
with http_context(pool) as http:
http.putrequest("PUT", url.path + "?flush=" + flush)
# The oVirt server only uses the first part of the range, and the
# content-length.
http.putheader("Content-Range", "bytes %d-%d/*" %
(offset, offset + count - 1))
http.putheader("Content-Length", str(count))
http.endheaders()
try:
http.send(buf)
except BrokenPipeError:
pass
+ cached_extents = None
+
r = http.getresponse()
if r.status != 200:
request_failed(r,
"could not write sector offset %d size %d" %
(offset, count))
r.read()
def zero(h, count, offset, flags):
+ global cached_extents
+
# Unlike the trim and flush calls, there is no 'can_zero' method
# so nbdkit could call this even if the server doesn't support
# zeroing. If this is the case we must emulate.
if not options['can_zero']:
emulate_zero(h, count, offset, flags)
return
flush = bool(options['can_flush'] and (flags & nbdkit.FLAG_FUA))
# Construct the JSON request for zeroing.
@@ -233,39 +248,45 @@ def zero(h, count, offset, flags):
'offset': offset,
'size': count,
'flush': flush}).encode()
headers = {"Content-Type": "application/json",
"Content-Length": str(len(buf))}
with http_context(pool) as http:
http.request("PATCH", url.path, body=buf, headers=headers)
+ cached_extents = None
+
r = http.getresponse()
if r.status != 200:
request_failed(r,
"could not zero sector offset %d size %d" %
(offset, count))
r.read()
def emulate_zero(h, count, offset, flags):
+ global cached_extents
+
flush = "y" if (options['can_flush'] and (flags & nbdkit.FLAG_FUA)) else "n"
with http_context(pool) as http:
http.putrequest("PUT", url.path + "?flush=" + flush)
http.putheader("Content-Range",
"bytes %d-%d/*" % (offset, offset + count - 1))
http.putheader("Content-Length", str(count))
http.endheaders()
+ cached_extents = None
+
try:
buf = bytearray(128 * 1024)
while count > len(buf):
http.send(buf)
count -= len(buf)
http.send(memoryview(buf)[:count])
except BrokenPipeError:
pass
r = http.getresponse()
@@ -289,20 +310,76 @@ def flush(h, flags):
for http in iter_http_pool(pool):
http.request("PATCH", url.path, body=buf, headers=headers)
r = http.getresponse()
if r.status != 200:
request_failed(r, "could not flush")
r.read()
+def extents(h, count, offset, flags):
+ global cached_extents
+
+ if cached_extents is None:
+ cached_extents = get_all_extents()
+
+ end = offset + count
+
+ for ext in cached_extents:
+ start = ext["start"]
+ length = ext["length"]
+
+ # Stop when extent exceeds the requested range.
+ if start >= end:
+ return
+
+ # Skip over extents before the requested range.
+ if start + length <= offset:
+ continue
+
+ extent_type = 0
+
+ if ext["zero"]:
+ extent_type |= nbdkit.EXTENT_ZERO
+
+ # Old imageio server did not report holes. Note that imageio reports
+ # holes only for unallocated area in qcow2 image, so this flag may not
+ # be very useful.
+ if ext.get("hole"):
+ extent_type |= nbdkit.EXTENT_HOLE
+
+ # The first extent may start before the requested range.
+ if start < offset:
+ length -= offset - start
+ start = offset
+
+ # The last extent may end after the requested range.
+ if start + length > end:
+ length = end - start
+
+ yield start, length, extent_type
+
+
+def get_all_extents():
+ with http_context(pool) as http:
+ http.request("GET", url.path + "/extents")
+
+ r = http.getresponse()
+ if r.status != 200:
+ request_failed(r, "could not get extents")
+
+ data = r.read()
+
+ return json.loads(data)
+
+
# Modify http.client.HTTPConnection to work over a Unix domain socket.
# Derived from uhttplib written by Erik van Zijst under an MIT license.
# (https://pypi.org/project/uhttplib/)
# Ported to Python 3 by Irit Goihman.
class UnixHTTPConnection(HTTPConnection):
def __init__(self, path, timeout=socket._GLOBAL_DEFAULT_TIMEOUT):
self.path = path
HTTPConnection.__init__(self, "localhost", timeout=timeout)
def connect(self):
@@ -425,28 +502,30 @@ def get_options(http, url):
http.request("OPTIONS", url.path)
r = http.getresponse()
data = r.read()
if r.status == 200:
j = json.loads(data)
features = j["features"]
return {
"can_flush": "flush" in features,
"can_zero": "zero" in features,
+ "can_extents": "extents" in features,
"unix_socket": j.get('unix_socket'),
"max_readers": j.get("max_readers", 1),
"max_writers": j.get("max_writers", 1),
}
elif r.status == 405 or r.status == 204:
# Old imageio servers returned either 405 Method Not Allowed or
# 204 No Content (with an empty body).
return {
"can_flush": False,
"can_zero": False,
+ "can_extents": False,
"unix_socket": None,
"max_readers": 1,
"max_writers": 1,
}
else:
raise RuntimeError("could not use OPTIONS request: %d: %s" %
(r.status, r.reason))
--
2.33.1
[v2v PATCH v2] convert_linux: translate the first CD-ROM's references in boot conf files
by Laszlo Ersek
If the only CD-ROM in "s_removables" is on an IDE controller, and the
guest kernel represents it with a /dev/hdX device node, then convert
references to this device node, in the boot config files, to /dev/cdrom.
On the destination (after conversion), /dev/cdrom will point to whatever
node we converted the CD-ROM to, masking a potential i440fx -> q35 (IDE ->
SATA) board change.
If the only CD-ROM is not on an IDE controller, or the guest is modern
enough to represent the IDE CD-ROM as /dev/sr0, then perform no
translation. Namely, /dev/sr0 survives a potential i440fx -> q35 (IDE ->
SATA) board change intact.
When multiple CD-ROMs exist, emit a warning, and attempt the conversion on
the first CD-ROM, as a guess. This may be inexact, but we can't do better,
because:
- SATA, SCSI, and (on modern guests) IDE CD-ROMs are lumped together in
the /dev/sr* namespace, on the source side, and "s_removable_slot" is
useless for telling them apart, as we don't know the exact controller
topology (and OS enumeration order);
- after conversion: some OSes don't create /dev/cdrom* symlinks to all
CD-ROMs, and even if multiple such symlinks are created, their order is
basically random.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1637857
Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
---
Notes:
v2:
- Use List.length >= 2 rather than matching with three cons operations.
[Rich]
- At the same time, eliminate wrong precedence between "->" in a match,
and ";". [Rich]
- Match the first element of the cdrom list, and the contents of that
element, in a single "match" expression. [Rich]
- Not tested, due to the issue I described in
<8b3ce08b-ea47-dc1c-f441-c8b91708bd6f(a)redhat.com> (cannot provide a
mailing list URL because the archive seems to have stopped refreshing
itself?)
convert/convert_linux.ml | 19 +++++++++++++++++++
tests/test-v2v-cdrom.expected | 2 +-
tests/test-v2v-cdrom.xml.in | 4 +++-
tests/test-v2v-i-ova.xml | 2 +-
4 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/convert/convert_linux.ml b/convert/convert_linux.ml
index 8dc648169dcb..d49ecec03aeb 100644
--- a/convert/convert_linux.ml
+++ b/convert/convert_linux.ml
@@ -1020,6 +1020,25 @@ let convert (g : G.guestfs) source inspect keep_serial_console _ =
"xvd" ^ drive_name i, block_prefix_after_conversion ^ drive_name i
) source.s_disks in
+ (* Check the first CD-ROM. If its controller is IDE, and the OS is RHEL<=5,
+ * then translate the CD-ROM from "/dev/hd[SLOT]" to "/dev/cdrom". See
+ * RHBZ#1637857 for details.
+ *)
+ let cdroms = List.filter
+ (fun removable -> removable.s_removable_type = CDROM)
+ source.s_removables in
+ if List.length cdroms >= 2 then
+ warning (f_"multiple CD-ROMs found; translation of CD-ROM references \
+ may be inexact");
+ let map = map @
+ (match cdroms with
+ | { s_removable_controller = Some Source_IDE;
+ s_removable_slot = Some slot } :: _
+ when family = `RHEL_family && inspect.i_major_version <= 5 ->
+ [("hd" ^ drive_name slot, "cdrom")]
+ | _ -> []
+ ) in
+
if verbose () then (
eprintf "block device map:\n";
List.iter (
diff --git a/tests/test-v2v-cdrom.expected b/tests/test-v2v-cdrom.expected
index 34d2bf5961b0..17bd152d8e64 100644
--- a/tests/test-v2v-cdrom.expected
+++ b/tests/test-v2v-cdrom.expected
@@ -4,5 +4,5 @@
</disk>
<disk device='cdrom' type='file'>
<driver name='qemu' type='raw'/>
- <target dev='hdc' bus='ide'/>
+ <target dev='sdc' bus='sata'/>
</disk>
diff --git a/tests/test-v2v-cdrom.xml.in b/tests/test-v2v-cdrom.xml.in
index 6bad5eab1cd4..a6e1e3f514d5 100644
--- a/tests/test-v2v-cdrom.xml.in
+++ b/tests/test-v2v-cdrom.xml.in
@@ -35,7 +35,9 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='@abs_top_builddir(a)/test-data/phony-guests/blank-disk.img'/>
- <!-- virt-v2v should preserve the device name and bus -->
+ <!-- virt-v2v should change the bus to sata, due to Windows 7
+ triggering a machine type change from i440fx to q35. Beyond that,
+ virt-v2v should preserve the on-bus index. -->
<target dev='hdc' bus='ide'/>
</disk>
</devices>
diff --git a/tests/test-v2v-i-ova.xml b/tests/test-v2v-i-ova.xml
index d7383905fdc0..9f3c1974243f 100644
--- a/tests/test-v2v-i-ova.xml
+++ b/tests/test-v2v-i-ova.xml
@@ -28,7 +28,7 @@
</disk>
<disk device='cdrom' type='file'>
<driver name='qemu' type='raw'/>
- <target dev='hda' bus='ide'/>
+ <target dev='sda' bus='sata'/>
</disk>
<disk device='floppy' type='file'>
<driver name='qemu' type='raw'/>
--
2.19.1.3.g30247aa5d201