Virt-install, virt-customize, virt-builder
by Nguetchouang Ngongang Kevin
Hello everyone, I have two questions and I hope you can help me:
* Is it possible to specify extra qemu-img options when creating or
building an image/OS with virt-install or virt-builder?
* Is it possible to rebuild another OS (in the manner of virt-builder)
on an existing image file, but using virt-customize?
Best regards,
--
Nguetchouang Ngongang Kevin
ENS de Lyon
https://perso.ens-lyon.fr/kevin.nguetchouang/
2 years, 8 months
nbdkit: determining block size distribution
by Nikolaus Rath
Hello,
I want to find out the request size distribution of NBD read, write, and trim requests for a given workload. The background is that I want to figure out the ideal block size for the backing storage used by an nbdkit plugin.
It seems to me that the best way to get this information would be to write an appropriate nbdkit filter, but I was surprised that the stats filter output is rather rudimentary.
Would patches be accepted to add block size histograms? Or is there a better way to do this?
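To make it concrete, here is the kind of statistic I have in mind, sketched in plain Python (illustration only — an actual nbdkit filter would be written in C, and the power-of-two bucketing scheme is just my assumption about a sensible binning):

```python
# Illustration only: bucket NBD request sizes into power-of-two bins,
# keyed by operation, the way an extended stats filter might report them.
from collections import Counter

def bucket(size):
    # Smallest power of two >= size.
    b = 1
    while b < size:
        b *= 2
    return b

class SizeHistogram:
    def __init__(self):
        self.hist = Counter()

    def record(self, op, size):
        self.hist[(op, bucket(size))] += 1

    def report(self):
        for (op, b), n in sorted(self.hist.items()):
            print("%-6s <= %8d bytes: %d" % (op, b, n))

h = SizeHistogram()
for size in (4096, 4096, 65536, 5000):
    h.record("pread", size)
h.record("trim", 1048576)
h.report()
```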
Best,
-Nikolaus
--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«
[v2v PATCH 0/9] ensure x86-64-v2 uarch level for RHEL-9.0+ guests
by Laszlo Ersek
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076013
We don't specify a VCPU model in the destination domain description if
the source description doesn't specify one. This is a problem for x86-64
RHEL-9.0+ guests, when using the QEMU or libvirt output modules. Those
outputs default to the "qemu64" VCPU model in the absence of an explicit
VCPU model specification, and "qemu64" does not satisfy the "x86-64-v2"
microarchitecture level, which is required by RHEL-9.0+ guests. As a
result, these guests crash during boot, after conversion.
The series recognizes guests that are unable to boot on the default QEMU
VCPU model, and specifies host CPU passthrough for them in the QEMU and
libvirt output modules.
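For reference, the "x86-64-v2" level boils down to a handful of CPU feature flags; a quick way to sanity-check a flag set is sketched below (flag names as they appear in /proc/cpuinfo, following the x86-64 psABI microarchitecture level definitions — this is not code from the series, just background):

```python
# Sketch: test whether a CPU flag set (as listed in /proc/cpuinfo)
# covers the x86-64-v2 microarchitecture level. "pni" is the Linux
# name for SSE3.
X86_64_V2_FLAGS = frozenset([
    "cx16",     # CMPXCHG16B
    "lahf_lm",  # LAHF/SAHF in 64-bit mode
    "popcnt",
    "pni",      # SSE3
    "sse4_1",
    "sse4_2",
    "ssse3",
])

def satisfies_x86_64_v2(flags_line):
    return X86_64_V2_FLAGS <= set(flags_line.split())
```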
Testing: <https://bugzilla.redhat.com/show_bug.cgi?id=2076013#c18>.
Thanks,
Laszlo
Laszlo Ersek (9):
types: introduce the "gcaps_default_cpu" field
create_libvirt_xml: simplify match on (s_cpu_vendor, s_cpu_model)
create_libvirt_xml: normalize match on (s_cpu_vendor, s_cpu_model)
create_libvirt_xml: eliminate childless <cpu match="minimum"/> element
create_libvirt_xml: restrict 'match="minimum"' <cpu> attribute
production
create_libvirt_xml: honor "gcaps_default_cpu"
output_qemu: reflect source VCPU model to the QEMU command line
output_qemu: honor "gcaps_default_cpu"
convert_linux: set "gcaps_default_cpu = false" for x86_64 RHEL-9.0+
guests
convert/convert_linux.ml | 9 ++++++
convert/convert_windows.ml | 1 +
lib/types.ml | 3 ++
lib/types.mli | 14 +++++++++
output/create_libvirt_xml.ml | 31 +++++++++++---------
output/output_qemu.ml | 7 +++++
6 files changed, 51 insertions(+), 14 deletions(-)
base-commit: 8643970f9791b1c90dfd6a2dd1abfc9afef1fb52
--
2.19.1.3.g30247aa5d201
[v2v PATCH 0/2] recognize SATA disks in VMX files
by Laszlo Ersek
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1883802
Pino had added SATA disk parsing to libvirtd, for the sake of virt-v2v's
VDDK and VCENTER connection URIs, in RHBZ#1883588. The second patch in
this series seeks to parse SATA disks directly from VMX files ("-i
vmx"), modeled after Pino's libvirtd patches, and Rich's recent NVMe
disk parsing patch (see more precise references in the commit message).
The first patch in the series fixes an unrelated bug in the "-i vmx"
disk numbering. I discovered that bug because I didn't install a new
guest with just SATA disk(s) on my ESXi server; instead (out of
laziness) I added a new SATA disk to my existing Windows Server 2019
guest, which I had originally installed with a SCSI disk.
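For context, the SATA disk entries in a VMX file look roughly like this (hypothetical fragment, key names written from memory rather than copied from the test file):

```ini
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "disk-2.vmdk"
```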
Thanks,
Laszlo
Laszlo Ersek (2):
"-i vmx -it ssh": fix non-unique "s_disk_id"
input: -i vmx: Add support for SATA hard disks
input/parse_domain_from_vmx.ml | 51 ++++++---
tests/test-v2v-i-vmx-7.expected | 23 ++++
tests/test-v2v-i-vmx-7.vmx | 110 ++++++++++++++++++++
tests/test-v2v-i-vmx.sh | 3 +-
4 files changed, 173 insertions(+), 14 deletions(-)
create mode 100644 tests/test-v2v-i-vmx-7.expected
create mode 100755 tests/test-v2v-i-vmx-7.vmx
base-commit: 8643970f9791b1c90dfd6a2dd1abfc9afef1fb52
--
2.19.1.3.g30247aa5d201
[v2v PATCH v3] -o rhv-upload: wait for VM creation task
by Tomáš Golembiovský
The oVirt API call for VM creation finishes before the VM is actually
created. Entities may still be locked after virt-v2v terminates, and if
the user tries to perform (scripted) actions after virt-v2v, those
operations may fail. To prevent this it is useful to monitor the task
and wait for its completion. This also helps to prevent some corner-case
scenarios (which would be difficult to debug) where the VM creation job
fails after virt-v2v has already terminated with success.
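The pattern, distilled (sketch only; the callback stands in for the jobs_service search used in the real code, and the creation call itself is elided):

```python
# Distilled sketch of the approach: tag the creation request with a
# client-generated correlation id, then poll the jobs carrying that id
# until none is still running, bounded by a monotonic-clock deadline.
import time
import uuid

def wait_for_jobs(any_job_running, timeout=300, interval=10):
    """any_job_running(correlation_id) -> bool stands in for the
    jobs_service query the real code performs."""
    correlation_id = str(uuid.uuid4())
    # ... the creation call would pass query={'correlation_id': ...} ...
    deadline = time.monotonic() + timeout
    while any_job_running(correlation_id):
        if time.monotonic() > deadline:
            raise RuntimeError("timed out waiting for jobs %s"
                               % correlation_id)
        time.sleep(interval)
    return correlation_id
```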
Thanks: Nir Soffer
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1985827
Signed-off-by: Tomáš Golembiovský <tgolembi(a)redhat.com>
Reviewed-by: Arik Hadas <ahadas(a)redhat.com>
Reviewed-by: Nir Soffer <nsoffer(a)redhat.com>
---
output/rhv-upload-createvm.py | 57 ++++++++++++++++++-
.../ovirtsdk4/__init__.py | 10 +++-
.../ovirtsdk4/types.py | 19 +++++++
3 files changed, 84 insertions(+), 2 deletions(-)
Changes in v3:
- this time really increased sleep and decreased timeout
diff --git a/output/rhv-upload-createvm.py b/output/rhv-upload-createvm.py
index 50bb7e34..8887c52b 100644
--- a/output/rhv-upload-createvm.py
+++ b/output/rhv-upload-createvm.py
@@ -19,12 +19,54 @@
import json
import logging
import sys
+import time
+import uuid
from urllib.parse import urlparse
import ovirtsdk4 as sdk
import ovirtsdk4.types as types
+
+def debug(s):
+ if params['verbose']:
+ print(s, file=sys.stderr)
+ sys.stderr.flush()
+
+
+def jobs_completed(system_service, correlation_id):
+ jobs_service = system_service.jobs_service()
+
+ try:
+ jobs = jobs_service.list(
+ search="correlation_id=%s" % correlation_id)
+ except sdk.Error as e:
+ debug(
+ "Error searching for jobs with correlation id %s: %s" %
+ (correlation_id, e))
+ # We don't know, assume that jobs did not complete yet.
+ return False
+
+ # STARTED is the only "in progress" status, anything else means the job
+ # has already terminated.
+ if all(job.status != types.JobStatus.STARTED for job in jobs):
+ failed_jobs = [(job.description, str(job.status))
+ for job in jobs
+ if job.status != types.JobStatus.FINISHED]
+ if failed_jobs:
+ raise RuntimeError(
+ "Failed to create a VM! Failed jobs: %r" % failed_jobs)
+ return True
+ else:
+ running_jobs = [(job.description, str(job.status)) for job in jobs]
+ debug("Some jobs with correlation id %s are running: %s" %
+ (correlation_id, running_jobs))
+ return False
+
+
+# Seconds to wait for the VM import job to complete in oVirt.
+timeout = 3 * 60
+
# Parameters are passed in via a JSON doc from the OCaml code.
# Because this Python code ships embedded inside virt-v2v there
# is no formal API here.
@@ -67,6 +109,7 @@ system_service = connection.system_service()
cluster = system_service.clusters_service().cluster_service(params['rhv_cluster_uuid'])
cluster = cluster.get()
+correlation_id = str(uuid.uuid4())
vms_service = system_service.vms_service()
vm = vms_service.add(
types.Vm(
@@ -77,5 +120,17 @@ vm = vms_service.add(
data=ovf,
)
)
- )
+ ),
+ query={'correlation_id': correlation_id},
)
+
+# Wait for the import job to finish.
+endt = time.monotonic() + timeout
+while True:
+ time.sleep(10)
+ if jobs_completed(system_service, correlation_id):
+ break
+ if time.monotonic() > endt:
+ raise RuntimeError(
+ "Timed out waiting for VM creation!"
+ " Jobs still running for correlation id %s" % correlation_id)
diff --git a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py
index 0d8f33b3..e33d0714 100644
--- a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py
+++ b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py
@@ -59,6 +59,9 @@ class SystemService(object):
def disks_service(self):
return DisksService()
+ def jobs_service(self):
+ return JobsService()
+
def image_transfers_service(self):
return ImageTransfersService()
@@ -104,6 +107,11 @@ class DisksService(object):
return DiskService(disk_id)
+class JobsService(object):
+ def list(self, search=None):
+ return [types.Job()]
+
+
class ImageTransferService(object):
def __init__(self):
self._finalized = False
@@ -135,7 +143,7 @@ class StorageDomainsService(object):
class VmsService(object):
- def add(self, vm):
+ def add(self, vm, query=None):
return vm
def list(self, search=None):
diff --git a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py
index 5707fa3e..38d89573 100644
--- a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py
+++ b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py
@@ -141,6 +141,25 @@ class Initialization(object):
pass
+class JobStatus(Enum):
+ ABORTED = "aborted"
+ FAILED = "failed"
+ FINISHED = "finished"
+ STARTED = "started"
+ UNKNOWN = "unknown"
+
+ def __init__(self, image):
+ self._image = image
+
+ def __str__(self):
+ return self._image
+
+
+class Job(object):
+ description = "Fake job"
+ status = JobStatus.FINISHED
+
+
class StorageDomain(object):
def __init__(self, name=None):
pass
--
2.35.1
[v2v PATCH v2] -o rhv-upload: wait for VM creation task
by Tomáš Golembiovský
The oVirt API call for VM creation finishes before the VM is actually
created. Entities may still be locked after virt-v2v terminates, and if
the user tries to perform (scripted) actions after virt-v2v, those
operations may fail. To prevent this it is useful to monitor the task
and wait for its completion. This also helps to prevent some corner-case
scenarios (which would be difficult to debug) where the VM creation job
fails after virt-v2v has already terminated with success.
Thanks: Nir Soffer
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1985827
Signed-off-by: Tomáš Golembiovský <tgolembi(a)redhat.com>
Reviewed-by: Arik Hadas <ahadas(a)redhat.com>
Reviewed-by: Nir Soffer <nsoffer(a)redhat.com>
---
output/rhv-upload-createvm.py | 57 ++++++++++++++++++-
.../ovirtsdk4/__init__.py | 10 +++-
.../ovirtsdk4/types.py | 19 +++++++
3 files changed, 84 insertions(+), 2 deletions(-)
Changes in v2:
- fixed tests
- changes suggested by Nir
- increased sleep time and decreased timeout
diff --git a/output/rhv-upload-createvm.py b/output/rhv-upload-createvm.py
index 50bb7e34..642f0720 100644
--- a/output/rhv-upload-createvm.py
+++ b/output/rhv-upload-createvm.py
@@ -19,12 +19,54 @@
import json
import logging
import sys
+import time
+import uuid
from urllib.parse import urlparse
import ovirtsdk4 as sdk
import ovirtsdk4.types as types
+
+def debug(s):
+ if params['verbose']:
+ print(s, file=sys.stderr)
+ sys.stderr.flush()
+
+
+def jobs_completed(system_service, correlation_id):
+ jobs_service = system_service.jobs_service()
+
+ try:
+ jobs = jobs_service.list(
+ search="correlation_id=%s" % correlation_id)
+ except sdk.Error as e:
+ debug(
+ "Error searching for jobs with correlation id %s: %s" %
+ (correlation_id, e))
+ # We don't know, assume that jobs did not complete yet.
+ return False
+
+ # STARTED is the only "in progress" status, anything else means the job
+ # has already terminated.
+ if all(job.status != types.JobStatus.STARTED for job in jobs):
+ failed_jobs = [(job.description, str(job.status))
+ for job in jobs
+ if job.status != types.JobStatus.FINISHED]
+ if failed_jobs:
+ raise RuntimeError(
+ "Failed to create a VM! Failed jobs: %r" % failed_jobs)
+ return True
+ else:
+ running_jobs = [(job.description, str(job.status)) for job in jobs]
+ debug("Some jobs with correlation id %s are running: %s" %
+ (correlation_id, running_jobs))
+ return False
+
+
+# Seconds to wait for the VM import job to complete in oVirt.
+timeout = 5 * 60
+
# Parameters are passed in via a JSON doc from the OCaml code.
# Because this Python code ships embedded inside virt-v2v there
# is no formal API here.
@@ -67,6 +109,7 @@ system_service = connection.system_service()
cluster = system_service.clusters_service().cluster_service(params['rhv_cluster_uuid'])
cluster = cluster.get()
+correlation_id = str(uuid.uuid4())
vms_service = system_service.vms_service()
vm = vms_service.add(
types.Vm(
@@ -77,5 +120,17 @@ vm = vms_service.add(
data=ovf,
)
)
- )
+ ),
+ query={'correlation_id': correlation_id},
)
+
+# Wait for the import job to finish.
+endt = time.monotonic() + timeout
+while True:
+ time.sleep(1)
+ if jobs_completed(system_service, correlation_id):
+ break
+ if time.monotonic() > endt:
+ raise RuntimeError(
+ "Timed out waiting for VM creation!"
+ " Jobs still running for correlation id %s" % correlation_id)
diff --git a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py
index 0d8f33b3..e33d0714 100644
--- a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py
+++ b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/__init__.py
@@ -59,6 +59,9 @@ class SystemService(object):
def disks_service(self):
return DisksService()
+ def jobs_service(self):
+ return JobsService()
+
def image_transfers_service(self):
return ImageTransfersService()
@@ -104,6 +107,11 @@ class DisksService(object):
return DiskService(disk_id)
+class JobsService(object):
+ def list(self, search=None):
+ return [types.Job()]
+
+
class ImageTransferService(object):
def __init__(self):
self._finalized = False
@@ -135,7 +143,7 @@ class StorageDomainsService(object):
class VmsService(object):
- def add(self, vm):
+ def add(self, vm, query=None):
return vm
def list(self, search=None):
diff --git a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py
index 5707fa3e..38d89573 100644
--- a/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py
+++ b/tests/test-v2v-o-rhv-upload-module/ovirtsdk4/types.py
@@ -141,6 +141,25 @@ class Initialization(object):
pass
+class JobStatus(Enum):
+ ABORTED = "aborted"
+ FAILED = "failed"
+ FINISHED = "finished"
+ STARTED = "started"
+ UNKNOWN = "unknown"
+
+ def __init__(self, image):
+ self._image = image
+
+ def __str__(self):
+ return self._image
+
+
+class Job(object):
+ description = "Fake job"
+ status = JobStatus.FINISHED
+
+
class StorageDomain(object):
def __init__(self, name=None):
pass
--
2.35.1
[PATCH nbdkit INCOMPLETE] readahead: Rewrite this filter so it prefetches using .cache
by Richard W.M. Jones
This is an attempt to rewrite nbdkit-readahead-filter with a different
approach. Instead of the filter reading ahead and returning that
data, the filter now simply issues .cache() (i.e. prefetch) calls.
This relies on the underlying plugin doing the right thing; failing
that, you have to inject nbdkit-cache-filter below this filter, which
will do the caching on behalf of the plugin.
The patch is marked incomplete because I didn't think about the window
size stuff yet.
So some problems remain:
- We rely on being able to make parallel requests into the underlying
plugin, which means that the plugin (and filters) must be using the
PARALLEL thread model.
- I didn't test it much beyond the test suite. Does it make
realistic workloads any faster?
- In general it's probably better for the client (e.g. nbdcopy) to be
issuing prefetches, since it knows the access pattern.
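In pseudocode terms the new strategy is just the following (Python sketch with invented names, not the filter's actual C code; "cache" stands in for the prefetch call that lands in the plugin or in nbdkit-cache-filter):

```python
# Sketch of the rewritten strategy: serve the read as before, but when
# the access pattern looks sequential, fire off a prefetch (.cache) for
# the window that follows, instead of reading and holding the data.
class Readahead:
    def __init__(self, cache, window=8 * 1024 * 1024):
        self.cache = cache       # prefetch callback: cache(offset, count)
        self.window = window     # window-size policy still undecided
        self.next_expected = None

    def pread(self, read, offset, count):
        data = read(offset, count)
        if offset == self.next_expected:
            # Sequential access: prefetch the next window.
            self.cache(offset + count, self.window)
        self.next_expected = offset + count
        return data
```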
Rich.
[v2v PATCH] output_qemu: rewrite output disk mapping
by Laszlo Ersek
The current code handles some nonexistent cases (such as SCSI floppies,
virtio-block CD-ROMs), and does not create proper drives (that is,
back-ends with no media inserted) for removable devices (floppies,
CD-ROMs).
Rewrite the whole logic:
- handle only those scenarios that QEMU can support,
- separate the back-end (-drive) and front-end (-device) options,
- wherever / whenever a host-bus adapter front-end is needed
(virtio-scsi-pci, isa-fdc), create it,
- assign front-ends to buses (= parent front-ends) and back-ends
explicitly.
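For virtio-blk, for example, the back-end/front-end split amounts to emitting option pairs like these (sketch of what the rewrite produces; the "drive-vblk-%d" id naming follows the patch, the helper itself is invented for illustration):

```python
# Sketch of the back-end/front-end split: a "-drive if=none" back-end
# plus a separate "-device" front-end, tied together by the back-end id,
# replacing the old single "-drive if=virtio,index=N" shorthand.
def virtio_blk_args(frontend_ctr, outdisk, output_format):
    backend = "drive-vblk-%d" % frontend_ctr
    return [
        "-drive", "file=%s,format=%s,if=none,id=%s,media=disk"
                  % (outdisk, output_format, backend),
        "-device", "virtio-blk-pci,drive=%s" % backend,
    ]
```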
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2074805
Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
---
Notes:
v1:
- do not pick up Rich's R-b yet, because of the changes below
- replace "parent_ctrl_needed" with Array.exists [Rich]
- reference 'virt-v2v(1) section "BUGS"' in the "this should not happen"
warnings [Rich]
- add "bus=scsi0.0" property to "scsi-hd" and "scsi-cd" devices
- created test scenarios
<https://bugzilla.redhat.com/show_bug.cgi?id=2074805#c17> and tested
them
output/output_qemu.ml | 197 ++++++++++++++++----
1 file changed, 160 insertions(+), 37 deletions(-)
diff --git a/output/output_qemu.ml b/output/output_qemu.ml
index da7278b88b67..da8bd475e56e 100644
--- a/output/output_qemu.ml
+++ b/output/output_qemu.ml
@@ -127,7 +127,7 @@ module QEMU = struct
machine, false in
let smm = secure_boot_required in
- let machine =
+ let machine_str =
match machine with
| I440FX -> "pc"
| Q35 -> "q35"
@@ -153,7 +153,7 @@ module QEMU = struct
arg_list "-device" ["vmgenid"; sprintf "guid=%s" genid; "id=vmgenid0"]
);
- arg_list "-machine" (machine ::
+ arg_list "-machine" (machine_str ::
(if smm then ["smm=on"] else []) @
["accel=kvm:tcg"]);
@@ -184,52 +184,175 @@ module QEMU = struct
);
);
- let make_disk if_name i = function
- | BusSlotEmpty -> ()
+ (* For IDE disks, IDE CD-ROMs, SCSI disks, SCSI CD-ROMs, and floppies, we
+ * need host-bus adapters (HBAs) between these devices and the PCI(e) root
+ * bus. Some machine types create these HBAs automatically (despite
+ * "-no-user-config -nodefaults"), some don't...
+ *)
+ let disk_cdrom_filter =
+ function
+ | BusSlotDisk _
+ | BusSlotRemovable { s_removable_type = CDROM } -> true
+ | _ -> false
+ and floppy_filter =
+ function
+ | BusSlotRemovable { s_removable_type = Floppy } -> true
+ | _ -> false in
+ let ide_ctrl_needed =
+ Array.exists disk_cdrom_filter target_buses.target_ide_bus
+ and scsi_ctrl_needed =
+ Array.exists disk_cdrom_filter target_buses.target_scsi_bus
+ and floppy_ctrl_needed =
+ Array.exists floppy_filter target_buses.target_floppy_bus in
- | BusSlotDisk d ->
- (* Find the corresponding target disk. *)
- let outdisk = disk_path output_storage output_name d.s_disk_id in
+ if ide_ctrl_needed then (
+ match machine with
+ | I440FX -> ()
+ (* The PC machine has a built-in controller of type "piix3-ide"
+ * providing buses "ide.0" and "ide.1", with each bus fitting two
+ * devices.
+ *)
+ | Q35 -> ()
+ (* The Q35 machine has a built-in controller of type "ich9-ahci"
+ * providing buses "ide.0" through "ide.5", with each bus fitting one
+ * device.
+ *)
+ | Virt -> warning (f_"The Virt machine has no support for IDE. Please \
+ report a bug for virt-v2v -- refer to virt-v2v(1) \
+ section \"BUGS\".")
+ );
- arg_list "-drive" ["file=" ^ outdisk; "format=" ^ output_format;
- "if=" ^ if_name; "index=" ^ string_of_int i;
- "media=disk"]
+ if scsi_ctrl_needed then
+ (* We need to add the virtio-scsi HBA on all three machine types. The bus
+ * provided by this device will be called "scsi0.0".
+ *)
+ arg_list "-device" [ "virtio-scsi-pci"; "id=scsi0" ];
- | BusSlotRemovable { s_removable_type = CDROM } ->
- arg_list "-drive" ["format=raw"; "if=" ^ if_name;
- "index=" ^ string_of_int i; "media=cdrom"]
+ if floppy_ctrl_needed then (
+ match machine with
+ | I440FX -> ()
+ (* The PC machine has a built-in controller of type "isa-fdc"
+ * providing bus "floppy-bus.0", fitting two devices.
+ *)
+ | Q35 -> arg_list "-device" [ "isa-fdc"; "id=floppy-bus" ]
+ (* On the Q35 machine, we need to add the same HBA manually. Note that
+ * the bus name will have ".0" appended automatically.
+ *)
+ | Virt -> warning (f_"The Virt machine has no support for floppies. \
+ Please report a bug for virt-v2v -- refer to \
+ virt-v2v(1) section \"BUGS\".")
+ );
- | BusSlotRemovable { s_removable_type = Floppy } ->
- arg_list "-drive" ["format=raw"; "if=" ^ if_name;
- "index=" ^ string_of_int i; "media=floppy"]
- in
- Array.iteri (make_disk "virtio") target_buses.target_virtio_blk_bus;
- Array.iteri (make_disk "ide") target_buses.target_ide_bus;
+ let add_disk_backend disk_id backend_name =
+ (* Add a drive (back-end) for a "virtio-blk-pci", "ide-hd", or "scsi-hd"
+ * device (front-end). The drive has a backing file, identified by
+ * "disk_id".
+ *)
+ let outdisk = disk_path output_storage output_name disk_id in
+ arg_list "-drive" [ "file=" ^ outdisk; "format=" ^ output_format;
+ "if=none"; "id=" ^ backend_name; "media=disk" ]
- let make_scsi i = function
- | BusSlotEmpty -> ()
+ and add_cdrom_backend backend_name =
+ (* Add a drive (back-end) for an "ide-cd" or "scsi-cd" device (front-end).
+ * The drive is empty -- there is no backing file.
+ *)
+ arg_list "-drive" [ "if=none"; "id=" ^ backend_name; "media=cdrom" ]
- | BusSlotDisk d ->
- (* Find the corresponding target disk. *)
- let outdisk = disk_path output_storage output_name d.s_disk_id in
+ and add_floppy_backend backend_name =
+ (* Add a drive (back-end) for a "floppy" device (front-end). The drive is
+ * empty -- there is no backing file. *)
+ arg_list "-drive" [ "if=none"; "id=" ^ backend_name; "media=disk" ] in
- arg_list "-drive" ["file=" ^ outdisk; "format=" ^ output_format;
- "if=scsi"; "bus=0"; "unit=" ^ string_of_int i;
- "media=disk"]
+ let add_virtio_blk disk_id frontend_ctr =
+ (* Create a "virtio-blk-pci" device (front-end), together with its drive
+ * (back-end). The disk identifier is mandatory.
+ *)
+ let backend_name = sprintf "drive-vblk-%d" frontend_ctr in
+ add_disk_backend disk_id backend_name;
+ arg_list "-device" [ "virtio-blk-pci"; "drive=" ^ backend_name ]
- | BusSlotRemovable { s_removable_type = CDROM } ->
- arg_list "-drive" ["format=raw"; "if=scsi"; "bus=0";
- "unit=" ^ string_of_int i; "media=cdrom"]
+ and add_ide disk_id frontend_ctr =
+ (* Create an "ide-hd" or "ide-cd" device (front-end), together with its
+ * drive (back-end). If a disk identifier is passed in, then "ide-hd" is
+ * created (with a non-empty drive); otherwise, "ide-cd" is created (with
+ * an empty drive).
+ *)
+ let backend_name = sprintf "drive-ide-%d" frontend_ctr
+ and ide_bus, ide_unit =
+ match machine with
+ | I440FX -> frontend_ctr / 2, frontend_ctr mod 2
+ | Q35 -> frontend_ctr, 0
+ | Virt -> 0, 0 (* should never happen, see warning above *) in
+ let common_props = [ sprintf "bus=ide.%d" ide_bus;
+ sprintf "unit=%d" ide_unit;
+ "drive=" ^ backend_name ] in
+ (match disk_id with
+ | Some id ->
+ add_disk_backend id backend_name;
+ arg_list "-device" ([ "ide-hd" ] @ common_props)
+ | None ->
+ add_cdrom_backend backend_name;
+ arg_list "-device" ([ "ide-cd" ] @ common_props))
+
+ and add_scsi disk_id frontend_ctr =
+ (* Create a "scsi-hd" or "scsi-cd" device (front-end), together with its
+ * drive (back-end). If a disk identifier is passed in, then "scsi-hd" is
+ * created (with a non-empty drive); otherwise, "scsi-cd" is created (with
+ * an empty drive).
+ *)
+ let backend_name = sprintf "drive-scsi-%d" frontend_ctr in
+ let common_props = [ "bus=scsi0.0";
+ sprintf "lun=%d" frontend_ctr;
+ "drive=" ^ backend_name ] in
+ (match disk_id with
+ | Some id ->
+ add_disk_backend id backend_name;
+ arg_list "-device" ([ "scsi-hd" ] @ common_props)
+ | None ->
+ add_cdrom_backend backend_name;
+ arg_list "-device" ([ "scsi-cd" ] @ common_props))
- | BusSlotRemovable { s_removable_type = Floppy } ->
- arg_list "-drive" ["format=raw"; "if=scsi"; "bus=0";
- "unit=" ^ string_of_int i; "media=floppy"]
- in
- Array.iteri make_scsi target_buses.target_scsi_bus;
+ and add_floppy frontend_ctr =
+ (* Create a "floppy" (front-end), together with its empty drive
+ * (back-end).
+ *)
+ let backend_name = sprintf "drive-floppy-%d" frontend_ctr in
+ add_floppy_backend backend_name;
+ arg_list "-device" [ "floppy"; "bus=floppy-bus.0";
+ sprintf "unit=%d" frontend_ctr;
+ "drive=" ^ backend_name ] in
- (* XXX Highly unlikely that anyone cares, but the current
- * code ignores target_buses.target_floppy_bus.
+ (* Add virtio-blk-pci devices for BusSlotDisk elements on
+ * "target_virtio_blk_bus".
*)
+ Array.iteri
+ (fun frontend_ctr disk ->
+ match disk with
+ | BusSlotDisk d -> add_virtio_blk d.s_disk_id frontend_ctr
+ | _ -> ())
+ target_buses.target_virtio_blk_bus;
+
+ let add_disk_or_cdrom bus_adder frontend_ctr slot =
+ (* Add a disk or CD-ROM front-end to the IDE or SCSI bus. *)
+ match slot with
+ | BusSlotDisk d ->
+ bus_adder (Some d.s_disk_id) frontend_ctr
+ | BusSlotRemovable { s_removable_type = CDROM } ->
+ bus_adder None frontend_ctr
+ | _ -> () in
+
+ (* Add disks and CD-ROMs to the IDE and SCSI buses. *)
+ Array.iteri (add_disk_or_cdrom add_ide) target_buses.target_ide_bus;
+ Array.iteri (add_disk_or_cdrom add_scsi) target_buses.target_scsi_bus;
+
+ (* Add floppies. *)
+ Array.iteri
+ (fun frontend_ctr disk ->
+ match disk with
+ | BusSlotRemovable { s_removable_type = Floppy } ->
+ add_floppy frontend_ctr
+ | _ -> ())
+ target_buses.target_floppy_bus;
let net_bus =
match guestcaps.gcaps_net_bus with
base-commit: b3a9dd6442ea2aff31bd93e85aa130f432e24932
--
2.19.1.3.g30247aa5d201