On Thu, Feb 23, 2023 at 11:43:38AM +0100, Laszlo Ersek wrote:
> On 2/22/23 19:20, Andrey Drobyshev wrote:
> > Since commits b28cd1dc ("Remove requested_guestcaps / rcaps"), f0afc439
> > ("Remove guestcaps_block_type Virtio_SCSI") support for installing
> > virtio-scsi driver is missing in virt-v2v. AFAIU plans and demands for
> > bringing this feature back have been out there for a while. E.g. I've
> > found a corresponding issue which is still open [1].
> >
> > The code in b28cd1dc, f0afc439 was removed due to removing the old in-place
> > support. However, having the new in-place support present and bringing
> > this same code (partially) back with several additions and improvements,
> > I'm able to successfully convert and boot a Win guest with a virtio-scsi
> > disk controller. So please consider the following implementation of
> > this feature.
> >
> > [1] https://github.com/libguestfs/virt-v2v/issues/12
> (Preamble: I'm 100% deferring to Rich on this, so take my comments for
> what they are worth.)
>
> In my opinion, the argument made is weak. This cover letter does not
> say "why" -- it does not explain why virtio-blk is insufficient for
> *Virtuozzo*.
>
> Second, reference [1] -- issue #12 -- doesn't sound too convincing. It
> says, "opinionated qemu-based VMs that exclusively use UEFI and only
> virtio devices". "Opinionated" is the key word there. They're entitled
> to an opinion, but they're not entitled to others conforming to their
> opinion. I happen to be opinionated as well, and I hold the opposite
> view.
I think that issue shouldn't have used the word 'opinionated', as it
gives the wrong impression that the choice is somewhat arbitrary and
the two are interchangeable. There are rational reasons, which they
likely evaluated, why virtio-scsi is the better choice for them.
The main tradeoffs for virtio-blk vs virtio-scsi are outlined
by QEMU maintainers here:
https://www.qemu.org/2021/01/19/virtio-blk-scsi-configuration/
TL;DR: virtio-blk is preferred for maximum speed, while virtio-scsi
is preferred if you want to be able to add lots of disks without
worrying about PCI slot availability.
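
To make the tradeoff concrete, here is a rough sketch of how the two
attachment models differ on a QEMU command line (the file names and
device IDs below are made up purely for illustration):

  # virtio-blk: every disk is its own PCI device, consuming a slot
  -drive file=disk0.qcow2,format=qcow2,if=none,id=drv0
  -device virtio-blk-pci,drive=drv0

  # virtio-scsi: one PCI controller, disks attach to it as SCSI LUNs
  -device virtio-scsi-pci,id=scsi0
  -drive file=disk0.qcow2,format=qcow2,if=none,id=drv0
  -device scsi-hd,drive=drv0,bus=scsi0.0

With virtio-blk the PCI device *is* the disk; with virtio-scsi the
disk sits behind a controller, which is where the slot-scaling
difference comes from.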
I can totally understand why public clouds would choose to support
only virtio-scsi. The speed benefits of virtio-blk are likely not
relevant / visible to their customers, because when you're
overcommitting hosts to serve many VMs, the bottleneck is almost
certainly somewhere other than the guest disk device choice.
The ease of adding many disks is very interesting to public clouds
though, especially if the VM already has many NICs taking up PCI
slots. Adding PCI bridges brings more complexity than using SCSI.
OpenStack maintainers have also considered preferring virtio-scsi
over virtio-blk in the past for precisely this reason, and might not
have used virtio-blk at all if it were not for the backcompat
concerns.
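
To illustrate the slot-scaling point with the same kind of made-up
sketch: once a single virtio-scsi controller is present, further
disks attach to it as SCSI targets rather than as new PCI devices,
so no extra slots or bridges are needed:

  -device virtio-scsi-pci,id=scsi0
  -drive file=data1.qcow2,format=qcow2,if=none,id=d1
  -device scsi-hd,drive=d1,bus=scsi0.0
  -drive file=data2.qcow2,format=qcow2,if=none,id=d2
  -device scsi-hd,drive=d2,bus=scsi0.0
  # ...more scsi-hd devices, still only one PCI slot in use

Doing the same with virtio-blk means one virtio-blk-pci device, and
hence one slot (or a bridge), per disk.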
So I'd say it is a reasonable desire to want to (optionally) emit
VMs that are set up to use virtio-scsi instead of virtio-blk.
With regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|