[Adding Tomas Golembiovsky]
On Wed, Sep 26, 2018 at 12:11 PM Richard W.M. Jones <rjones(a)redhat.com>
wrote:
> Rather than jumping to a solution, can you explain what the problem
> is that you're trying to solve?
> You need to do <X>, you tried virt-v2v, it doesn't do <X>, etc.
Well, these are mainly IMS-related challenges. We're working on OpenStack output support and migration throttling, and this implies changes to virt-v2v-wrapper. This is the opportunity to think about virt-v2v-wrapper's maintenance and feature set. It was created in the first place to simplify interaction with virt-v2v from ManageIQ.
The first challenge we faced is the interaction with virt-v2v itself. It's highly versatile and offers a lot of options for input and output. The downside is that over time it becomes more and more difficult to know them all. And all the error messages are written for human beings, not machines, so providing feedback through a launcher, such as virt-v2v-wrapper, is difficult.
The second challenge was monitoring the virt-v2v process for liveness and progress. For liveness, virt-v2v-wrapper stores the PID and checks that the process is still present; once it is gone, it checks the return code for success (0) or failure (!= 0). Any other launcher could do the same. For progress, the only way to know what is happening is to run virt-v2v in debug mode (-v -x) and parse the (very extensive) output. Virt-v2v-wrapper does that for us in IMS, but it is merely a workaround. I'd expect a conversion tool to provide a comprehensive progress report, such as "I'm converting VM 'my_vm', more specifically disk X/Y (XX%). Total conversion progress is XX%". Of course, I'd also expect a machine-readable output (JSON, CSV, YAML…). Debug mode ensures we have all the data in case of failure, so I'm not saying it should be removed, simply that specialized outputs should be added.
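To make the liveness part concrete, here is a minimal Python sketch of what a launcher has to do today, plus the kind of machine-readable progress record I'm describing; the function name and the JSON field names are mine, invented for illustration, not the wrapper's actual API or a real virt-v2v format:

import errno
import json
import os


def is_alive(pid):
    """Return True while the given virt-v2v process still exists."""
    try:
        os.kill(pid, 0)  # signal 0 only tests for process existence
        return True
    except OSError as err:
        if err.errno == errno.ESRCH:  # no such process
            return False
        raise


# The kind of machine-readable progress record virt-v2v could emit;
# field names are invented for illustration, not an actual format.
progress = {
    "vm": "my_vm",
    "disk": {"current": 2, "total": 3, "percent": 42},
    "total_percent": 58,
}
print(json.dumps(progress))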
The third challenge was cleaning up after a virt-v2v failure. For example, when it fails to convert a disk to RHV, it doesn't clean up the finished and unfinished disks. Virt-v2v-wrapper was initially written by the RHV team (Tomas) for RHV migrations, so that sounded fair(ish). But, extending the outputs to OpenStack, we'll have to deal with leftovers in OpenStack too. Maybe a cleanup-on-failure option would be a good idea, defaulting to false so as not to break existing behaviour. A sketch of the pattern a launcher has to implement by hand today follows.
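This is roughly the shape of it, assuming virt-v2v runs as a direct child; the disk-tracking callback and the deletion helper are hypothetical placeholders, not real virt-v2v or oVirt/OpenStack SDK calls:

import subprocess


def delete_disk(disk):
    """Placeholder for output-specific removal (oVirt SDK, OpenStack API, ...)."""
    print("would delete", disk)


def convert(v2v_args, created_disks):
    """Run virt-v2v and remove leftover disks if it fails."""
    try:
        subprocess.run(["virt-v2v"] + v2v_args, check=True)
    except subprocess.CalledProcessError:
        # virt-v2v itself leaves finished and unfinished disks behind,
        # so the launcher has to track and delete them itself.
        for disk in created_disks():
            delete_disk(disk)
        raise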
The fourth challenge is limiting the resources allocated to virt-v2v during conversion, because concurrent conversions may have a huge impact on conversion host performance. In the case of an oVirt host, this can impact the virtual machines running on it. This is not covered by the wrapper yet, but the implementation will likely be based on Linux cgroups and tc.
The wrapper also adds an interesting feature: both virt-v2v and virt-v2v-wrapper run daemonized, and we can asynchronously poll the progress. This is really key for IMS (and maybe for others): it allows us to start as many conversions in parallel as needed and monitor them. Currently, the Python code forks and detaches itself after providing the path to the state file. In the discussion about cgroups, it was mentioned that systemd units could be used, and that ties in nicely with the daemonization: systemd-run allows running processes under systemd, in their own slice, on which cgroup limits can be set.
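For example, a launcher could start each conversion as a transient unit, along these lines; the unit and slice names and the limit values are made up, but CPUQuota and MemoryMax are real systemd resource-control properties:

import subprocess

# Run virt-v2v as a transient systemd unit in a dedicated slice,
# with cgroup limits applied to it. journald captures the output.
subprocess.run([
    "systemd-run",
    "--unit=v2v-my_vm",      # transient unit name (illustrative)
    "--slice=v2v.slice",     # all conversions share this slice
    "-p", "CPUQuota=50%",    # cgroup CPU limit
    "-p", "MemoryMax=2G",    # cgroup memory limit
    "virt-v2v", "-v", "-x",  # ...plus the real input/output options
], check=True)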
As for the evolution of virt-v2v-wrapper that I'm going to describe, let me state that this is my personal view, and I speak only for myself.
I would like to see the machine-to-machine interaction, logging and cleanup in virt-v2v itself, because it is valuable to everyone, not only IMS.
I would also like to convert virt-v2v-wrapper into a conversion API and scheduler service. The idea is that it would provide an as-a-Service endpoint for conversions, allowing creation of conversion jobs (POST), fetching of their status (GET), cancellation of a conversion (DELETE) and changing of the limits (PATCH). In the background, a basic scheduler would simply ensure that all the jobs are running. Each virt-v2v process would run as a systemd unit (journald could capture the debug output), so that it is independent from the API and scheduler processes.
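A minimal sketch of what such an endpoint could look like, assuming Flask and entirely invented routes, job fields and in-memory state:

from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # job id -> job state; a real service would persist this


@app.route("/conversions", methods=["POST"])
def create_job():
    """Create a conversion job; the scheduler picks it up later."""
    job_id = str(len(jobs) + 1)
    jobs[job_id] = {"spec": request.get_json(), "state": "queued"}
    return jsonify(id=job_id), 201


@app.route("/conversions/<job_id>", methods=["GET"])
def get_status(job_id):
    """Fetch the current state of a conversion job."""
    return jsonify(jobs[job_id])


@app.route("/conversions/<job_id>", methods=["DELETE"])
def cancel(job_id):
    """Cancel a conversion (would also stop the systemd unit)."""
    jobs[job_id]["state"] = "cancelled"
    return "", 204


@app.route("/conversions/<job_id>", methods=["PATCH"])
def update_limits(job_id):
    """Adjust resource limits (e.g. CPUQuota) for a running job."""
    jobs[job_id].setdefault("limits", {}).update(request.get_json())
    return jsonify(jobs[job_id])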
I know that I can propose patches for changes to virt-v2v, or at least file RFEs in Bugzilla (my developer skills and programming-language breadth are limited). For the evolved wrapper, my main concern is its housing and maintenance. It doesn't work only for oVirt, so having its lifecycle tied to oVirt's doesn't seem relevant in the long term. In fact, it can work with any virt-v2v output, so my personal opinion is that it should live in the virt-v2v ecosystem and follow its lifecycle. As for its maintenance, we still have to figure out who will be responsible for it, i.e. who will be able to dedicate time to it.
--
Fabien Dupont
PRINCIPAL SOFTWARE ENGINEER
Red Hat - Solutions Engineering
fabien(a)redhat.com M: +33 (0) 662 784 971