On Mon, Sep 24, 2018 at 6:30 PM Richard W.M. Jones <rjones@redhat.com> wrote:
On Mon, Sep 24, 2018 at 10:00:21AM +0200, Fabien Dupont wrote:
> Hi,

Hi Fabien, sorry I didn't respond to this earlier as I was doing some
work.  If you CC me on emails then you can usually get a quicker
response.

> I've read the virt-v2v OpenStack output code to understand how it works and
> I've seen this:
>
> >   (* The server name or UUID of the conversion appliance where
> >    * virt-v2v is currently running.  In future we may be able
> >    * to make this optional and derive it from the OpenStack
> >    * metadata service instead.
> >    *)
> >   server_id : string;
>
> Indeed, it can be derived from the OpenStack metadata service. The following
> URL called from within the conversion appliance will return the metadata:
> http://169.254.169.254/openstack/latest/meta_data.json. As you can see, the
> IP address is 169.254.169.254, which is the metadata service. The JSON
> body contains a uuid entry that is the current appliance UUID, hence the
> server_id used by virt-v2v.

We certainly do want to do this, although there was some concern about
whether the metadata service is enabled on every OpenStack instance
out there.  (Also there are two different types of metadata service IIRC?)
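
For reference, deriving it could look something like the sketch below. This
is not actual virt-v2v code: it assumes the curl binary and the yojson
library are available, and the 5 second --max-time is an arbitrary guard
against the service being unreachable.

  (* Sketch: query the metadata service and extract the instance's own
   * UUID, which is what virt-v2v currently takes as server_id. *)
  let metadata_url = "http://169.254.169.254/openstack/latest/meta_data.json"

  let get_server_id () =
    let cmd = Printf.sprintf "curl --silent --max-time 5 %s" metadata_url in
    let chan = Unix.open_process_in cmd in
    let buf = Buffer.create 1024 in
    (try
       while true do Buffer.add_channel buf chan 1 done
     with End_of_file -> ());
    (match Unix.close_process_in chan with
     | Unix.WEXITED 0 -> ()
     | _ -> failwith "could not reach the metadata service");
    (* The JSON body contains a "uuid" entry with the appliance UUID. *)
    let json = Yojson.Safe.from_string (Buffer.contents buf) in
    Yojson.Safe.Util.(json |> member "uuid" |> to_string)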


This approach will not work in our current deployment, since the metadata service is not there.
The infrastructure is set up so that IP addressing and network configuration are handled on the
provider side, which means all the information the VMs receive comes from the lab network. I am
thinking about a way around this. I'll try out different OSP network configurations and see if
I can come up with something that keeps IP, MAC and routing consistent after migration while
still providing an isolated metadata service on the OSP side.

(Unfortunately the connection hung for minutes instead of timing out quickly, which is not great.)

Yeah... that is not the friendliest behavior, but presumably it is waiting for some predefined
timeout somewhere.
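
Making server_id optional with a fast-failing probe, as the comment in the
code suggests, might look roughly like this (again only a sketch, reusing
get_server_id from above; treating server_id as an option type is an
assumption):

  (* Sketch: prefer an explicitly supplied server_id; otherwise probe
   * the metadata service, failing fast instead of hanging for minutes
   * when the service is absent. *)
  let resolve_server_id = function
    | Some id -> id           (* explicitly given on the command line *)
    | None ->
      (try get_server_id ()   (* short --max-time keeps this quick *)
       with _ ->
         failwith "server_id not given and metadata service unavailable")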
 
Cheers,

Nenad