I've tested this and could reproduce the issue:
1. Convert a guest with 2 disks to RHEV.
# virt-v2v -ic xen+ssh://10.66.106.64 -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export \
    rhel6.6-i386-hvm -on 2disk-test -of qcow2
[ 0.0] Opening the source -i libvirt -ic xen+ssh://10.66.106.64 rhel6.6-i386-hvm
[ 16.0] Creating an overlay to protect the source from being modified
[ 47.0] Opening the overlay
[ 83.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
virt-v2v: warning: cannot write files to the NFS server as 36:36, even
though we appear to be running as root. This probably means the NFS client
or idmapd is not configured properly.
You will have to chown the files that virt-v2v creates after the run,
otherwise RHEV-M will not be able to import the VM.
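The post-run fix the warning asks for can be sketched as follows. This is a hedged illustration, not part of virt-v2v itself: 36:36 is the vdsm:kvm UID/GID pair that RHEV-M expects, and the export path argument is a placeholder for your own export domain mount.

```python
import os

def chown_tree(path, uid=36, gid=36):
    """Recursively chown everything under path to uid:gid (36:36 = vdsm:kvm).

    Illustrative sketch of the manual fix-up the warning describes; in real
    usage you would run this as root against the NFS export domain, e.g.
    chown_tree("/vol/v2v_auto/auto_export").
    """
    for dirpath, dirnames, filenames in os.walk(path):
        os.chown(dirpath, uid, gid)
        for name in filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)
```

The equivalent one-liner at the shell would simply be a recursive chown to 36:36 over the export directory.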
[ 83.0] Inspecting the overlay
[ 90.0] Checking for sufficient free disk space in the guest
[ 90.0] Estimating space required on target for each disk
[ 90.0] Converting Red Hat Enterprise Linux Server release 6.6 Beta (Santiago) to run on
KVM
This guest has virtio drivers installed.
[ 147.0] Mapping filesystem data to avoid copying unused and blank areas
[ 148.0] Closing the overlay
[ 148.0] Copying disk 1/2 to
/tmp/v2v.mtvnoJ/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/b59d8629-38f4-4443-baf0-07462e75bb91/0be98db1-86cc-4da2-9ecb-aae5f700d569
(qcow2)
(100.00/100%)
[ 223.0] Copying disk 2/2 to
/tmp/v2v.mtvnoJ/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/b59d8629-38f4-4443-baf0-07462e75bb91/67f9c9d5-f469-41ad-9390-b281b51515f4
(qcow2)
(100.00/100%)
[ 358.0] Creating output metadata
[ 358.0] Finishing off
2. Check the OVF file. The disk section is shown below; the second disk shows
ovf:boot='False'.
<Disk ovf:actual_size='1'
ovf:diskId='0be98db1-86cc-4da2-9ecb-aae5f700d569' ovf:size='8'
ovf:fileRef='b59d8629-38f4-4443-baf0-07462e75bb91/0be98db1-86cc-4da2-9ecb-aae5f700d569'
ovf:parentRef='' ovf:vm_snapshot_id='ad886fef-1ca0-4c69-a6d9-52f9a053bd08'
ovf:volume-format='COW' ovf:volume-type='Sparse'
ovf:format='http://en.wikipedia.org/wiki/Byte' ovf:disk-interface='VirtIO'
ovf:disk-type='System' ovf:boot='True'/>
<Disk ovf:actual_size='1'
ovf:diskId='67f9c9d5-f469-41ad-9390-b281b51515f4' ovf:size='4'
ovf:fileRef='b59d8629-38f4-4443-baf0-07462e75bb91/67f9c9d5-f469-41ad-9390-b281b51515f4'
ovf:parentRef='' ovf:vm_snapshot_id='d44852fb-c1df-4b25-9d76-0739616bfbf9'
ovf:volume-format='COW' ovf:volume-type='Sparse'
ovf:format='http://en.wikipedia.org/wiki/Byte' ovf:disk-interface='VirtIO'
ovf:disk-type='System' ovf:boot='False'/>
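The boot flags above can be checked programmatically. Here is a minimal sketch using Python's xml.etree; the inline sample fragment mirrors the two <Disk> entries above, and the ovf: namespace URI is an assumption (the real OVF declares it in its envelope), not taken from the attached file.

```python
import xml.etree.ElementTree as ET

# Assumed namespace URI for the ovf: prefix (DMTF OVF envelope).
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1/"

# Hypothetical fragment mirroring the two Disk entries from the report.
sample = (
    f'<Section xmlns:ovf="{OVF_NS}">'
    '<Disk ovf:diskId="0be98db1-86cc-4da2-9ecb-aae5f700d569" ovf:boot="True"/>'
    '<Disk ovf:diskId="67f9c9d5-f469-41ad-9390-b281b51515f4" ovf:boot="False"/>'
    '</Section>'
)

root = ET.fromstring(sample)
for disk in root.iter("Disk"):
    # Namespaced attributes are keyed as {URI}name in ElementTree.
    print(disk.get(f"{{{OVF_NS}}}diskId"), disk.get(f"{{{OVF_NS}}}boot"))
```

Only the first disk carries ovf:boot='True'; whether RHEV-M should treat a non-boot second disk this way is exactly the question of this report.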
3. Attached the .meta files and the OVF file for details.
Best regards,
Tingting Zheng(郑婷婷)
----- Original Message -----
From: "Richard W.M. Jones" <rjones(a)redhat.com>
To: "VONDRA Alain" <AVONDRA(a)unicef.fr>
Cc: libguestfs(a)redhat.com
Sent: Wednesday, October 8, 2014 5:32:45 AM
Subject: Re: [Libguestfs] Virt-v2v conversion issue
On Wed, Oct 08, 2014 at 08:11:16AM +0000, VONDRA Alain wrote:
Hi,
I've hit a surprising issue when converting a raw file to the oVirt
environment using virt-v2v.
Everything seems to work fine: my VM is composed of 9 disks, the process
finishes without any trouble, and the files are all present in the right
import volume, but only the first (system) disk appears in my oVirt
Import VM tab.
I've retried twice and had the same issue.
Here are the results of the conversion:
[ 24,0] Copying disk 1/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/8bc601d6-3f1c-4d76-9e15-c3d1d1c27618
(raw)
(100.00/100%)
[2731,0] Copying disk 2/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/92d6e226-ad25-4205-8ccb-ff0f83c9b1c5
(raw)
(100.00/100%)
[16950,0] Copying disk 3/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/931e6d94-247a-4315-aeb5-e948b8abfad5
(raw)
(100.00/100%)
[17670,0] Copying disk 4/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/2a53a0c6-a818-4e91-bd80-d430cb48e114
(raw)
(100.00/100%)
[18619,0] Copying disk 5/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/593a2a5b-feb2-4b61-8a43-9bbb68302286
(raw)
(100.00/100%)
[19053,0] Copying disk 6/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/14dc0673-d4e0-43fc-84ef-3d4f3318f79e
(raw)
(100.00/100%)
[19418,0] Copying disk 7/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/48f0c478-2f89-4f47-95aa-1cbb8ea25a07
(raw)
(100.00/100%)
[20520,0] Copying disk 8/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/0c4abdff-af25-49df-87a4-917298c85688
(raw)
(100.00/100%)
[32429,0] Copying disk 9/9 to
/tmp/v2v.a4uo5V/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/1a1a9826-f233-4290-8b43-a082b902fc9d
(raw)
(100.00/100%)
[34357,0] Creating output metadata
[34357,0] Finishing off
9 hours..?
And here is the import folder
/data/big_export/IMPORTS/0a6404e0-2857-45e4-9322-c1ada5ae13fb/images/a22862e8-f1ef-4530-ba55-6fda7850428c/:
total 556453768
-rw-rw-rw- 1 vdsm kvm 214762060800 7 oct. 21:29 0c4abdff-af25-49df-87a4-917298c85688
-rw-r--r-- 1 vdsm kvm          297 7 oct. 12:29 0c4abdff-af25-49df-87a4-917298c85688.meta
-rw-rw-rw- 1 vdsm kvm  21484431360 7 oct. 17:52 14dc0673-d4e0-43fc-84ef-3d4f3318f79e
-rw-r--r-- 1 vdsm kvm          296 7 oct. 12:29 14dc0673-d4e0-43fc-84ef-3d4f3318f79e.meta
-rw-rw-rw- 1 vdsm kvm  32227676160 7 oct. 22:01 1a1a9826-f233-4290-8b43-a082b902fc9d
-rw-r--r-- 1 vdsm kvm          296 7 oct. 12:29 1a1a9826-f233-4290-8b43-a082b902fc9d.meta
-rw-rw-rw- 1 vdsm kvm  32226647040 7 oct. 17:39 2a53a0c6-a818-4e91-bd80-d430cb48e114
-rw-r--r-- 1 vdsm kvm          296 7 oct. 12:29 2a53a0c6-a818-4e91-bd80-d430cb48e114.meta
-rw-rw-rw- 1 vdsm kvm  42968862720 7 oct. 18:11 48f0c478-2f89-4f47-95aa-1cbb8ea25a07
-rw-r--r-- 1 vdsm kvm          296 7 oct. 12:29 48f0c478-2f89-4f47-95aa-1cbb8ea25a07.meta
-rw-rw-rw- 1 vdsm kvm  21484431360 7 oct. 17:46 593a2a5b-feb2-4b61-8a43-9bbb68302286
-rw-r--r-- 1 vdsm kvm          296 7 oct. 12:29 593a2a5b-feb2-4b61-8a43-9bbb68302286.meta
-rw-rw-rw- 1 vdsm kvm  85908783104 7 oct. 13:14 8bc601d6-3f1c-4d76-9e15-c3d1d1c27618
-rw-r--r-- 1 vdsm kvm          297 7 oct. 12:29 8bc601d6-3f1c-4d76-9e15-c3d1d1c27618.meta
-rw-rw-rw- 1 vdsm kvm 322134865920 7 oct. 17:11 92d6e226-ad25-4205-8ccb-ff0f83c9b1c5
-rw-r--r-- 1 vdsm kvm          297 7 oct. 12:29 92d6e226-ad25-4205-8ccb-ff0f83c9b1c5.meta
-rw-rw-rw- 1 vdsm kvm  32226647040 7 oct. 17:23 931e6d94-247a-4315-aeb5-e948b8abfad5
-rw-r--r-- 1 vdsm kvm          296 7 oct. 12:29 931e6d94-247a-4315-aeb5-e948b8abfad5.meta
Thanks for your help.
I really need to see what's in all of those *.meta files, *and* the
*.ovf file which is created in another directory (under
master/vms/<uuid>).
I don't think we have tried a multi-disk conversion to RHEV-M yet ...
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/
_______________________________________________
Libguestfs mailing list
Libguestfs(a)redhat.com
https://www.redhat.com/mailman/listinfo/libguestfs