-----Original Message-----
From: Richard W.M. Jones [mailto:rjones@redhat.com]
Sent: Tuesday, January 14, 2014 4:42 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs(a)redhat.com
Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the
RHEV cluster
On Tue, Jan 14, 2014 at 08:07:43AM +0000, Исаев Виталий Анатольевич wrote:
[00072ms] /usr/libexec/qemu-kvm \
-global virtio-blk-pci.scsi=off \
-nodefconfig \
-nodefaults \
-nographic \
-drive file=/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f,snapshot=on,if=virtio \
-nodefconfig \
-machine accel=kvm:tcg \
-m 500 \
-no-reboot \
-device virtio-serial \
-serial stdio \
-device sga \
-chardev socket,path=/tmp/libguestfs2yVhoH/guestfsd.sock,id=channel0 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-kernel /var/tmp/.guestfs-0/kernel.3269 \
-initrd /var/tmp/.guestfs-0/initrd.3269 \
-append 'panic=1 console=ttyS0 udevtimeout=300 no_timer_check acpi=off printk.time=1 cgroup_disable=memory selinux=0 guestfs_verbose=1 TERM=xterm ' \
-drive file=/var/tmp/.guestfs-0/root.3269,snapshot=on,if=virtio,cache=unsafe
qemu-kvm: -drive file=/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f,snapshot=on,if=virtio: could not open disk image /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f: No such file or directory
libguestfs: child_cleanup: 0x23dc5d0: child process died
libguestfs: trace: launch = -1 (error)
libguestfs runs qemu with the command line above. qemu tries to open the
/dev/mapper/1a9... file. qemu reports that it cannot open that file.
Unfortunately qemu's error messages are very poor. However there are a few
possibilities:
(a) An actual permissions issue. Since you seem to be running this as root, this
doesn't seem to be likely, but you should check that anyway. Are there SELinux
AVCs?
(b) qemu cannot open the backing file. Try running:
qemu-img info /dev/mapper/1a9...
and if it has a backing file, check that the backing file(s) [recursively] can be opened
too.
(c) Also check that the backing file paths are not relative. If they are you will need to
run your script from the correct directory so that the relative paths are accessible.
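Checks (b) and (c) can be scripted. A minimal sketch in Python: one helper pulls the "backing file:" field out of plain `qemu-img info` text output, and another resolves a relative backing path the way qemu does, against the directory containing the image rather than the current working directory. The helper names are illustrative, not part of any library:

```python
import os

def backing_file(qemu_img_info_output):
    """Extract the 'backing file:' field from plain `qemu-img info`
    text output; returns None for images with no backing file."""
    for line in qemu_img_info_output.splitlines():
        if line.startswith("backing file:"):
            value = line.split(":", 1)[1].strip()
            return value or None
    return None

def resolve_backing(image_path, backing_path):
    """A relative backing-file path is resolved against the directory
    that holds the image itself, not against the CWD."""
    if os.path.isabs(backing_path):
        return backing_path
    return os.path.normpath(
        os.path.join(os.path.dirname(image_path), backing_path))
```

For example, a backing path of "../a/b" recorded in an image under /dev/mapper resolves to /dev/a/b, which is where the file must actually exist for qemu to open the chain.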
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones virt-p2v
converts physical machines to virtual machines. Boot with a live CD or over the network
(PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
Dear Rich, thank you for the prompt reply to my question. Similar problems have been found with all of the remaining Thin Provisioned disks in the cluster, while all the Preallocated disks were handled by libguestfs correctly. I guess these issues are caused by reason (b) and probably (c):
The backing file of any of the thin provisioned disks does not exist. For instance, let's consider the
/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
symbolic link, which points to /dev/dm-30:
[root@rhevh1 mapper]# pwd
/dev/mapper
[root@rhevh1 mapper]# qemu-img info 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
image: 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 0
cluster_size: 65536
backing file: ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
[root@rhevh1 mapper]# ll ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
ls: cannot access ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5: No such file or directory
Note that /dev/dm-30 is not accessible with libguestfs.
Next I tried to find files with the same name. As a result I got three symbolic links pointing to
/dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5:
[root@rhevh1 mapper]# find / -name cbe36298-6397-4ffa-ba8c-5f64e90023e5
/dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5
/var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
In turn, the
/dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5 file is itself a
symbolic link, which points to /dev/dm-19.
Finally, I tried to launch libguestfs on the block device directly:
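Chains of symlinks like these (mapper name -> UUID path -> /dev/dm-N) can be followed programmatically; a small sketch using the standard library, with the function name being my own:

```python
import os

def underlying_device(path):
    """Follow a chain of symbolic links (e.g. a /dev/mapper entry)
    down to the real node, such as /dev/dm-19."""
    return os.path.realpath(path)
```

Applied to the /dev/1a9aa971-.../cbe36298-... link above, this should return /dev/dm-19 directly.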
[root@rhevh1 mapper]# qemu-img info /dev/dm-19
image: /dev/dm-19
file format: raw
virtual size: 40G (42949672960 bytes)
disk size: 0
[root@rhevh1 mapper]# python
Python 2.6.6 (r266:84292, Oct 12 2012, 14:23:48)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import guestfs
>>> g = guestfs.GuestFS()
>>> g.add_drive_opts("/dev/dm-19", readonly=1)
>>> g.launch()
>>> g.lvs()
[]
>>> g.pvs()
[]
>>> g.list_partitions()
['/dev/vda1', '/dev/vda2']
>>> g.inspect_os()
['/dev/vda1']
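One quick way to tell which /dev/dm-N node is the qcow2 overlay and which is the raw base, without running `qemu-img info` on each, is to peek at the header magic: every qcow2 image starts with the four bytes "QFI\xfb". A rough sketch (treating everything else as raw is a simplification; `qemu-img info` remains the authoritative check):

```python
QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 image

def looks_like_qcow2(path):
    """True if the file or block device starts with the qcow2 magic;
    anything else is treated as raw here."""
    with open(path, "rb") as f:
        return f.read(4) == QCOW2_MAGIC
```

Here one would expect looks_like_qcow2("/dev/dm-30") to be True and looks_like_qcow2("/dev/dm-19") to be False, matching the `qemu-img info` output above.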
Now I'm a little confused by the results of my research. I found that a VM with only one disk attached in fact has at least two block devices mapped into the hypervisor's file system: /dev/dm-19 (raw) and /dev/dm-30 (qcow2). The RHEV-M API (aka the Python oVirt SDK) provides no information about the first one, while the second cannot be accessed with libguestfs. I have an urgent need to work with chosen VM disk images through the libguestfs layer, but I don't know exactly which images belong to each VM. It seems I'm going the hard way :)

Sincerely,
Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia