-----Original Message-----
From: Richard W.M. Jones [mailto:rjones@redhat.com]
Sent: Friday, January 17, 2014 6:46 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs@redhat.com
Subject: Re: [Libguestfs] LVM mounting issue

 

On Fri, Jan 17, 2014 at 02:38:43PM +0000, Исаев Виталий Анатольевич wrote:

> 3.       Now I go to the RHEV-H to look for the disk image itself:
>
> [root@rhevh1 /]# find / -name cc6e4400-7c98-4170-9075-5f5790dfcff3
> /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cc6e4400-7c98-4170-9075-5f5790dfcff3
> /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
> /rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
>
> 4.       Note that all these files are symbolic links:
>
> [root@rhevh1 /]# find / -name cc6e4400-7c98-4170-9075-5f5790dfcff3 -exec readlink -f {} \;
> /dev/dm-40
> /dev/dm-40
> /dev/dm-40
>
> 5.       One more symbolic link is in /dev/mapper:
>
> [root@rhevh1 /]# ls -l /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-cc6e4400--7c98--4170--9075--5f5790dfcff3
> lrwxrwxrwx. 1 root root 8 2013-11-20 10:59 /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-cc6e4400--7c98--4170--9075--5f5790dfcff3 -> ../dm-40
>
> 6.       So I have no choice and I try to open /dev/dm-40 with libguestfs or guestfish. What's next, you already know.

 

You definitely do have a choice.  Don't open /dev/dm-40.  Open one of the other paths instead, e.g.:

 

virt-inspector2 -v -x -a /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3

 

It makes a big difference to qemu which path you use, because it searches for backing disks relative to the path of the original disk image.
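
For example, `qemu-img info` shows the backing file entry that an image records; here is a minimal Python sketch of that check (the long path is just the example image from above):

import subprocess

# The example image path from above.
image = ("/var/lib/stateless/writable/rhev/data-center/mnt/blockSD/"
         "1a9aa971-f81f-4ad8-932f-607034c924fc/images/"
         "8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/"
         "cc6e4400-7c98-4170-9075-5f5790dfcff3")

# `qemu-img info` prints a "backing file:" line for qcow2 overlays.
# A relative entry is resolved against the directory of `image`,
# which is why the path used to open the disk matters.
for line in subprocess.check_output(["qemu-img", "info", image]).splitlines():
    if line.startswith("backing file:"):
        print(line)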

 

Rich.

 

--

Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.

http://fedoraproject.org/wiki/MinGW

 

 

Hello Richard, I apologise for the late reply. It took me some time to work with the solution you proposed and to continue the discussion in bug 1053684.

What I tried to do was to launch the libguestfs tests with disk images taken not from /dev/mapper or directly from /dev/, but from /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/<…> instead. Unfortunately, the results were poor: I could not even access some of the images with libguestfs, whereas previously I was able to launch libguestfs properly against every image from /dev/dm-xx.
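
For reference, the core of each per-disk check looks roughly like this (a minimal sketch using the guestfs Python bindings; the real logic is in the attached test2.py):

import guestfs

def try_disk(path):
    # Open the image read-only so the running VM is not disturbed.
    g = guestfs.GuestFS()
    g.add_drive_opts(path, readonly=1)
    try:
        g.launch()
        roots = g.inspect_os()   # empty list => OS not detected
        print("%s: detected roots %r" % (path, roots))
    finally:
        g.close()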

 

I launched the script (test2.py, attached to this message) on both nodes of my cluster (node1 and node2 outputs are attached too) in order to test how libguestfs works with your approach.

Consider the following table (all VMs are running):

 

VM               RUNNING ON NODE   DISK ACCESSIBLE ON NODE(S)*
build_list       1                 1
build-ss         1                 1
build-ss001      1                 -
build-ss002      1                 -
fs               2                 1,2
ipa1             1                 -
koji-build-test  2                 -
koji_hub         1                 1
koji-hub-test    2                 -
postgres         1                 -
share            2                 1,2
test1            1                 1
ts2              1                 -
vc2              1                 1
vmbuild          1                 1
win7_32          2                 1,2
winxp            2                 1,2

* The disk was found in /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/ and could be handled correctly with the libguestfs tools.

 

So you can see that I did not manage to run libguestfs with 7 of the disk images. (It is also quite strange to me that some disks are accessible on both nodes while the other disks are mapped to only one node, but this is an oVirt-specific question.)

However, Comment 14 contains my workaround for this problem (recursively resolving qcow2 disks into raw disks using `qemu-img` output; a sketch follows below). This workaround allowed me to access all the disks with libguestfs.
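
Roughly, it walks the backing chain like this (a simplified sketch, not the exact code from Comment 14; it assumes each "backing file" entry is resolved relative to the current image's directory):

import os
import subprocess

def resolve_chain(path):
    """Follow the qcow2 backing chain from `path` down to the base image."""
    chain = [path]
    while True:
        info = subprocess.check_output(["qemu-img", "info", chain[-1]])
        backing = None
        for line in info.splitlines():
            if line.startswith("backing file:"):
                # Drop the optional " (actual path: ...)" suffix.
                backing = line[len("backing file:"):].split(" (actual path")[0].strip()
        if backing is None:
            return chain              # no backing file => base (raw) image
        if not os.path.isabs(backing):
            # Relative entries are resolved against the current image's dir.
            backing = os.path.join(os.path.dirname(chain[-1]), backing)
        chain.append(backing)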

 

Finally, I would like to draw your attention to the machine that differs from all the other VMs (see the node1 output file). It is the one from which we started this thread. It has RHEL 6.4 on board and is working fine, and yet it is the only VM whose operating system cannot be detected by libguestfs. Even when I launch libguestfs against the path you recommended in your last message, I receive the same result (the virt-inspector2 -v -x output is attached to this message too):

-------------------------------------------------------  2  ---------------------------------------------------------------------

VM: 'build-ss' - disk_image_id: cc6e4400-7c98-4170-9075-5f5790dfcff3
     Trying to open /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
          guestfs succesfully launched
                Physical volumes: ['/dev/vda2', 'unknown device']
                Logical volumes: ['/dev/vg_kojit/lv_root', '/dev/vg_kojit/lv_swap']
                Partitions: ['/dev/vda1', '/dev/vda2']
                Operating systems: []

 

Maybe you have some idea of how to repair this VM so that the full libguestfs functionality becomes available? I need to get the mountpoints and mount the filesystem; the fallback I see for now is sketched below.
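
For now the only fallback I see is to bypass inspection and mount the logical volume directly, roughly like this (a sketch using the guestfs Python bindings; /dev/vg_kojit/lv_root is taken from the listing above):

import guestfs

g = guestfs.GuestFS()
g.add_drive_opts("/var/lib/stateless/writable/rhev/data-center/mnt/blockSD/"
                 "1a9aa971-f81f-4ad8-932f-607034c924fc/images/"
                 "8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/"
                 "cc6e4400-7c98-4170-9075-5f5790dfcff3",
                 readonly=1)
g.launch()
# Mount the root LV reported above instead of relying on inspect_os().
g.mount_ro("/dev/vg_kojit/lv_root", "/")
print(g.ls("/"))   # sanity check: list the guest's root directory
g.close()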

 

Thank you in advance!

Sincerely,

Vitaly Isaev

Software engineer

Information security department

Fintech JSC, Moscow, Russia