Hello everybody,
I have been trying to debug a problem for a month now and could use
some insights and advice.
This is the setup: two Linux HA storage nodes provide iSCSI disks,
which are mounted on two Linux KVM hosts and one backup server. The
iSCSI disks have LVM on them, and the logical volume groups are
visible on all servers.
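For reference, the volume group can be checked on each host with
something like this (lvm1-vol being the volume group used below):
# vgs lvm1-vol
# lvs -o lv_name,lv_size,lv_attr lvm1-vol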
On the backup server I have the following running:
# guestmount --version
guestmount 1.40.2
# guestmount --ro -a /dev/lvm1-vol/sr8-disk1a -a /dev/lvm1-vol/sr8-disk2 \
    -a /dev/lvm1-vol/sr8-disk3 -a /dev/lvm1-vol/sr8-disk4 \
    -a /dev/lvm1-vol/sr8-disk5 -a /dev/lvm1-vol/sr8-disk6 \
    -m /dev/sdb2 /mnt/sr8-sdb2
# rsync --archive --delete --partial --progress --recursive \
    --no-links --no-devices --quiet /mnt/sr8-sdb2/ \
    /srv/storage/backups/libvirt-filesystems/sr8-sdb2
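To detect corruption in the copy earlier, I have been considering a
verification pass afterwards; a sketch, reusing the same paths:
# rsync --archive --checksum --dry-run --itemize-changes \
    /mnt/sr8-sdb2/ /srv/storage/backups/libvirt-filesystems/sr8-sdb2
Any line it prints would be a file that differs between the mount and
the backup.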
This worked fine for many years (we helped with the development of
NTFS deduplication support).
Now one of our Windows VMs has six logical volumes and has grown to
more than 5 TB of space.
The backups started to get corrupted. My initial thought was that the
LVM snapshot sizes were too small and the snapshots were getting
invalidated.
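As far as I understand it, a snapshot's fill level shows up in the
Data% column of lvs, so it can be watched while the backup runs (a
sketch, assuming the snapshots live in the lvm1-vol group):
# watch -n 60 'lvs -o lv_name,lv_attr,data_percent lvm1-vol'
A snapshot that reaches 100% is invalidated by LVM and reads from it
start failing.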
I started running guestmount --ro -a directly on the logical volumes,
and the same behaviour occurs.
What I can see is this:
I run ls -hal /mnt/sr8-sdb2/ right after the guestmount; everything
looks good and I can access the files.
Then, after a while, the entire content of /mnt/sr8-sdb2/ changes and
shows the content of a Linux drive! The rsync command then fails and
stops. If I unmount /mnt/sr8-sdb2/ and remount it, the data is there
again.
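To catch the exact moment the content flips, I am thinking of a small
watch loop; a sketch, where known-file.txt stands in for any file I
know exists on that NTFS volume:
# while sleep 60; do date; ls /mnt/sr8-sdb2/ | head -n 3; \
    md5sum /mnt/sr8-sdb2/known-file.txt; done
The timestamp at which the checksum changes or md5sum starts failing
should narrow down when the switch happens.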
Can anyone help me debug this?
Are there other options I should use? --live?
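On the next run I also plan to capture verbose and trace output; as I
understand the guestmount(1) man page, -v, -x and --no-fork keep the
debug output on stderr in the foreground (so rsync would run from a
second terminal):
# guestmount --ro -v -x --no-fork -a /dev/lvm1-vol/sr8-disk1a \
    -a /dev/lvm1-vol/sr8-disk2 -a /dev/lvm1-vol/sr8-disk3 \
    -a /dev/lvm1-vol/sr8-disk4 -a /dev/lvm1-vol/sr8-disk5 \
    -a /dev/lvm1-vol/sr8-disk6 -m /dev/sdb2 /mnt/sr8-sdb2 \
    2> /tmp/guestmount-debug.log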
Kind regards,
Jelle de Jong