Best regards
Graeme
On Tue, Jan 07, 2014 at 10:06:16AM +0000, Graeme Lambert wrote:
> Hi,
>
> I'm trying to run virt-sysprep against a disk in Ceph RBD storage, but I
> appear to be unable to do so. The commands (bold) and outputs I've had are:
>
> On the Ceph node:
>
> *virt-sysprep -a rbd://localhost/libvirt-pool/ubuntu-12-04-beanstalk001*
> libguestfs: new guestfs handle 0x113b060
> rbd://localhost/libvirt-pool/ubuntu-12-04-beanstalk001: No such file or directory
> libguestfs: trace: close
> libguestfs: closing guestfs handle 0x113b060 (state 0)

Ahhhh ... I think the problem is that your virt-sysprep is too old, i.e. it
predates our adding support for remote disks. IIRC you need >= 1.22, and you
say below that you are using 1.14.8, which is very old.

Note that even if you have the latest version, no one has actually tested the
Ceph support, which probably means it's buggy. There is some discussion on the
mailing list about this.

> On the libvirt node:
>
> *virt-sysprep -d beanstalk002 --hostname beanstalk002 --format rbd*
> Examining the guest ...
> Fatal error: exception Guestfs.Error("libvirt domain has no disks")

The error means that virt-sysprep was not able to find any "ordinary" disks by
looking at the libvirt XML description of the guest. It's likely to be the
same problem as above, since the old virt-sysprep will ignore anything which
is not a local file.

> I'm using Ubuntu 13.04 on the libvirt node and Ubuntu 12.04.3 LTS on the
> Ceph node. The libguestfs version showing in dpkg is 1:1.14.8-1.
>
> This works fine on qcow2 images that are on disk, but I'm moving towards
> having all disks in Ceph RBD and would like to be able to virt-sysprep
> disks in there if at all possible.
>
> Please can you advise where I'm going wrong?

If the Ceph support is broken even upstream, then it will need development
work to fix (starting with filing a bug ...). But anyway, nothing is expected
to work with 1.14.

Rich.
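
PS. Once you do have libguestfs >= 1.22 installed, the same command from the
report above ought to be a reasonable thing to retry. This is only a rough
sketch, reusing the pool and image names from the report and assuming the
Ceph monitor really is reachable on localhost:

    # check the installed tools are new enough (remote disks need >= 1.22)
    virt-sysprep --version

    # operate on the RBD image directly, as attempted above
    virt-sysprep -a rbd://localhost/libvirt-pool/ubuntu-12-04-beanstalk001

For the -d case, it may also be worth looking at what the tool actually sees
in the domain XML, e.g. with "virsh dumpxml beanstalk002": RBD-backed disks
typically appear there as <disk type='network'> with an rbd source, which is
exactly the kind of non-local disk the 1.14 tools skip over.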