On Tue, Apr 07, 2020 at 01:25:02PM +0200, Pino Toscano wrote:
The important thing is still that you need to have space for the
temporary files somewhere: be it /var/tmp, /mnt/scratch, whatever.
Because of this, and the fact that containers are usually created
fresh, caching the supermin appliance makes little sense, so a very
simple solution is to point libguestfs at that extra space:
$ dir=$(mktemp -d /path/to/big/temporary/space/libguestfs.XXXXXX)
$ export LIBGUESTFS_CACHEDIR=$dir
$ export TMPDIR=$dir # optionally
$ libguestfs-test-tool
[etc]
$ rm -rf $dir
Easy to use, already doable, solves all the issues.
So AIUI there are a few problems with this (although I'm still
investigating and trying to make a local reproducer):
- The external large space may be on NFS, with the usual problems
  there like root_squash, no SELinux labels, and slowness.  This
  means it's not suitable for the appliance, but it might
  nevertheless be suitable for overlay files (see the sketch after
  this list).
- The external large space may be shared with other containers, and
  I'm not convinced that our locking in supermin will be safe if
  multiple parallel instances start up at the same time.  We
  certainly never tested it, and don't currently advise it.
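If the NFS space really can't hold the appliance but is fine for the
big transient files, one possible stop-gap is to split the two
locations.  This is only a sketch, untested: it assumes the overlays
end up under LIBGUESTFS_TMPDIR / TMPDIR (which I'd want to verify for
virt-v2v), and /mnt/scratch stands in for the shared space:
$ export LIBGUESTFS_CACHEDIR=/var/tmp    # local disk: appliance cache
$ export LIBGUESTFS_TMPDIR=/mnt/scratch  # shared space: large temp files
$ export TMPDIR=/mnt/scratch
$ libguestfs-test-tool                   # check the appliance still builds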
One may well say (and I might even agree): don't do that, or change
the NFS configuration, or don't use NFS.  But we may have to deal
with a Kubernetes configuration as it already exists, or find that
the tenant != Kubernetes admin.
Another issue, incidental to this but related, is that we cannot use
the fixed appliance because docker/podman have broken support for
sparse files.  (It's possible to configure libguestfs to use qcow2
for fixed appliances, but we don't do that now, we never really
tested it, and it both considerably slows down the supermin build
step and adds more dependencies to it.)
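For completeness, the fixed appliance route would look roughly like
this (a sketch only; /appliance is a placeholder, and it's presumably
the step that copies the appliance into an image layer where the
sparseness of the root disk gets lost):
$ libguestfs-make-fixed-appliance /appliance
$ ls -lsh /appliance          # kernel, initrd and a large *sparse* root disk
$ export LIBGUESTFS_PATH=/appliance
$ libguestfs-test-tool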
This whole problem started from a QE report about leftover files
after failed migrations: bz#1820282.
(I should also note that there are two bugs which I personally think
we can solve with the same fix, but they are completely different
bugs.)
What this report doesn't say, however, is that besides the files that
virt-v2v creates, there are also leftover files that libguestfs
itself creates.  These are usually files downloaded from the guest
during inspection, and they are generally not that big compared to
e.g. the overlays that virt-v2v creates.  Nonetheless, an abrupt exit
of virt-v2v will leave them in place, and they will still slowly fill
up the space in /var/tmp (or wherever $LIBGUESTFS_CACHEDIR points).
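On the caller's side, the usual trap-based cleanup at least covers
the ordinary failure paths, though it obviously does nothing for the
SIGKILL / abrupt-exit case described above (again just a sketch, with
the same placeholder path as before):
$ dir=$(mktemp -d /path/to/big/temporary/space/libguestfs.XXXXXX)
$ export LIBGUESTFS_CACHEDIR=$dir TMPDIR=$dir
$ trap 'rm -rf "$dir"' EXIT   # remove the directory when the shell exits
$ virt-v2v [etc]              # the actual conversion command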
I guess small files being left around aren't really a problem.  The
problem they reported is large files being left around, and I can
understand why those would be an issue while the small files are not.
Sure, creating a special directory for virt-v2v solves /some/ of the
issues, and most probably the ones that matter most from a disk space
POV.  However, now that we have "uncovered" the issue and started to
get our hands on it, I don't think it would be ideal to create an
ad-hoc solution that solves only some of them.
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v