On Tue, 16 Jun 2020 22:06:58 +0100
"Richard W.M. Jones" <rjones(a)redhat.com> wrote:
> On Tue, Jun 16, 2020 at 04:34:15PM +0200, Tomáš Golembiovský wrote:
> > On Wed, 10 Jun 2020 11:31:33 +0100
> > "Richard W.M. Jones" <rjones(a)redhat.com> wrote:
> >
> > > I finally got access to the container. This is how it's configured:
> > >
> > > * / => an overlay fs.
> > >
> > > There is sufficient space here, and there are no "funny" restrictions,
> > > to be able to create the libguestfs appliance. I proved this by
> > > setting TMPDIR to a temporary directory under / and running
> > > libguestfs-test-tool.
> > >
> > > There appears to be quite a lot of free space here, so in fact the
> > > v2vovl files could easily be stored here too. (They only store the
> > > conversion delta, not the full guest images.)
> >
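
For anyone trying to reproduce the check described above, a minimal pod
sketch along these lines should exercise the same path; the image and
names are hypothetical, the point is only that TMPDIR lands on the
overlay / and libguestfs-test-tool builds its appliance there:

# Hypothetical pod spec: TMPDIR points at a directory on the container's
# overlay root and libguestfs-test-tool is run against it.
apiVersion: v1
kind: Pod
metadata:
  name: libguestfs-tmpdir-test        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: quay.io/example/virt-v2v   # hypothetical image
    command: ["/bin/sh", "-c", "mkdir -p /v2v-tmp && libguestfs-test-tool"]
    env:
    - name: TMPDIR
      value: /v2v-tmp                 # a directory on the overlay /
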
> > The thing is that nobody can guarantee that there will be enough space.
> > The underlying filesystem (not the mountpoint) is shared between all the
> > containers running on the host. This is the reason why we have a PV on
> > /var/tmp -- to make sure we have guaranteed free space.
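
A minimal sketch of that pattern, with illustrative names and sizes and
assuming dynamic provisioning: a PVC requesting dedicated space, mounted
at /var/tmp so the scratch data is backed by the claim rather than by
whatever happens to be free on the shared host filesystem.

# Illustrative PVC plus a pod mounting it at /var/tmp.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: v2v-scratch                   # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: v2v-conversion-pod            # hypothetical name
spec:
  containers:
  - name: v2v
    image: quay.io/example/virt-v2v   # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /var/tmp
  volumes:
  - name: scratch
    persistentVolumeClaim:
      claimName: v2v-scratch
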
> This must surely be a problem for all containers?  Do containers
> behave semi-randomly when the host starts to run out of space?  All
> containers must have to assume that there's some space available in
> /tmp or /var/tmp surely.
My understanding is that a container should not require a significant
amount of free space at runtime. If you need permanent data storage or
a larger amount of temporary storage, you need to use a volume.
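
For purely temporary storage, one volume type that avoids NFS entirely
is an emptyDir with a size limit. A rough sketch, values illustrative:

# Illustrative only: an emptyDir scratch volume with a size limit,
# mounted where the conversion expects its temporary files.
apiVersion: v1
kind: Pod
metadata:
  name: v2v-emptydir-example          # hypothetical name
spec:
  containers:
  - name: v2v
    image: quay.io/example/virt-v2v   # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /var/tmp
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 2Gi                  # pod is evicted if this is exceeded

Note that sizeLimit is an eviction threshold rather than a reservation;
the data still lives on the node's local disk.
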
> If we can guarantee that each container has 1 or 2 G of free space
> (doesn't seem unreasonable?) then virt-v2v should work fine and won't
> need any NFS mounts.
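
Kubernetes does have a knob aimed at exactly this kind of guarantee:
ephemeral-storage requests and limits, which the scheduler uses to place
pods on nodes with enough local scratch space. A sketch, with an
illustrative image and figures:

# Illustrative ephemeral-storage request/limit for local scratch space.
apiVersion: v1
kind: Pod
metadata:
  name: v2v-ephemeral-example         # hypothetical name
spec:
  containers:
  - name: v2v
    image: quay.io/example/virt-v2v   # hypothetical image
    resources:
      requests:
        ephemeral-storage: 2Gi        # scheduler reserves this on the node
      limits:
        ephemeral-storage: 4Gi        # pod is evicted if it exceeds this
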
NFS is just one of the methods to provision a volume. There is also a
host-local provisioner that makes sure there is enough space on the
local storage -- which seems to be what you are suggesting -- but I do
not recall what the arguments against using it were.
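
For reference, host-local storage is usually expressed as a StorageClass
with no dynamic provisioner plus pre-created local PVs; the sketch below
uses illustrative names and paths and may well differ from the
provisioner that was actually considered:

# Illustrative local-storage setup: a StorageClass with no dynamic
# provisioner, plus one pre-created local PV that PVCs can bind to.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-scratch                 # hypothetical name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-scratch-node1           # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-scratch
  local:
    path: /mnt/v2v-scratch            # directory pre-created on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]           # hypothetical node name
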
> > > * /var/tmp => an NFS mount from a PVC
> > >
> > > This is a large (2T) external NFS mount.
> >
> > I assume that is the free space in the underlying filesystem. From there
> > you should be guaranteed to "own" only 2GB (or something like that).
> >
> > > I actually started two pods
> > > to see if they got the same NFS mount point, and they do.  Also I
> > > wrote files to /var/tmp in one pod and they were visible in the other.
> > > So this seems shared.
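
That behaviour is what a ReadWriteMany, NFS-backed claim gives you:
every pod that references the same PVC mounts the same export. A sketch
of what the setup presumably looks like, with a hypothetical server and
PV name (the claim name is the one mentioned later in the thread):

# Illustrative NFS-backed PV/PVC: any pod that mounts this claim sees
# the same underlying export, which is why files written in one pod
# were visible in the other.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v2v-conversion-temp-pv        # hypothetical name
spec:
  capacity:
    storage: 2Ti
  accessModes: ["ReadWriteMany"]
  nfs:
    server: nfs.example.com           # hypothetical server address
    path: /exports/v2v                # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: v2v-conversion-temp           # the fixed name seen in testing
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Ti
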
> >
> > You mean you run two pods based on some YAML template or you run two
> > pods from Kubevirt web UI?
> Two from a yaml template, however ...
>
> > If you run the pods manually you may have
> > reused the existing PV/PVC. It is the web UI that should provision you
> > new scratch space for each pod. If that is not working then that is a
> > bug in Kubevirt.
>
> ... the PVC name was "v2v-conversion-temp" (and not some randomly
> generated name) suggesting that either the user must enter a new name
> every time or else they're all going to get the same NFS mount.
>
> Can you explain a bit more about how they get different mounts?
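
If the intent is a fresh volume per conversion, one common pattern
(sketched here with hypothetical names, and not necessarily what the
Kubevirt web UI actually does) is to let the API server generate a
unique claim name instead of hard-coding one:

# Illustrative only: generateName gives each conversion its own claim
# (and therefore its own dynamically provisioned volume) instead of
# every pod binding to the single fixed "v2v-conversion-temp" claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  generateName: v2v-conversion-temp-  # API server appends a random suffix
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi

generateName only takes effect on create (e.g. kubectl create -f), not
on apply, and whatever creates the conversion pod then has to plug the
generated name into the pod's claimName.
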
Oh, I see. That seems wrong then and is probably a bug in the Kubevirt
web UI. Do you still have access to the testing environment?
> > > Also it uses root squash (so root:root is
> > > mapped to 99:99).
> >
> > IMHO this is the main problem that I have been telling them about from
> > the start. Thanks for confirming it. Using root squash on the mount is
> > plain wrong.
> This is definitely the main problem, and is the direct cause of the
> error you were seeing.  I'm still not very confident that our locking
> will work reliably if two virt-v2v instances in different containers
> or pods see a shared /var/tmp.
They should not. If they do then it is a bug somewhere else and not in
virt-v2v.
Tomas
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-df lists disk usage of guests without needing to install any
> software inside the virtual machine.  Supports Linux and Windows.
> http://people.redhat.com/~rjones/virt-df/
--
Tomáš Golembiovský <tgolembi(a)redhat.com>