On Thu, 16 Nov 2017 13:56:16 +0100
Tomáš Golembiovský <tgolembi(a)redhat.com> wrote:
On Wed, 15 Nov 2017 21:41:20 +0100
Gandalf Corvotempesta <gandalf.corvotempesta(a)gmail.com> wrote:
> 2017-11-15 21:29 GMT+01:00 Richard W.M. Jones <rjones(a)redhat.com>:
> > Gandalf, is there an XVA file publically available (pref. not
> > too big) that we can look at?
>
> I can try to provide one, but it's simple:
>
> # tar tvf 20160630_124823_aa72_.xva.gz | head -n 50
> ---------- 0/0 42353 1970-01-01 01:00 ova.xml
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000000
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000000.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000001
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000001.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000003
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000003.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000004
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000004.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000005
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000005.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000006
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000006.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000007
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000007.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000009
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000009.checksum
> ---------- 0/0 1048576 1970-01-01 01:00 Ref:175/00000010
> ---------- 0/0 40 1970-01-01 01:00 Ref:175/00000010.checksum
>
>
> You can ignore the ova.xml and just use the "Ref:175" directory.
> Inside the XVA you'll find one "Ref" directory for each virtual disk
> (the ref number is different for each disk).
> Inside each Ref directory, you'll get tons of 1MB files, corresponding
> to the raw image.
> You have to merge these files into a single raw file, with just one
> exception: the numbers are not sequential.
> As you can see above, we have: 00000000, 00000001, 00000003.
>
> The 00000002 is missing because it's completely blank. XenServer
> doesn't export empty blocks, so it skips the corresponding 1MB file.
> When building the raw image, you have to replace each missing block
> with 1MB of zeros.
>
> This is (in addition to the tar extraction) the most time-consuming part.
> Currently I'm rebuilding a 250GB image, with tons of files to be
> merged.
> If qemu-img could be patched to convert this kind of format
> automatically, I could save about 3 hours (30 minutes for extracting
> the tarball, and about 2 hours to merge a 250-300GB image).
>
Hi,
I like what Richard and Max came up with. Pretty nifty solutions.
While there's nothing wrong with them, I decided to take my own shot at
it. Since the blocks in the tar file are pieces of the raw image, there
is no conversion happening. All in all, it's just moving bytes from one
place to another. That means there shouldn't be a need for any heavy
machinery, right? :)
Attached is a shell script that uses dd to do the byte-shuffling.
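For reference, a minimal sketch of the dd approach (this is not the
attached script; the function name is mine, and GNU dd's status=none
plus bash's base-10 arithmetic prefix are assumptions):

```bash
#!/bin/bash
# Sketch: reassemble the 1MB chunks of one "Ref:NNN" directory into a
# raw image.  Each chunk is written at offset <number> * 1MB via seek=;
# missing (all-zero) chunks are simply never written, so they remain
# holes in the sparse output file and read back as zeros.
merge_xva_ref() {
    local ref_dir=$1 out=$2
    : > "$out"                       # start from an empty file
    for chunk in "$ref_dir"/????????; do
        [ -e "$chunk" ] || continue  # glob matched nothing
        local idx
        idx=$(basename "$chunk")
        # 10#$idx forces base 10; leading zeros would otherwise be
        # read as octal by the shell
        dd if="$chunk" of="$out" bs=1M seek=$((10#$idx)) \
           conv=notrunc status=none
    done
}
```

Used as e.g. `merge_xva_ref Ref:175 disk.raw`. Note that the
`????????` glob matches the 8-digit chunk names but not the longer
`.checksum` files, so those are skipped for free.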
I'm pretty sure this could even be safely parallelized by running
multiple instances of dd at the same time (e.g. with xargs). But I did
not try that.
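The parallel idea could look something like this (again only a sketch,
not something I tested; assumes GNU xargs for -P/-0 and exports the
output path so the child shells can see it). It should be safe because
every chunk lands at its own offset, so the concurrent dd processes
write disjoint 1MB ranges and cannot clobber each other:

```bash
#!/bin/bash
# Sketch: same reassembly, but with up to four dd processes running
# concurrently via xargs -P.  Each dd writes a disjoint 1MB range of
# the output file, so no locking is needed.
parallel_merge_xva_ref() {
    export OUT=$2               # child shells read this
    : > "$OUT"
    # -type f -name '????????' selects the 8-digit chunk files only
    find "$1" -maxdepth 1 -type f -name '????????' -print0 |
        xargs -0 -P 4 -I{} bash -c '
            idx=$(basename "$1")
            dd if="$1" of="$OUT" bs=1M seek=$((10#$idx)) \
               conv=notrunc status=none' _ {}
}
```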
Oh yes, and one more thing: as with the other solutions, I didn't
bother reading the XML for the target size, so a resize may be
necessary afterwards.
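Since a raw image is just a flat file, that resize is a plain truncate
to the virtual disk size from ova.xml (qemu-img resize on a raw image
does the same thing). A sketch, with the 4G size being a made-up
example rather than anything read from the XML:

```bash
# Sketch: grow the reassembled raw image to the virtual disk size.
# The extension is a hole, so it costs no disk space and reads as zeros.
truncate -s 4G disk.raw
```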
Tomas
--
Tomáš Golembiovský <tgolembi(a)redhat.com>