For the raw format local disk to local disk conversion, it's possible
to regain most of the performance by adding
--request-size=$(( 16 * 1024 * 1024 )) to the nbdcopy command. The
patch below is not suitable for going upstream but it can be used for
testing:
diff --git a/v2v/v2v.ml b/v2v/v2v.ml
index 47e6e937..ece3b7d9 100644
--- a/v2v/v2v.ml
+++ b/v2v/v2v.ml
@@ -613,6 +613,7 @@ and nbdcopy output_alloc input_uri output_uri =
let cmd = ref [] in
List.push_back_list cmd [ "nbdcopy"; input_uri; output_uri ];
List.push_back cmd "--flush";
+ List.push_back cmd "--request-size=16777216";
(*List.push_back cmd "--verbose";*)
if not (quiet ()) then List.push_back cmd "--progress";
if output_alloc = Types.Preallocated then List.push_back cmd "--allocated";
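With the patch applied the copy step just grows that one extra option.
For testing outside virt-v2v the same flag can be passed to nbdcopy
directly; nbdcopy also accepts plain local files, so the raw local to
local case can be exercised with something like this (the file names
here are just placeholders):

  $ nbdcopy --flush --progress \
        --request-size=$(( 16 * 1024 * 1024 )) input.img output.img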
The problem is of course that this is a pessimisation for other
conversions. It's known to make at least qcow2-to-qcow2 and all VDDK
conversions worse. So we'd have to make it conditional on doing a raw
format local conversion, which is a pretty ugly hack. Even worse, the
exact size (16M) varies when I test this on different machines and on
HDDs vs SSDs. On my very fast AMD machine with an SSD, the nbdcopy
default request size (256K) is fastest and larger sizes are very
slightly slower.
I can imagine an "adaptive nbdcopy" which adjusts these parameters
while copying in order to find the best performance. A little bit
hard to implement ...
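In the meantime a crude, non-adaptive approximation is just to sweep a
few request sizes and time a full copy at each one. This is only a
sketch (the file names are placeholders, and it is not the attached
test script):

  #!/bin/bash
  # Time a raw local-to-local copy at several request sizes.
  # infile.img and outfile.img are placeholder raw images.
  truncate -s "$(stat -c %s infile.img)" outfile.img
  for size in $(( 256 * 1024 )) $(( 1024 * 1024 )) \
              $(( 4 * 1024 * 1024 )) $(( 16 * 1024 * 1024 )); do
      echo "request-size = $size"
      # For stable numbers you may want to drop the page cache between runs.
      time nbdcopy --flush --request-size=$size infile.img outfile.img
  done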
I'm also still wondering exactly why a larger request size is better
in this case. You can easily reproduce the effect using the attached
test script and adjusting --request-size. You'll need to build the
standard test guest (see part 1).
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/