On Fri, Aug 27, 2010 at 04:27:23PM +0100, Matthew Booth wrote:
+#define HREAD_MESSAGE_MAX (GUESTFS_MESSAGE_MAX - \
+                           sizeof(guestfs_message_header) - \
+                           sizeof(guestfs_hread_args))
+  if (size > (int64_t) HREAD_MESSAGE_MAX) {
+    size = HREAD_MESSAGE_MAX;
+  }
+#undef HREAD_MESSAGE_MAX
This upper limit isn't correct, because the sizeof of a C structure has
no relationship to its XDR-serialized size.  Use GUESTFS_MESSAGE_MAX
here just to prevent huge mallocs, and allow ordinary message
serialization to divine the true upper limit.
See how pread does it here:
http://git.annexia.org/?p=libguestfs.git;a=blob;f=daemon/file.c;h=476f445...
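i.e. something like this.  This is only a sketch modelled on the pread
code, not the exact source; the function name read_from_handle and the
error text are mine, not from the patch:

  #include <stdlib.h>
  #include <stdint.h>
  #include <unistd.h>

  #include "daemon.h"            /* reply_with_error, reply_with_perror */
  #include "guestfs_protocol.h"  /* GUESTFS_MESSAGE_MAX */

  /* 'size' is the count requested by the client.  Returns the buffer
   * to send back, or NULL on error.
   */
  static char *
  read_from_handle (int fd, int64_t size, size_t *size_r)
  {
    char *buf;
    ssize_t r;

    /* GUESTFS_MESSAGE_MAX only stops us attempting a huge malloc here.
     * The real limit is enforced later when the reply is XDR-serialized.
     */
    if (size >= GUESTFS_MESSAGE_MAX) {
      reply_with_error ("size is too large for the protocol, use smaller requests");
      return NULL;
    }

    buf = malloc (size);
    if (buf == NULL) {
      reply_with_perror ("malloc");
      return NULL;
    }

    r = read (fd, buf, size);
    if (r == -1) {
      reply_with_perror ("read");
      free (buf);
      return NULL;
    }

    *size_r = r;
    return buf;
  }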
# Specification in the form of a few gnulib-tool.m4 macro
invocations:
gl_LOCAL_DIR([])
gl_MODULES([
+ array-list
I wonder if using a hash would be better?  It should be faster for
large numbers of handles, although that's an unlikely scenario, and we
already use hashes in a few places.  However, the API of the Gnulib
hash module is fearsomely complicated.
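To illustrate what I mean -- purely a sketch, not code from the patch,
and the struct and helper names are invented -- a handle table on top
of Gnulib's hash module needs roughly this much boilerplate:

  #include <stdlib.h>
  #include <stdbool.h>
  #include "hash.h"                 /* Gnulib hash module */

  /* Invented for illustration -- not from the patch. */
  struct open_handle {
    int id;                         /* handle number given to callers */
    int fd;                         /* underlying file descriptor */
  };

  static Hash_table *handles;

  static size_t
  handle_hasher (const void *x, size_t table_size)
  {
    return (size_t) ((const struct open_handle *) x)->id % table_size;
  }

  static bool
  handle_comparator (const void *x, const void *y)
  {
    return ((const struct open_handle *) x)->id ==
           ((const struct open_handle *) y)->id;
  }

  static int
  init_handles (void)
  {
    handles = hash_initialize (16, NULL, handle_hasher,
                               handle_comparator, free);
    return handles ? 0 : -1;
  }

  static struct open_handle *
  lookup_handle (int id)
  {
    struct open_handle key = { .id = id };
    return hash_lookup (handles, &key);
  }

For a handful of handles a linear scan of the array-list is obviously
fine, so this is only worth it if handle lookups ever become hot.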
--- /dev/null
+++ b/regressions/test-hfile.pl
The tests are much better now.
Can we test that it works if you have two handles open? Use
guestfs_fill_pattern to fill a couple of files with test pattern data
and check that reads from each are handled independently.
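Something along these lines would do.  I've sketched it in C for
brevity rather than as Perl for test-hfile.pl, and note that
guestfs_hopen_file and the guestfs_hread signature below are only my
guesses at the API under review (guestfs_fill_pattern is the existing
call):

  #include <assert.h>
  #include <stdlib.h>
  #include <string.h>
  #include <guestfs.h>

  /* Assumes 'g' is already launched with a filesystem mounted at /.
   * guestfs_hopen_file and guestfs_hread are hypothetical -- my guess
   * at the API under review.
   */
  static void
  test_two_handles (guestfs_h *g)
  {
    if (guestfs_fill_pattern (g, "abcdefghij", 1024, "/file1") == -1 ||
        guestfs_fill_pattern (g, "0123456789", 1024, "/file2") == -1)
      exit (EXIT_FAILURE);

    int h1 = guestfs_hopen_file (g, "/file1");  /* hypothetical */
    int h2 = guestfs_hopen_file (g, "/file2");  /* hypothetical */
    if (h1 == -1 || h2 == -1)
      exit (EXIT_FAILURE);

    /* Interleave reads and check each handle returns its own file's
     * pattern, i.e. the handles don't share state.
     */
    size_t n1, n2;
    char *r1 = guestfs_hread (g, h1, 10, &n1);  /* hypothetical signature */
    char *r2 = guestfs_hread (g, h2, 10, &n2);
    assert (r1 && n1 == 10 && memcmp (r1, "abcdefghij", 10) == 0);
    assert (r2 && n2 == 10 && memcmp (r2, "0123456789", 10) == 0);
    free (r1);
    free (r2);
  }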
+ ("hwrite", (RErr, [Int "handle"; BufferIn
"content"]), 272,
+ [ProtocolLimitWarning], [],
+ "write data to an open handle",
+ "\
+This command writes all the data in C<content> to C<handle>.");
Although this is fine for files, it ties us to this "full write"
behaviour even if we decided to implement pipes, say.
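For example -- just a sketch, and do_hwrite, get_handle_fd and the
handle argument are my guesses at the daemon side of this patch -- a
pwrite-style variant that returns the number of bytes actually written
would leave room for short writes on pipes later:

  #include <unistd.h>

  #include "daemon.h"       /* reply_with_error, reply_with_perror */

  /* Sketch of an alternative: return the byte count actually written,
   * analogous to pwrite, instead of guaranteeing a full write.
   */
  int
  do_hwrite (int handle, const char *content, size_t content_size)
  {
    int fd = get_handle_fd (handle);      /* hypothetical lookup */
    if (fd == -1) {
      reply_with_error ("invalid handle: %d", handle);
      return -1;
    }

    ssize_t r = write (fd, content, content_size);
    if (r == -1) {
      reply_with_perror ("write");
      return -1;
    }

    return (int) r;   /* caller loops if it wants a full write */
  }

That would mean RInt instead of RErr in the generator, of course.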
In general the API is fine and I'd like to apply it.  See also my
previous email about problems with the patch format.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)
http://cocan.org/getting_started_with_ocaml_on_red_hat_and_fedora