On Sat, Nov 06, 2021 at 01:22:50AM +0200, Nir Soffer wrote:
> Minimize reallocs by growing the backing array by a factor of 1.5.
s/urils/utils/ in the subject.
> Testing shows that now append() is fast without calling reserve()
> upfront, simplifying code using vector.
>
>   NBDKIT_BENCH=1 ./test-vector
>   bench_reserve: 1000000 appends in 0.004496 s
>   bench_append: 1000000 appends in 0.004180 s
>
> This can make a difference in code appending millions of items.
Yes, this is a well-known technique needed to turn what is otherwise a
quadratic O(n^2) growth loop into a much nicer amortized O(n) growth
pattern.
> Ported from libnbd:
> https://listman.redhat.com/archives/libguestfs/2021-October/msg00306.html
> ---
>  common/utils/vector.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/common/utils/vector.c b/common/utils/vector.c
> index 00cd2546..7df17e1b 100644
> --- a/common/utils/vector.c
> +++ b/common/utils/vector.c
> @@ -41,11 +41,21 @@ int
>  generic_vector_reserve (struct generic_vector *v, size_t n, size_t itemsize)
>  {
>    void *newptr;
> +  size_t reqalloc, newalloc;
>
> -  newptr = realloc (v->ptr, (n + v->alloc) * itemsize);
> +  reqalloc = v->alloc + n;
> +  if (reqalloc < v->alloc)
> +    return -1; /* overflow */
> +
> +  newalloc = (v->alloc * 3 + 1) / 2;
> +
> +  if (newalloc < reqalloc)
> +    newalloc = reqalloc;
> +
> +  newptr = realloc (v->ptr, newalloc * itemsize);
Same potential for overflow as in the libnbd patches. Whatever we fix
there, we'll have to port here as well.
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization:  qemu.org | libvirt.org