Richard W.M. Jones wrote:
> On Mon, Nov 09, 2009 at 11:35:55AM +0100, Jim Meyering wrote:
> > Richard W.M. Jones wrote:
> > > +      if (stderror) {
> > > +        se_size += r;
> > > +        p = realloc (*stderror, se_size);
> > > +        if (p == NULL) {
> > > +          perror ("realloc");
> > > +          goto quit;
> > > +        }
> > > +        *stderror = p;
> > > +        memcpy (*stderror + se_size - r, buf, r);
> >
> > Hi Rich,
> >
> > This change looks fine.
> >
> > However, there's an unrelated problem:
> > if stderr is very large, incrementing se_size by "r"
> > could result in a smaller total size (overflow).
> > Then, the following memcpy would clobber the heap
> > by writing a potentially very large amount of data
> > into a much smaller (or even 0-length) buffer.
> >
> > One way to avoid it:
> >
> >     size_t new_size = se_size + r;
> >     if (new_size < se_size) {
> >       fprintf (stderr, "standard error too large\n");
> >       goto quit;
> >     }
> >     se_size = new_size;
> >
> > Or perhaps better: limit the buffer size to, say, 1 MiB,
> > and warn if truncation is required.
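
For concreteness, the check combined with the suggested cap might look
something like the sketch below.  This is only illustrative: the helper
name append_stderr and the MAX_STDERR_SIZE constant are made up for the
example and don't come from the current source.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Illustrative cap: 1 MiB, as suggested above. */
  #define MAX_STDERR_SIZE (1024 * 1024)

  /* Append r bytes from buf to the buffer *se of current size *se_size,
   * growing it with realloc.  Returns 0 on success, -1 on failure
   * (overflow, cap exceeded, or out of memory).
   */
  static int
  append_stderr (char **se, size_t *se_size, const char *buf, size_t r)
  {
    size_t new_size = *se_size + r;

    /* size_t arithmetic is modular, so on overflow new_size wraps
     * around and becomes smaller than the old size.
     */
    if (new_size < *se_size) {
      fprintf (stderr, "standard error too large\n");
      return -1;
    }

    /* Stop growing past the cap instead of eating all memory; the
     * caller keeps whatever was collected so far and may treat this
     * as non-fatal truncation.
     */
    if (new_size > MAX_STDERR_SIZE) {
      fprintf (stderr, "warning: standard error truncated at %d bytes\n",
               MAX_STDERR_SIZE);
      return -1;
    }

    char *p = realloc (*se, new_size);
    if (p == NULL) {
      perror ("realloc");
      return -1;
    }
    *se = p;
    memcpy (*se + *se_size, buf, r);   /* copy at the old end */
    *se_size = new_size;
    return 0;
  }

  /* Minimal usage example. */
  int
  main (void)
  {
    char *se = NULL;
    size_t se_size = 0;
    char buf[256] = "captured stderr output\n";

    if (append_stderr (&se, &se_size, buf, sizeof buf) == 0)
      printf ("accumulated %zu bytes\n", se_size);
    free (se);
    return 0;
  }
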
> If I understand this correctly, se_size would have to be O(2GB) for
> this to be a problem?  I guess we'd have run out of memory long before
> this, because we only give the appliance ~500MB of memory, of which
> much less is actually usable.
Right. For now, it's definitely only theoretical.
And since r is never larger than 256 (sizeof buf),
it'd take a long time to read that much data.
> Or am I not understanding this right?
>
> In any case it won't hurt to add a check here.
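
To put a number on it: size_t is unsigned, so on a 32-bit platform the
wrap happens just short of 4 GiB (on 64-bit it is astronomically far
away).  Here's a tiny standalone demo of the modular arithmetic behind
the new_size < se_size test, using uint32_t to model a 32-bit size_t:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint32_t se_size = UINT32_MAX - 100; /* nearly 4 GiB accumulated */
    uint32_t r = 256;                    /* sizeof buf in the code above */
    uint32_t new_size = se_size + r;     /* wraps modulo 2^32 */

    printf ("se_size  = %" PRIu32 "\n", se_size);   /* 4294967195 */
    printf ("new_size = %" PRIu32 "\n", new_size);  /* 155: smaller! */
    printf ("overflow caught: %s\n",
            new_size < se_size ? "yes" : "no");
    return 0;
  }

At 256 bytes per read, reaching the 32-bit wrap point would take nearly
17 million reads, which is why this is theoretical rather than urgent.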