On Oct 20, 2011, at 04:46, Richard W.M. Jones wrote:

> On Thu, Oct 20, 2011 at 12:37:54PM +0100, Richard W.M. Jones wrote:
> > of the tests could have picked up this type of bug. I think the tests
> > (all of them) need to be changed to be more thorough.
>
> And while you're doing that, is there a way to avoid the
> hard-coded '297680'?
>
> I suggest two possibilities: create another test hive (derived from
> 'large') which will be specific to this test and we will never change
> in future. Since this won't change, we can go ahead and use
> hard-coded offsets.

I see two options for the test hive.

The first, which I would prefer, is to use an openly available hive
from a research disk image. For example, digitalcorpora.org has
several Windows disk images immediately available for analysis, and
any hive from one of them could be included as a sample in the
images/ directory. Unfortunately, I don't know how well this would
work with the GPL.

Alternatively, there is the attached patch, which adds a
not-too-creatively named hive generated from 'minimal'. (I didn't see
a benefit to deriving it from 'large', but that's easy enough to
change.)
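
For what it's worth, generating a frozen hive like this takes only a
few lines with the hivex Python bindings. The sketch below is not the
attached patch itself; the key name, value name, and output path are
invented for illustration:

#!/usr/bin/env python
# Sketch: derive a small, frozen test hive from 'minimal' with the
# hivex Python bindings.  Names and paths are placeholders.

import hivex

h = hivex.Hivex("images/minimal", write=True)

root = h.root()
child = h.node_add_child(root, "TestKey")      # hypothetical key name

# Registry data longer than four bytes is stored in a separate data
# cell; that is the "remotely stored" case the offset tests care
# about.  Type 3 is REG_BINARY.
h.node_set_value(child, {"key": "TestValue",
                         "t": 3,
                         "value": b"\x00" * 100})

# Write the result to a new file; 'minimal' itself is untouched.
h.commit("images/offset_test_hive")

Because the generated hive would never change again, hard-coding the
resulting offsets in the tests would then be safe.
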
> Or properly iterate through the hive using the standard APIs, to pick
> out the nodes we want to test.
>
> Second is probably better, but obviously more work.

The second may be better if a "walk" function in the fashion of
Python's os.walk could be used; I've sketched one below. However,
there is no other API mechanism for checking the remotely stored
value offsets, which is exactly what the hard-coded offset tests
verify.
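
To show what I have in mind, here is a rough sketch against the hivex
Python bindings; hive_walk is an invented helper, not an existing API:

#!/usr/bin/env python
# Sketch: an os.walk-style, top-down traversal of a hive using the
# hivex Python bindings.

import sys
import hivex

def hive_walk(h, node=None):
    """Yield (node, children, values) for every node, top-down."""
    if node is None:
        node = h.root()
    children = h.node_children(node)
    values = h.node_values(node)
    yield node, children, values
    for child in children:
        for triple in hive_walk(h, child):
            yield triple

if __name__ == "__main__":
    h = hivex.Hivex(sys.argv[1])
    for node, children, values in hive_walk(h):
        print("%s: %d children, %d values"
              % (h.node_name(node), len(children), len(values)))

A test built on this could assert properties of every node it visits
rather than of a handful of hand-picked nodes, but, as noted above,
the remotely stored data offsets themselves would still need the
hard-coded checks.
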
--Alex