On 5/17/23 23:05, Eric Blake wrote:
[Side note: if you really want a trip, read the 2023 SIGBOVIK
article
on "GradIEEEnt half decent" about 16-bit floating point values being
exploited for their non-linear rounding properties as a way to create
non-monotonic functions that can in turn form the basis of a Turing
complete system capable of running a 36-second solution of Mario level
1-1 in 19k minutes of wall time using only half-precision
floating-point operations...
https://sigbovik.org/2023/,
http://tom7.org/grad/murphy2023grad.pdf]
I've read the first few pages of this paper. It's amazing how capable
and dedicated the author is! (And I love his humor!)
Anyway: one of the footnotes says,
  There is seldom reason to change the rounding mode, and since it is
  a stateful act, you’re asking for it if you do. But the round-to-
  negative-infinity and round-to-positive-infinity modes are useful
  for interval arithmetic, which is arguably the only truly
  reasonable way to use floating point. What you do is represent
  numbers as intervals (low and high endpoints) that contain the true
  value, and then perform each calculation on both endpoints. For
  computations on the low endpoint, you round down, and symmetrically
  for the high endpoint. This way, the true value is always within the
  interval, and you also know how much inaccuracy you have
  accumulated!
This echoes my earlier encounters with "interval arithmetic", and only
strengthens my aversion towards floating point in the present context.
Please proceed with this series without me; I need to withdraw from
collaborating on it.
Laszlo