Not sure if this was written with AI assistance or not, but I've become allergic to linguistic triples since LLMs use them so much; reading "Same code. Same input. Different answer" makes me not want to read the rest.
It's weird that almost every AI-generated blurb tends to have that. Usually followed by "It's not just X, it's Y". I wonder where they picked that up from.
I think the problem is the mismatch between the intended evocative tone of tricolon crescens and the triviality of a 'computer quirk' in the grand scheme of things. "Use figures of speech, but don't sling them around like monkey shit" - my literature teacher.
At least the Rust compiler (this project is written in Rust) tries to configure LLVM specifically to avoid these discrepancies, and to treat all floating-point operations exactly as written with round-to-nearest behavior. It does not have any of the -ffast-math options that the author('s LLM) is panicking about.
The main caveat (except for NaN payload weirdness) is that on ancient x86 targets, LLVM is deeply wired to use the x87 instructions without attempting to emulate the IEEE overflow/underflow behavior [0]. So you'd have to be using a very unusual target to actually exhibit three such values.
[0] https://github.com/rust-lang/rust/issues/114479
If the original code was written in Rust, then I don't think the Rust compiler is allowed to do any of these "optimizations" of rewriting floating point expressions.
The C compiler also isn't allowed to do all of them. However, some people use fast-math, or compilers that default to fast-math, to break the rules. Some older targets may also use the 80-bit x87 FPU, which is its own mess; any sane compiler will default to properly sized SSE instructions instead.
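A minimal sketch (my own example, not from the article) of why that reassociation is off-limits by default: float addition is not associative, so rewriting (a + b) + c into a + (b + c) can silently change the answer.

    fn main() {
        let a = 1.0e16_f64;
        let b = -1.0e16_f64;
        let c = 1.0_f64;

        // (a + b) + c: the big terms cancel exactly first, so the 1.0 survives.
        let left = (a + b) + c;

        // a + (b + c): adding 1.0 to -1e16 rounds right back to -1e16
        // (1.0 is only half a ULP at that magnitude), so everything cancels.
        let right = a + (b + c);

        println!("(a + b) + c = {left}"); // 1
        println!("a + (b + c) = {right}"); // 0
        assert_ne!(left, right);
    }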
That's why they always teach: "never compare floats for equality."
Or maybe they don't teach that anymore, I dunno.
See link for the Fundamental Axiom of Floating Point Arithmetic: all floating-point arithmetic operations are exact up to a relative error of epsilon_machine.
https://www.johnbcoughlin.com/posts/floating-point-axiom/
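One way to see that bound concretely (a sketch, not a proof): do an operation in f32 and check it against an f64 reference, which is exact here because the product of two 24-bit significands fits in f64's 53 bits.

    fn main() {
        let a = 0.1_f32;
        let b = 0.7_f32;

        // The f32 multiply rounds once; the f64 product of the same two values
        // is exact, so it serves as the "true" result.
        let computed = (a * b) as f64;
        let exact = (a as f64) * (b as f64);

        let rel_err = ((computed - exact) / exact).abs();
        println!("relative error: {rel_err:e}");
        println!("f32 epsilon:    {:e}", f32::EPSILON);

        // The axiom: the rounded result is within epsilon_machine in relative terms.
        assert!(rel_err <= f32::EPSILON as f64);
    }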
I certainly teach that. When we work problems involving money, I always recommend students use integers for cents and only convert to dollars and cents when they have to print them.
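A sketch of that advice (the amounts are made up): accumulate cents in an integer and only format as dollars when printing.

    fn main() {
        // Summing $0.10 a hundred times as f64 does not land exactly on 10.0.
        let mut dollars = 0.0_f64;
        for _ in 0..100 {
            dollars += 0.10;
        }
        println!("{dollars}"); // not exactly 10
        assert!(dollars != 10.0);

        // Integer cents stay exact; convert to dollars only for display.
        let mut cents: i64 = 0;
        for _ in 0..100 {
            cents += 10;
        }
        assert_eq!(cents, 1000);
        println!("${}.{:02}", cents / 100, cents % 100); // $10.00
    }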
On the other hand, if you store a small integer in a float, comparing it for equality is generally reliable. E.g., setting a float to zero and later checking whether it is still zero.
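Right: every integer up to 2^53 is exactly representable in an f64, so equality works as long as every intermediate value stays an integer. A quick sketch:

    fn main() {
        // Integer-valued steps: each intermediate value is exactly representable,
        // so == behaves the way you'd expect.
        let mut x = 0.0_f64;
        assert!(x == 0.0);
        for _ in 0..10 {
            x += 1.0;
        }
        assert!(x == 10.0);

        // This only holds up to the 53-bit significand; past 2^53 consecutive
        // integers are no longer representable and the pattern breaks.
        let big = 2.0_f64.powi(53);
        assert!(big + 1.0 == big);
    }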
Yeah, in general, this is a problem that people have spent a lot of time thinking about; while floating-point numbers can be finicky, they're what you have to work with if you have inputs at multiple scales.
(Meanwhile, I wonder why it's a fair bit harder to look up Ozaki et al.'s optimized version [0] compared to Shewchuk's original paper [1], unless perhaps later authors have found it to be no improvement at all.)
[0] https://www.tuhh.de/ti3/paper/rump/OzBueOgOiRu15.pdf
[1] https://people.eecs.berkeley.edu/~jrs/papers/robust-predicat...
His predicates paper opens with "Computational geometers despise floating-point arithmetic", the same trick as the CG title: write the sentence a frustrated reader would write, then earn it. If you like those, the Triangle paper is the third one in the same key.
Careful what you wish for: x87's extended precision is what caused the original bug, with 80 bits in registers and 64 on a spill, so the value depended on register pressure.
ARM and WASM dropped it for a reason, and more bits would not help anyway: a sign is one bit, and any rounding step that disagrees at the last bit flips it.
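For a concrete version of that last point (just the classic textbook constants, nothing from the original bug): the same comparison can come out differently depending only on which precision did the rounding.

    fn main() {
        // In single precision the sum happens to round onto the same value
        // as the literal 0.3, so the difference is exactly zero.
        let f = 0.1_f32 + 0.2_f32;
        assert!(f == 0.3_f32);
        assert!(f - 0.3_f32 == 0.0);

        // In double precision the last-bit rounding disagrees, and the
        // "same" difference is now strictly positive: the comparison outcome
        // changed even though the exact answer is zero both times.
        let d = 0.1_f64 + 0.2_f64;
        assert!(d != 0.3_f64);
        assert!(d - 0.3_f64 > 0.0);
    }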