Optimization story: Switching from GMP to gcc's __int128 reduced run time by 95%
The point of this article is that arbitrary precision is less useful than it used to be: for a lot of applications, fixed-width 128-bit arithmetic is now precise enough and much faster.
Okay, but what if you need more than 128 bits? Then none of this helps; there's barely an article here.
> Using arithmetic operators is 99% faster than doing addition through the Mathematica API
> LOL, symbolic algebra folks in shambles XD
> For what might be obvious reasons, I would love for us to be able to find all 42 digit excellent numbers ;-) We need 70 bits for each half of such a number. We can use Steven Fuerst's 256-bit integer multiplication routines coupled with gcc's libquadmath to get there, but sqrtq is quite a bit slower (although not as slow as using GMP).
Did you even read it?
You only use arbitrary precision in specific situations.
For example, I needed to check whether a vector lies exactly on a line.
Not within 0.0000000000000000000001 of the line, but exactly.
So I used exact maths and just accepted the performance hit.