The way I understand it, computers can internally represent floating point numbers only as binary approximations with a finite number of binary digits. Since the mantissa (and the exponent) are stored with only a fixed, finite precision, values that don't fit exactly get rounded, and rounding errors happen.
Hope that's basically correct.
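For instance, here's a quick sketch in Python (any language using IEEE 754 doubles behaves the same way), showing that 0.1 has no exact finite binary representation, so the rounding shows up in ordinary arithmetic:

```python
# 0.1 and 0.2 have no exact finite binary representation, so IEEE 754
# doubles store rounded approximations -- and the error surfaces when
# you do arithmetic with them.
a = 0.1 + 0.2
print(a)             # 0.30000000000000004
print(a == 0.3)      # False

# Printing more digits reveals the stored approximation of 0.1 itself:
print(f"{0.1:.20f}")  # 0.10000000000000000555
```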