Hi!
Other than decreased performance and not being able to use literals and operators directly, are there any other disadvantages to using BigInteger and BigDecimal (in particular in Java)?
Thanks.
I think you pretty well summed it up: decreased performance, and code that is more difficult to write and maintain (significantly so compared to native values and simple operators). Don’t get me wrong, these classes absolutely have their place, but I wouldn’t use them without a compelling reason.
Serialisation and marshalling issues, plus the mental overhead of using compareTo instead of == and equals.
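To make the compareTo overhead concrete: BigDecimal.equals considers scale as well as numeric value, while compareTo compares value only, which is a classic source of bugs. A small sketch (class name is illustrative):

```java
import java.math.BigDecimal;

public class CompareDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");

        // equals() also compares scale, so these are "different"...
        System.out.println(a.equals(b));     // false
        // ...but compareTo() compares numeric value only.
        System.out.println(a.compareTo(b));  // 0
    }
}
```

This is why a HashSet (which uses equals/hashCode) and a TreeSet (which uses compareTo) can disagree about whether two BigDecimals are duplicates; normalising with stripTrailingZeros() before hashing is one common workaround.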
They do the job, but they are clunkier to work with and consume significantly more memory and processing power than the primitive types. Use them if you need them, but don’t reach for them when a primitive would do the job just as well.
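Here is what the clunkiness looks like in practice: the same small calculation with a primitive and with BigDecimal side by side (a sketch; the values and names are just for illustration):

```java
import java.math.BigDecimal;

public class VerbosityDemo {
    public static void main(String[] args) {
        // With primitives: literals and operators, one line.
        double total = (19.99 + 4.50) * 1.08;

        // With BigDecimal: every operation is a method call on an
        // immutable object, and you must keep the returned value.
        BigDecimal price    = new BigDecimal("19.99");
        BigDecimal shipping = new BigDecimal("4.50");
        BigDecimal taxRate  = new BigDecimal("1.08");
        BigDecimal exact    = price.add(shipping).multiply(taxRate);

        System.out.println(total);  // carries binary rounding noise
        System.out.println(exact);  // prints 26.4492 exactly
    }
}
```

The upside of the verbose version is exactness; the downside is that even trivial arithmetic becomes a chain of method calls.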
The arbitrary precision may cause your bignums to balloon out of control in memory and CPU usage under repeated multiplication, unless you can prove they won’t. For example:
double x = 1;
while (true) {
    x *= Math.exp((Math.random() * 2 - 1) / 1000);
}
would work perfectly fine with doubles, and x would stay around 1, but the BigDecimal equivalent would grind to a halt.
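To see why, here is a sketch of the BigDecimal version, bounded to a few hundred iterations so it actually terminates. Each factor from Math.exp carries around 16–17 significant digits, and exact multiplication keeps all of them, so the precision grows with every iteration:

```java
import java.math.BigDecimal;

public class BalloonDemo {
    public static void main(String[] args) {
        BigDecimal x = BigDecimal.ONE;
        for (int i = 0; i < 200; i++) {
            double factor = Math.exp((Math.random() * 2 - 1) / 1000);
            // No MathContext: the product keeps every digit of
            // both operands, so precision grows on each pass.
            x = x.multiply(BigDecimal.valueOf(factor));
        }
        // Still numerically close to 1, but already thousands
        // of digits wide; each further multiply gets slower.
        System.out.println(x.precision());
    }
}
```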
Interesting. Is there a way to set a limit to the size of individual instances?
Give it a MathContext with the max precision that you want to allow.
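For example, reusing the loop shape from above (a sketch; 50 digits is an arbitrary choice):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class ContextDemo {
    public static void main(String[] args) {
        // Round every result to at most 50 significant digits.
        MathContext mc = new MathContext(50);
        BigDecimal x = BigDecimal.ONE;
        for (int i = 0; i < 200; i++) {
            double factor = Math.exp((Math.random() * 2 - 1) / 1000);
            // Passing the MathContext caps the precision of
            // each product, so x can no longer balloon.
            x = x.multiply(BigDecimal.valueOf(factor), mc);
        }
        System.out.println(x.precision());  // never exceeds 50
    }
}
```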
Thanks, that doesn’t sound so bad.