BigDecimal vs floating points...
Hi all,
I know it's probably been asked a million times before, but I need to finally get my head around the two.
Firstly, here are some things I've been told by different people and read in different places (a lot of people seem to say different things, which is what confuses me):
- I've read that if you want precision, for currency for example, floating point shouldn't be used because it can't represent every decimal number exactly.
- Then some people have told me that it doesn't matter and there's not much point in BigDecimal most of the time; all you need to do is correct the floating point value with formatting (the sketch after this list shows what I understand by that).
- I've asked about this before, but people just seem to give a short answer without actually explaining why or where they get it from; you can't just assume an answer based on nothing...
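To make the first two points concrete, here's the kind of small test I've been playing with (the class name is just for illustration); it shows the representation issue and what I take people to mean by "correcting it with formatting":

import java.math.BigDecimal;

public class FloatVsDecimal {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation, so the double sum drifts.
        double d = 0.1 + 0.2;
        System.out.println(d);            // 0.30000000000000004
        System.out.println(d == 0.3);     // false

        // Formatting only hides the error at display time; the stored value is still off.
        System.out.printf("%.2f%n", d);   // 0.30

        // The double constructor captures the inexact binary value...
        System.out.println(new BigDecimal(0.1));   // 0.1000000000000000055511151231257827...

        // ...whereas the String constructor keeps the decimal value exact.
        BigDecimal exact = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(exact);        // 0.3
    }
}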
I'm building some engineering software that needs a general accuracy of 3 decimal places (millimeters within meters), and my first thought is that if currency at 2 decimal places requires BigDecimal then surely I require it too (I can't afford to be dropping millimeters on every calculation, and there are a lot of them!). The problem is that this has led me to build pretty much the whole application with BigDecimal, which, as you can imagine, raises concerns about performance and memory usage: I do calculations with BigDecimal, store data in BigDecimal, and in fact the only thing I do in double is the graphical display, as the accuracy there isn't so important.
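For instance, here's a rough sketch of the kind of accumulation I'm worried about (the numbers are made up, it just illustrates the pattern of adding lots of millimeter-sized values):

import java.math.BigDecimal;

public class MillimeterSum {
    public static void main(String[] args) {
        // Add one millimeter (0.001 m) a hundred thousand times.
        double doubleTotal = 0.0;
        BigDecimal decimalTotal = BigDecimal.ZERO;
        BigDecimal oneMm = new BigDecimal("0.001");

        for (int i = 0; i < 100000; i++) {
            doubleTotal += 0.001;
            decimalTotal = decimalTotal.add(oneMm);
        }

        System.out.println(doubleTotal);   // slightly off from 100.0, since each 0.001 is inexact
        System.out.println(decimalTotal);  // exactly 100.000
    }
}

Whether that last fraction of a millimeter ever matters once I round to 3 decimal places is exactly what I can't decide.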
My last question is: if this is an OK way to build an accurate application, it makes me wonder why floating point is used so much more than BigDecimal, since surely most numbers need to be accurate in applications, especially at enterprise scale?
Thanks,
Ken