I understand that BigDecimal is the most accurate way to express currency, because binary floating-point types cannot represent most decimal fractions (such as 0.10) exactly, which leads to rounding errors. However, I also understand that BigDecimal calculations require more memory. That said, is using BigDecimal instead of float or double really the best practice for programs that deal with currency values? If I make a program that prints off an itemized receipt for each order at a restaurant, am I more likely to run out of memory if I use BigDecimal, or more likely to get rounding errors if I use floating-point values instead?
(Note: “What to do with Java BigDecimal performance?” is a slightly similar question, but I am more concerned with the least risky option for a relatively simple fast food transaction.)
Unless your program is dealing with millions of BigDecimals at a time, you won’t really notice a difference in memory consumption.
And if you do run a service with that kind of throughput, you can surely afford an extra gigabyte of RAM rather than risk lawsuits over incorrect calculations ;-).
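The rounding-error risk, on the other hand, shows up immediately even in tiny programs. Here is a minimal sketch contrasting the two approaches: summing ten dimes with `double` accumulates binary rounding error, while `BigDecimal` (constructed from a `String`, not from a `double`, so the value starts out exact) stays precise. The class name is just for illustration.

```java
import java.math.BigDecimal;

public class ReceiptDemo {
    public static void main(String[] args) {
        // double: 0.10 has no exact binary representation, so the
        // error accumulates with every addition
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.10;
        }
        System.out.println(total);        // prints 0.9999999999999999, not 1.0

        // BigDecimal: constructed from a String, each value is an exact
        // decimal, so the sum is exact as well
        BigDecimal bdTotal = BigDecimal.ZERO;
        BigDecimal dime = new BigDecimal("0.10");
        for (int i = 0; i < 10; i++) {
            bdTotal = bdTotal.add(dime);
        }
        System.out.println(bdTotal);      // prints 1.00
    }
}
```

Note the `new BigDecimal("0.10")` form: `new BigDecimal(0.10)` would bake the double's rounding error right into the BigDecimal, defeating the purpose.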