You can use `int` and consider everything in cents: $1.20 is just 120 cents. At display time, you put the decimal point where it belongs.

Interest calculations would just be either truncated or rounded to the nearest cent. So

```
newAmt = round( 120 cents * 1.04 ) = round( 124.8 ) = 125 cents
```

This way you don’t have messy decimals always sticking around. (You could even get rich by adding the unaccounted-for money from the round-downs into your own bank account.)
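The all-cents approach above can be sketched in Java (the variable names are illustrative, not from the original answer):

```java
// Store money as whole cents in a long; convert only when displaying.
long balanceCents = 120;                           // $1.20
long newBalance = Math.round(balanceCents * 1.04); // 124.8 -> 125 cents
System.out.printf("$%d.%02d%n", newBalance / 100, newBalance % 100); // $1.25
```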

Okay, I’ll jump in.

## My advice: it’s a game. Take it easy and use `double`

Here is my rationale:

- `float` does have a precision issue that appears when adding single units to millions, so even though it might be benign here, I would avoid that type.
- `double` only starts having problems around the quintillions (a billion billions).
- Since you are going to have interest rates, you would need theoretically infinite precision anyway: with a 4% interest rate, $100 becomes $104, then $108.16, then $112.4864, etc. This makes `int` and `long` useless because you don’t know where to stop with the decimals.
- `BigDecimal` will give you arbitrary precision but will become painfully slow, unless you clamp the precision at some point. What are the rounding rules? How do you choose where to stop? Is it worth having more precision bits than `double`? I believe not.
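To see why interest makes any fixed number of decimals awkward, here is a quick compounding sketch in Java using the rate and amount from the bullet above:

```java
// $100 at 4% yearly interest: the exact value needs more decimals every year.
double balance = 100.0;
for (int year = 1; year <= 3; year++) {
    balance *= 1.04;
    System.out.printf("Year %d: $%.4f%n", year, balance);
}
// Year 1: $104.0000, Year 2: $108.1600, Year 3: $112.4864
```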

The reason fixed-point arithmetic is used in financial applications is that it is deterministic. The rounding rules are perfectly defined, sometimes by law, and must be strictly applied, **but rounding still happens** at some point. Any argument in favour of a given type based on precision is likely bogus. All types have precision issues with the kind of computations you are going to do.

## Practical examples

I see quite a few comments claiming things about rounding or precision that I disagree with. Here are a few additional examples to illustrate what I mean.

**Storing**: if your base unit is the cent, you’ll probably want to round to the nearest cent when storing values:

```
// assumes #include <cmath> for std::round
void SetValue(double v) { m_value = std::round(v * 100.0) / 100.0; }
```

You will get absolutely no rounding problems when adopting this method that you wouldn’t also have with an integer type.

**Retrieving**: all computations can be done directly on the double values, with no conversions:

```
double value = data.GetValue();
value = value / 3.0 * 12.0;
[...]
data.SetValue(value);
```

Note that the above code does not work if you replace `double` with `int64_t`: the value returned by `data.GetValue()` would be implicitly converted to `double` for the computation, then *truncated* back to `int64_t` on assignment, with a possible loss of information.
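A Java analogue of that hazard, with `long` standing in for `int64_t` (the values are hypothetical):

```java
double computed = 120 * 1.04;        // 124.80000000000001
long truncated = (long) computed;    // the cast truncates: 124
long rounded = Math.round(computed); // rounding to nearest keeps the cent: 125
```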

**Comparing**: comparisons are the one thing to get right with floating-point types. I suggest using a comparison method such as this one:

```
/* Are the values equal to within a tenth of a cent? (assumes #include <cmath>) */
bool AreCurrencyValuesEqual(double a, double b) { return std::fabs(a - b) < 0.001; }
```
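The classic case motivating an epsilon comparison, shown here in Java for brevity:

```java
double a = 0.1 + 0.2;  // 0.30000000000000004
double b = 0.3;
System.out.println(a == b);                  // false
System.out.println(Math.abs(a - b) < 0.001); // true
```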

## Rounding fairness

Suppose you have $9.99 in the account with 4% interest. How much should the player earn? With integer truncation you get $0.39; with floating-point rounding you get $0.40. I believe the latter is fairer.
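Working through that arithmetic in Java (cents-based integer math versus rounded floating point):

```java
long cents = 999;                          // $9.99
long intEarned = cents * 4 / 100;          // integer division truncates 39.96 to 39 cents
long fpEarned = Math.round(cents * 0.04);  // 39.96 rounds to 40 cents
```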

Floating-point types in Java (`float`, `double`) are not a good representation for currencies for one main reason – *rounding error* inherent to their binary representation in memory. Even a simple calculation can return an inexact result: `0.1 + 0.2` yields `0.30000000000000004` rather than `0.3`, because neither operand has an exact binary representation. Usually it is not a problem to operate with these values, since the error is quite negligible, but it is a pain to display them to the user.

A possible solution is to use an arbitrary-precision decimal type such as `BigDecimal`. It supports exact decimal calculations, which at least makes the rounding behaviour explicit and well defined, but it is slower in terms of performance.
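For instance, `BigDecimal` constructed from strings gives the exact decimal result where `double` does not:

```java
import java.math.BigDecimal;

// double accumulates binary representation error; BigDecimal does not.
System.out.println(0.1 + 0.2);                                      // 0.30000000000000004
System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3
```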

If you need high performance, you’d better stick to the simple types. In case you operate with *important financial data*, where **each cent** matters (like a Forex application, or some casino game), then I’d recommend `long` (or its boxed `Long`). It lets you handle large amounts with good precision: if you need, let’s say, 4 digits after the decimal point, all you have to do is multiply the amount by 10,000. Having experience in developing online casino games, I’ve often seen `long` used to represent money in cents. In Forex applications the precision requirements are higher, so you’ll need a greater multiplier – still, whole numbers are free of machine rounding issues (though the rounding of divisions like `3/2` you must handle yourself).
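The fixed-scale `long` scheme described above might look like this (the names and the half-up rounding rule are my own choices, not from the answer):

```java
final long SCALE = 10_000L;          // 4 digits after the decimal point
long amount = 100 * SCALE;           // $100.0000 stored as 1_000_000
long divisor = 3;
// Manual half-up rounding when dividing, e.g. splitting three ways:
long third = (amount + divisor / 2) / divisor;  // 333_333, i.e. $33.3333
```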

An acceptable option is to use the standard floating-point types – `Float` and `Double` – if performance is more important than accuracy to hundredths of a cent. Then, in your display logic, all you need is a **predefined formatting**, so that the ugliness of potential machine rounding never reaches the user.
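Predefined formatting in Java can be as simple as `String.format`, which hides the representation noise (`Locale.ROOT` is used here to pin the decimal separator):

```java
import java.util.Locale;

double ugly = 0.1 + 0.2;  // 0.30000000000000004
System.out.println(String.format(Locale.ROOT, "%.2f", ugly));  // 0.30
```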