Float vs Decimal in Python

Both the float and decimal types store numeric values in Python, and at first, choosing between them can be confusing. Python’s decimal documentation is a good starting point for learning when to use decimals. Generally, decimals exist in Python to solve the precision issues of floats.


Use floats when convenience and speed matter. A float gives you an approximation of the number you declare. For example, if I print 0.1 with 18 decimal places, I don’t actually get 0.1 but instead an approximation.

>>> print(f"{0.1:.18f}")
0.100000000000000006

Similarly, when doing operations such as addition with floats, you get an approximation, which can lead to confusing code like the following.

>>> .1 + .1 + .1 == .3
False
>>> .1 + .1 + .1
0.30000000000000004

Intuitively, the addition makes sense, and at a glance, you expect the statement to be true. However, because of the float approximation, it turns out to be false. This demonstrates one of the big issues with floats: the lack of reliable equality testing. To fix this equality test without decimals, we need to use rounding.

>>> round(.1 + .1 + .1, 10) == round(.3, 10)
True
>>> round(.1 + .1 + .1, 10)
0.3

In this case, we round the floats to prevent any precision issues. If you find yourself using floats and rounding frequently in your codebase, this indicates that it’s time to use decimals.
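If you only need tolerant comparisons rather than exact values, the standard library’s `math.isclose` is an alternative to manual rounding; a minimal sketch:

```python
import math

# math.isclose compares floats within a tolerance,
# avoiding a brittle == comparison on approximated values.
total = 0.1 + 0.1 + 0.1
print(total == 0.3)              # False: exact comparison fails
print(math.isclose(total, 0.3))  # True: tolerant comparison passes
```

This sidesteps the equality problem, but it treats the symptom; decimals address the cause.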


Use decimals when precision matters, such as in financial calculations. Decimals can suffer from their own precision issues, but they are generally more precise than floats. In Python 3, the performance difference between float and decimal is not outlandish, and in my experience, the precision benefits of decimals outweigh the performance benefits of floats.
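To see the difference in a finance-style calculation, here is a small sketch (the ten-payment scenario is just an illustration) that sums ten 10-cent payments:

```python
from decimal import Decimal

# Summing ten 10-cent payments: floats drift, decimals stay exact.
float_total = sum([0.1] * 10)
decimal_total = sum([Decimal("0.1")] * 10)

print(float_total)                    # 0.9999999999999999
print(decimal_total)                  # 1.0
print(float_total == 1.0)             # False
print(decimal_total == Decimal("1"))  # True
```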

Let’s look at the previous examples with decimals instead of floats.

>>> from decimal import Decimal
>>> print(f"{Decimal('0.1'):.18f}")
0.100000000000000000
>>> Decimal('.1') + Decimal('.1') + Decimal('.1') == Decimal('.3')
True

Using decimals in these examples prevents the subtle bugs introduced by floats. Notice that the decimals are initialized with strings. Initializing a decimal from a float instead reintroduces precision issues.

>>> from decimal import Decimal
>>> Decimal(0.01) == Decimal("0.01")
False

In this example, we expect these decimals to be equal, but, because of the precision issues with floats, this decimal equality test returns false. If we look at each of these decimals, we’ll see why.

>>> Decimal(0.01)
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
>>> Decimal("0.01")
Decimal('0.01')

The decimal declared as a float is not technically 0.01, which results in an equality test of false. All decimals should be initialized using strings to prevent precision issues. If decimals aren’t initialized with strings, we lose some of the precision benefits of decimals and create subtle bugs.
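Once decimals are initialized from strings, you can also make rounding explicit rather than implicit; a sketch using `Decimal.quantize` (the two-decimal currency pattern here is just an illustration):

```python
from decimal import Decimal, ROUND_HALF_UP

# quantize rounds to a fixed exponent with an explicit rounding mode.
price = Decimal("2.675")
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68

# The float equivalent rounds "wrong" because 2.675 is stored as a
# slightly smaller binary approximation.
print(round(2.675, 2))  # 2.67
```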


Edited 2021-01-15, credit Michael Amrhein

Michael points out that decimals suffer from their own precision issues with this example.

>>> from decimal import Decimal
>>> Decimal('1') / Decimal('3') * Decimal('3') == Decimal('1')
False
>>> Decimal('1') / Decimal('3') * Decimal('3')
Decimal('0.9999999999999999999999999999')

Michael also points out that the float equivalent does not run into precision issues.

>>> (1.0 / 3.0) * 3.0 == 1.0
True

Decimals have their own hidden rounding that causes precision issues, and to eliminate this hidden rounding, you need to use Python’s fractions module.

>>> from fractions import Fraction
>>> Fraction('1') / Fraction('3') * Fraction('3') == Fraction('1')
True
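Fractions avoid the hidden rounding because they store an exact numerator and denominator; a minimal sketch:

```python
from fractions import Fraction

third = Fraction(1, 3)  # stored exactly as numerator/denominator
print(third.numerator, third.denominator)  # 1 3

# Arithmetic on fractions stays exact, so no rounding ever occurs.
print(third * 3)                          # 1
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
```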

The fractions module provides support for rational number arithmetic. Once again, to avoid precision issues, initialize fractions with strings.

>>> Fraction(1.1) == Fraction("1.1")
False
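A fraction built from a float captures the float’s exact binary value, which is why the comparison above is false. `Fraction.limit_denominator` can recover the intended value; a sketch:

```python
from fractions import Fraction

# Fraction(1.1) captures the exact binary approximation of 1.1 ...
print(Fraction(1.1))                      # 2476979795053773/2251799813685248
# ... while limit_denominator finds the closest "simple" fraction.
print(Fraction(1.1).limit_denominator())  # 11/10
print(Fraction("1.1"))                    # 11/10
```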

Final Thoughts

For most use cases, I recommend using decimals. If you initialize them with strings, you prevent subtle bugs and get the increased precision benefits. If you need to eliminate all subtle rounding issues, use the fractions module. Even though floats perform better than decimals, I recommend avoiding floats.

Steven Pate

Senior Software Engineer with writing and people skills specializing in Python based solutions