They were all educated at prestigious universities specializing in computer science.

However, the good old fundamentals seem to have vanished from the courses they took, and from the coding field in general.

What is a float number?

A binary number with a given number of bits allowed before the binary point and a given number after it.

For example:

110.0

-0.00001

The former converts to decimal as 4+2 = 6,

while the latter converts to decimal as -1/32, i.e. -0.03125.

The integer part is fine in both directions: 6 <-> 110.
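Both conversions can be checked directly in, say, Python, where int with base 2 handles the integer part:

```python
# Integer part: binary 110 is 4 + 2 = 6.
print(int("110", 2))   # 6

# Fractional example: -0.00001 in binary is a single bit five
# places after the point, i.e. -1 * 2**-5.
print(-1 * 2**-5)      # -0.03125, exactly -1/32
```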

But the fractional part (the part less than 1) has a natural inaccuracy.

In decimal, a finite fraction is some integer multiple of a power of 1/10, e.g.

0.4

or 0.44

But you will find it is often impossible to transform such a fraction into a finite sum of powers of 1/2:

0.4 = 0.25 + 0.125 + (0.025 left over) = 0.25 + 0.125 + 0.015625 + 0.0078125 + (0.0015625 left over) = ...

-> 0.01100110011... (repeating forever)
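The expansion above can be reproduced mechanically: doubling a fraction shifts the binary point one place right, so the integer part that pops out after each doubling is the next binary digit. A minimal sketch in Python (the function name is mine):

```python
from fractions import Fraction

def binary_fraction(x, digits):
    """Expand a fraction 0 <= x < 1 into its first binary digits."""
    bits = []
    for _ in range(digits):
        x *= 2
        bit = int(x)       # 1 if the doubling carried past 1, else 0
        bits.append(str(bit))
        x -= bit
    return "0." + "".join(bits)

# Use Fraction(2, 5) for an exact 0.4, so the digits are the true expansion:
print(binary_fraction(Fraction(2, 5), 12))  # 0.011001100110
```

The 0110 pattern repeats no matter how many digits you ask for, which is exactly why 0.4 can never be stored exactly in binary.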

Of course, the computer will simply cut the expansion off after some number of digits past the point.

Thus if you type the float 0.4 into the computer, it looks like a normal 0.4,

but it will show its inaccuracy at the long end if you multiply it by a simple integer, say 3.

The programming language knows the value is generally inaccurate, so it usually rounds off the last digits before presentation.

But once the multiplication magnifies the discrepancy past that rounding, the language tells the little secret.
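You can watch the secret slip out in, say, Python (any language using IEEE 754 doubles behaves the same way):

```python
x = 0.4
print(x)            # 0.4 — the printed form is rounded, so it looks exact
print(x * 3)        # 1.2000000000000002 — multiplying magnifies the error
print(f"{x:.20f}")  # 0.40000000000000002220 — the stored value was never 0.4
```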

The solution?

Almost all programming languages and traditional databases have a "decimal" number type.

A "decimal" number is actually two - the integer of the the number string, and the number of figures after the decimal point.

3.14 = Dec(314,2)

Thus the decimal type is perfect for financial calculations: store, add, subtract, or multiply with no surprises. (Division, of course, can still produce infinite expansions in any base.)
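Python's decimal module is one concrete implementation of this idea; the tuple it stores is essentially the Dec(314, 2) pair from above, with the sign broken out separately:

```python
from decimal import Decimal

price = Decimal("3.14")   # stored as the digits 314 and exponent -2
print(price * 3)          # 9.42 — exact, no binary round-off
print(Decimal("0.4") * 3) # 1.2 — the float example above, now exact
print(price.as_tuple())   # DecimalTuple(sign=0, digits=(3, 1, 4), exponent=-2)
```

Note that the literals are passed as strings: Decimal(0.4) would inherit the float's binary inaccuracy before the decimal type ever sees it.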