# Fail with qlonglong

• Hello,

Can anyone please check my issue?

```cpp
double gross = 9.12;
qlonglong temp = QString::number( gross, 'f', 2).toDouble() * 100;
```

temp is always 911.

Qt 5.6.1, Ubuntu 16.10

Thanks, Chris

• @ckvsoft Using floating-point numbers brings nothing but double trouble. Avoid them if you don't know how they work and how to use them properly. Instead, use integers: 9.12 becomes 912, for example. Select milli, micro, nano, or whatever fits your needs as the base, so that you never need a decimal point except when making a user-readable string out of it.

If you for some reason can't or don't want to use integers and need reliable floating-point calculations at the cost of performance, use a proper library, e.g. http://www.boost.org/doc/libs/1_63_0/libs/multiprecision/doc/html/boost_multiprecision/intro.html (I haven't used it, so I don't know how well it works).

• I think I have fixed it.

I need the double to calculate some amounts; temp will be added to a long-term counter.

```cpp
qlonglong temp = QString::number( gross * 100, 'f', 0).toLongLong();
```

Regards, Chris

• @Eeli-K said in Fail with qlonglong:

Using floating point numbers brings nothing but double trouble. [...] Instead, use integers.

It may sound brusque, but that's the most ridiculous thing I've read in recent times.

If you for some reason can't or don't want to use integers and need reliable floating-point calculations at the cost of performance, use a proper library, e.g. http://www.boost.org/doc/libs/1_63_0/libs/multiprecision/doc/html/boost_multiprecision/intro.html (I haven't used it, I don't know how well it works).

Which has no relation to this problem whatsoever. And piling on a heavy-weight arbitrary-precision arithmetic library without a (very) good reason is a dubious decision just by itself.

• @kshegunov I appreciate your opinion, and apparently machine-level floating points exist for a reason. But I said "Avoid them if you don't know how they work and how to use them properly", and I don't see how that could be bad advice. In many cases they can be replaced with integers, and the heavy-weight libraries also exist for a reason. In many cases changing from a shorter to a longer floating-point type merely hides a problem. All of this of course depends on the case, so my advice should have been less confident. In any case, if 9.12*100 gives 911 it tells that conversions between decimal floating point and binary floating point are lossy, right? And an arbitrary-precision library would give a correct answer, right? And using integers and formatting them for printing would have given a correct answer, right? And changing to a longer precision gives correct results in some cases but doesn't remove the problem in many cases, right?

• @Eeli-K said in Fail with qlonglong:

In any case, if 9.12*100 gives 911 it tells that conversions between decimal floating point and binary floating point are lossy, right?

All floating points are lossy due to truncation, no matter the base they're represented in.

In any case, if 9.12*100 gives 911 it tells that conversions between decimal floating point and binary floating point are lossy, right?

No, because in this particular case you have 9.11xxxxx rounded up to the second digit when formatting, then converted back to double (again 9.11xxxxx), multiplied by 100 (911.xxxxx), and then truncated when assigned to the integer.

And an arbitrary-precision library would give a correct answer, right?

If it uses base 2 representation internally, which it should anyway, no it wouldn't.

And using integers and formatting them for printing would have given a correct answer, right?

Multiplying before truncating (as in the second post) instead of truncating before multiplying (as in the first post) would give the correct answer. Fixed point arithmetic (what you're suggesting) has its uses, but in very specific circumstances ... and not here. One of the greatest drawbacks of fixed point arithmetic is that it doesn't scale well - i.e. the point is fixed and the dynamic range is abysmal (read as nonexistent).

And changing to a longer precision gives correct results in some cases but doesn't remove the problem in many cases, right?

If you have a numerically bad algorithm, all the arbitrary precision in the world can't help you. A typical example is a long summation. You could use a 512-bit floating-point representation and get no better result than the native double implementation if you settled on the naïve loop-and-sum approach. For that reason there are piles upon piles of books (with algorithms) on how to write computational software and numerically stable algorithms.

• Some explanation of how floating-point numbers work:

You assume that your double variable is strictly equal to 9.12. That's wrong! Look at this:

```cpp
double num = 9.119;
qDebug() << QString::number(num, 'f', 2);
```

It prints 9.12, and that is the correct value with a precision of two.

Now if I do:

```cpp
double num = 9.12;
long l = (QString::number(num, 'f', 2).toDouble()) * 10000;
qDebug() << l;
```

the result is 91199.

Confused?

The most important thing to understand about floating-point numbers is that the internal value is in an indeterminate state until you show it (a bit like in quantum physics).

When you print a floating-point number, the result will be the best number rounded to the precision you specify. When you convert a float to a long, the result may be wrong, because floats are always approximate values.

• @kshegunov said in Fail with qlonglong:

All floating points are lossy due to truncation, no matter the base they're represented in.

Arbitrary precision libraries keep as many digits as you give them if there's enough memory. Doing calculations may lead to loss. Not all results (like 1/3) can be represented exactly with a given base.

In any case, if 9.12*100 gives 911 it tells that conversions between decimal floating point and binary floating point are lossy, right?

No, because in this particular case you have 9.11xxxxx truncated to the second digit and then reconverted back to double, multiplied by 100 and then truncated again.

Ouch! I should have read more carefully. Not doing so led to this awkward situation, where I really deserved your brusque correction. I automatically presumed it was the "normal" floating-point problem, but now it looks stupid to think that imprecision in the second decimal could have been due to the base-2/base-10 mismatch.

And an arbitrary-precision library would give a correct answer, right?

If it uses base 2 representation internally, which it should anyway, no it wouldn't.

I understand that converting decimal numbers first to base 2 leads to different results than doing the calculation directly in base 10 (like moving the decimal point when dividing by 10). But why "it should anyway"? Well, this is a pretty technical discussion and isn't Qt-related, so I don't expect an answer, although I'm ready to learn.

Multiplying before truncating (as in the second post) instead of truncating before multiplying (as in the first post) would give the correct answer. Fixed point arithmetic (what you're suggesting) has its uses, but in very specific circumstances ... and not here. One of the greatest drawbacks of fixed point arithmetic is that it doesn't scale well - i.e. the point is fixed and the dynamic range is abysmal (read as nonexistent).

So, I think we at least agree that each strategy serves some purpose. Now I understand better that none of them should be used uncritically.

• @Eeli-K said in Fail with qlonglong:

Arbitrary precision libraries keep as many digits as you give them if there's enough memory. Doing calculations may lead to loss. Not all results (like 1/3) can be represented exactly with a given base.

Not all digits are created equal, so to speak. Each operation you do carries an error (at least one machine epsilon for addition and multiplication), so doing additions repeatedly accumulates that error; so much so that after enough additions, most of the digits you have are meaningless. Additionally, converting from `double` to an arbitrary-precision floating point gives you at most the number of significant digits the double has, as you have no idea what should be written beyond the double's precision.

I'll go back to addition, as the simplest and most basic of operations, to illustrate. Suppose you have a floating-point format (in base 10) that gives you 3 digits to work with. This would mean the epsilon (the smallest possible "step" between numbers, leaving aside the scaling the exponent introduces) is about 0.001.

So you start with 0 and add up 10 numbers in the range between 0 and 1, getting a number between 0 and 1 as the result (so as not to worry about the exponent in the floating point, just for simplicity). The total error is 10 epsilons, or in this case 0.01. Ultimately, the last digit of your result is meaningless, irrespective of whether you have the memory to keep and represent it; it just gets eaten up by the error accumulation. So even if you get 0.546, the 0.006 part doesn't carry any information anymore.

For that reason, when doing long summations (as in many numerical computations), one has to use a numerically stable algorithm.