# QMatrix4x4 * QVector3D precision is not enough, how to improve it?

• I found that after I scale and rotate a 3D object many times and then recalculate its bounds, the values are never the same at the same place; there is an error of about 0.00001. How can I improve the precision?

• have an error of about 0.00001

• how to improve the precision

Hi,
I've never used QMatrix4x4, but from a look at its documentation and source code, internally everything seems to be done in single-precision floating point (i.e. `float`) rather than double precision (the `double` type), so I'm afraid you are bound to get rather poor precision after performing many scaling/rotation operations in succession.

Based on the original source code, you could implement the rotate and scale functions yourself, performing everything in double precision.

• Do you have open-source code for doing this? I want to replace the Qt scale, rotate (by QQuaternion), and move operations.

• @jimfar
Google for Quaternion + library. Plenty of stuff there it seems.

• internally everything seems to be done in single-precision floating point (i.e. float) rather than double-precision (double type)

Until a few years back, video cards only supported single-precision floats, with the exception of cards made specifically for HPC applications, so there was (and still is) no real reason for the class to use double precision. Furthermore, multiple transformations in sequence will always accumulate error, no matter whether you use floats, doubles, or even quadruple precision.

• Yes, multiplying transformations in sequence will always accumulate error, but the magnitude of the error differs: VTK uses double and its error is smaller than Qt's for the same operations.

• multiple transformations in sequence will always accumulate error, doesn't matter if you use floats or doubles, or even quadruple precision

As @jimfar wrote, you can reduce the magnitude of the accumulated error by using higher-resolution/range number representations, assuming that the algorithms/equations you use are not ill-conditioned.

• Thanks to you both for your attention. I have already submitted a suggestion for improving the precision to https://bugreports.qt.io/, and I now use the GLM library. I will also take the libraries beecksche suggested into account.

• but the magnitude of the error differs: VTK uses double and its error is smaller than Qt's for the same operations

As @jimfar wrote, you can reduce the magnitude of the accumulated error by using higher-resolution/range number representations,

You are wrong.

assuming that the algorithms/equations you use are not ill-conditioned.

Matrix multiplication (and summation, which is what matrix multiplication boils down to) is numerically unstable, unless special care is taken. A bound on the input error of the matrix elements does not translate to a bound on the output error.

• You are wrong

From now on, let's do everything in float then. Why did they even bother inventing double precision, I wonder... I'm sure that at CERN they only use floats for their matrix and tensor products.

I don't want to go into the details of numerical precision and stability and of how subtracting two almost identical values can lead to large errors etc... but you can improve such computations using tricks indeed. And once you apply those tricks, you do get better precision by using better resolution number formats.
http://www.oishi.info.waseda.ac.jp/~oishi/papers/OgRuOi05.pdf
The question then is of course if the precision gain you get using doubles is relevant or not for the targeted application.
Anyway... this is a Qt site, not a math forum.

@kshegunov by the way: instead of telling people they're wrong, maybe it is more helpful to help with the question at hand, given your extensive expertise...

• I'm sure that at CERN they only use floats for their matrix and tensor products.

I could ask my colleagues, if you're interested.

I don't want to go into the details of numerical precision and stability and of how subtracting two almost identical values can lead to large errors etc...

Kinda have to, as this is the original question.

but you can improve such computations using tricks indeed. And once you apply those tricks, you do get better precision by using better resolution number formats.

These are by no means tricks. Numerical analysis is a science in itself, but it does not depend on the width of the floating-point mantissa. You get better precision by using an appropriate algorithm, not by just extending the floating-point format. Have you wondered why, in the article you duly sourced, so much time is dedicated to the Kahan compensated summation and its related algorithms and not so much to whether single or double precision is used?

Anyway... this is a Qt site, not a math forum.

Indeed, but the topic is relevant to both.

instead of telling people they're wrong, maybe it is more helpful to help with the question at hand, given your extensive expertise...

I hinted at what a correct approach would be, but I could've elaborated, I'll grant you that.

@jimfar, your problem stems from the following fact:
Each matrix multiplication accumulates roughly N times the per-element error (here N = 4, the dimension of the matrix). Do that M times and the total error in each element is on the order of `M * N` units of the base precision. From this rudimentary observation I could even estimate how many times you applied the transformation: if `0.00001` is the relative error, roughly 25 times.

What you should do is either one of two things:

1. Provide a matrix multiplication that's stable, which is somewhat involved as it'd require you to implement compensated summations, and to keep the compensations for each element of the matrices.

OR

2. Save the original orientation/position and accumulate the changes (i.e. rotations around x, y, z, translations and so on) in a numerically stable way (again, think stable accumulators), and only at the end construct the transformation matrix and apply it. A word of warning here: matrices in general do not commute, so order matters.

• I could ask my colleagues, if you're interested.

Are you actually on site? I used to live close by, and a friend of mine did his doctoral thesis there, maybe you crossed paths :-)

• Are you actually on site?

No, I work in low energy nuclear, but I have colleagues both from uni and the institute that are doing the CERN dance (mainly with the CMS).

• @J-Hilk Sorry for the confusion~~

• @Diracsbracket
wrong @ target, I just quoted @kshegunov
:)

• @kshegunov

I could ask my colleagues, if you're interested

Yes please, ask if they use floats a lot, that would be fun to know ^^. Since @jimfar is interested in greater precision, maybe they would be so kind as to explain how they do such matrix operations at high precision...

These are by no means tricks.

Well, to me they are, as they are just tools to get the actual work done.

You get better precision by using an appropriate algorithm, not by just extending the floating point format. Have you wondered why in the article you duly sourced, so much time is dedicated on the Kahan compensated summation and its related algorithms and not so much on whether a single or double precision is used?

My "tricks" also included those algorithms, if you read my post carefully.

matrices in general do not commute so order matters.

Thanks for this reminder of very basic matrix arithmetic.

You always seem to want to have the last word, so I will gladly grant it to you: you are right! And I'm an idiot (that part is really true). I hope this closes the matter and that I can go back and try to learn some QML from this forum ;-)

• Thanks to you both for your help. Using GLM has met my accuracy needs; the results are even better than with VTK.