Qt3D: Co-ordinate system transform - where should it be performed?
As part of my university software project, building a 3D game level editor, I will be working with 3D geometry in a different co-ordinate space from OpenGL's. In my left-handed editor space, if positive X is right then positive Y is forwards and positive Z is up; this is in contrast to OpenGL's right-handed system, where positive X is right, positive Y is up and positive Z is backwards. I have a matrix to convert between these two co-ordinate systems (editor to OpenGL shown below):
@QMatrix4x4( 1.0,  0.0, 0.0, 0.0,
             0.0,  0.0, 1.0, 0.0,
             0.0, -1.0, 0.0, 0.0,
             0.0,  0.0, 0.0, 1.0 );@
I understand that in Qt3D, using @painter.modelViewMatrix()@ will give the modelview matrix stack. However, at what point is it best to apply this matrix? Should I convert editor object co-ordinates to OpenGL co-ordinates immediately upon rendering; should I only apply the matrix after everything else, including the camera, has been computed; or somewhere in between? I would like the user to interact with geometry solely in editor space, without having to worry about any conversions whatsoever, so this leads me to prefer applying the transformation right at the end of the rendering process, just before the perspective divide.
However, I also have the following to consider. Since I am using Qt3D, the QGLCamera works in OpenGL co-ordinate space. I would like to avoid writing another camera class if I can help it, given that QGLCamera already exists, but I'm not sure whether it's possible to transform the QGLCamera matrix, since QGLPainter seems to handle it all automatically after setCamera() is called. My application also requires that the user be able to set the position and orientation of in-editor objects from the current viewport camera's position and orientation, and they should be able to remain within editor co-ordinate space to do this. It would therefore be much simpler if the camera operated in editor space instead of OpenGL space, as that makes user manipulation of the camera much more straightforward.
It's a similar story for lights: a useful feature would be the ability to simulate light from light objects placed within the editor world, but once again Qt3D's lights are handled internally by QGLPainter and are manipulated in OpenGL co-ordinates.
So, in a nutshell: is there a way, using QGLPainter, to treat all the QGL* classes I'm using as operating in editor co-ordinate space? I tried transforming the modelview matrix and pushing the stack immediately after painter.begin(), but this caused the lighting in the scene to flicker rapidly between the two different light directions, presumably because of QGLPainter's behind-the-scenes calculations.
If anyone has any information on the matter it would be much appreciated. Thanks.