Qt3D: Coordinate system transform - where should it be performed?

As part of my university software project, a 3D game level editor, I will be working with 3D geometry in a different coordinate space from OpenGL's. In left-handed editor space, if positive X is right, then positive Y is forwards and positive Z is up; this is in contrast to OpenGL's right-handed system, where positive X is right, positive Y is up and positive Z is backwards. I have a matrix to convert between these two coordinate systems (editor to OpenGL shown below):
@QMatrix4x4( 1.0, 0.0, 0.0, 0.0,
             0.0, 0.0, 1.0, 0.0,
             0.0, 1.0, 0.0, 0.0,
             0.0, 0.0, 0.0, 1.0 );@

I understand that in Qt3D, using
@painter.modelViewMatrix()@
will give the modelview matrix stack. However, where is the best point to apply this matrix? Should I convert editor object coordinates to OpenGL coordinates immediately upon rendering, or should I only apply the matrix after everything else has been computed, including the camera, or somewhere in between? I would like the user to interact with geometry solely in editor space and not have to worry about any conversions whatsoever, so this leads me to prefer applying the transformation right at the end of the rendering process, just before the perspective divide and so on.
However, I also have the following aspects to consider. Since I am using Qt3D, the QGLCamera works in OpenGL coordinate space. I would like to avoid having to write myself another camera class if I can help it, given that QGLCamera already exists, but I'm not sure whether it's possible to transform the QGLCamera matrix, given that QGLPainter seems to handle it all automatically after calling setCamera(). My application also requires that the user be able to set the position and orientation of in-editor objects from the current viewport camera's position and orientation, and they should be able to remain within editor coordinate space to do this. It would therefore be much simpler if the camera operated in editor space instead of OpenGL space, as that makes user manipulation of the camera much more straightforward.
It's a similar story for lights: a useful feature would be the ability to simulate light from given light objects within the editor world, but once again Qt3D lights are handled internally by QGLPainter and are manipulated in OpenGL coordinates.
So, in a nutshell: is there a way, using QGLPainter, to treat all the QGL* classes I'm using as being in editor coordinate space? I tried transforming the modelview matrix and pushing the stack immediately after painter.begin(), but this caused the lighting in the scene to flicker rapidly between two different light directions, I assume because of QGLPainter's behind-the-scenes calculations.
If anyone has any information on the matter it would be much appreciated. Thanks.