Rotating an object in a 3D scene
Okay, I was trying to rotate an object in 3D. Whenever we see an object rotating in 3D, it is often not the object itself that rotates; it is the camera that rotates around it, giving us the desired effect.
Now, say I have two objects in the scene: one should be static and the other should be rotating. We have a main camera for the scene and a secondary camera that rotates around the second object.
My question is: how does the main camera capture a scene where one object is static and the other is rotating, when essentially it's the secondary camera that is rotating?
Shouldn't the main camera capture a scene in which both objects are static, since it's the secondary camera that is rotating?
I don't know which API you're referring to, but the general idea is as follows.
Each object in the scene has three properties: translation, rotation, and scale. These are used to calculate its Model matrix (M).
Similarly, a camera has a position and rotation, which are used to calculate the Camera matrix, more commonly called the View matrix (V).
Lastly, the camera has a FOV (field of view) and near and far clipping planes, and your window has dimensions. Together these are used to calculate how the world is displayed on a 2D rectangular surface: the Projection matrix (P).
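A minimal sketch of how those three matrices might be built, in plain Python with row-major 4x4 lists. The function names, the Y-axis-only rotation, and the OpenGL-style projection are illustrative assumptions, not tied to any particular API:

```python
import math

def identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def mat_mul(a, b):
    # 4x4 matrix product a * b
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translation(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def rotation_y(angle):
    # rotation about the Y axis, angle in radians (one axis keeps the sketch short)
    c, s = math.cos(angle), math.sin(angle)
    m = identity()
    m[0][0], m[0][2] = c, s
    m[2][0], m[2][2] = -s, c
    return m

def scale(sx, sy, sz):
    m = identity()
    m[0][0], m[1][1], m[2][2] = sx, sy, sz
    return m

def model_matrix(t, angle_y, s):
    # M = T * R * S: scale first, then rotate, then translate
    return mat_mul(translation(*t), mat_mul(rotation_y(angle_y), scale(*s)))

def view_matrix(cam_pos, cam_yaw):
    # V is the inverse of the camera's own transform:
    # undo the rotation, then undo the translation
    return mat_mul(rotation_y(-cam_yaw),
                   translation(-cam_pos[0], -cam_pos[1], -cam_pos[2]))

def perspective(fov_y, aspect, near, far):
    # OpenGL-style perspective projection from FOV, aspect ratio,
    # and the near/far clipping planes
    f = 1.0 / math.tan(fov_y / 2.0)
    m = [[0.0] * 4 for _ in range(4)]
    m[0][0] = f / aspect
    m[1][1] = f
    m[2][2] = (far + near) / (near - far)
    m[2][3] = 2.0 * far * near / (near - far)
    m[3][2] = -1.0
    return m
```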
Given these three matrices, the final position of an object on the screen is determined by the MVP (Model-View-Projection) matrix, calculated by multiplying them together: MVP = P * V * M.
With the above, to render an object it doesn't matter whether it's static, moving, rotating, or doing a bounce: all you need is its model matrix. This matrix will be the identity matrix for a static object and will change every frame for a moving one.
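To make the static-vs-rotating distinction concrete, here is a small sketch in plain Python (names are illustrative; P and V are left as identity just to keep it short, in a real renderer they come from the camera):

```python
import math

def identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def mat_mul(a, b):
    # 4x4 matrix product a * b
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def rotation_y(angle):
    # the rotating object's model matrix is rebuilt with a new angle each frame
    c, s = math.cos(angle), math.sin(angle)
    return [[c,   0.0, s,   0.0],
            [0.0, 1.0, 0.0, 0.0],
            [-s,  0.0, c,   0.0],
            [0.0, 0.0, 0.0, 1.0]]

P = identity()  # projection (placeholder)
V = identity()  # view (placeholder)

M_static = identity()                      # static object: M stays the identity
M_rotating = rotation_y(math.radians(30))  # spinning object: M changes per frame

MVP_static = mat_mul(P, mat_mul(V, M_static))
MVP_rotating = mat_mul(P, mat_mul(V, M_rotating))
```

With identity P and V, the MVP of the static object is still the identity, while the rotating object's MVP carries its rotation; that is the entire difference between the two objects from the renderer's point of view.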
If you have two different cameras and want to render the scene twice, it just becomes:
Render 1 (with camera 1):
  Render object 1 with: P * V1 * M1
  Render object 2 with: P * V1 * M2
Render 2 (with camera 2):
  Render object 1 with: P * V2 * M1
  Render object 2 with: P * V2 * M2
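The two passes above could be sketched like this in plain Python (the `render_pass` helper and the specific matrices are hypothetical stand-ins for whatever your API actually does):

```python
def identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def mat_mul(a, b):
    # 4x4 matrix product a * b
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translation(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def render_pass(P, V, objects):
    # one pass: every object in the scene is drawn with P * V * its own M
    return {name: mat_mul(P, mat_mul(V, M)) for name, M in objects.items()}

P = identity()                     # projection, shared by both passes here
V1 = identity()                    # main camera, at the origin
V2 = translation(0.0, 0.0, -5.0)   # secondary camera, moved back 5 units

objects = {
    "static":   identity(),                  # M1 never changes
    "rotating": translation(2.0, 0.0, 0.0),  # M2, updated per frame in a real app
}

pass1 = render_pass(P, V1, objects)  # Render 1 (with camera 1)
pass2 = render_pass(P, V2, objects)  # Render 2 (with camera 2)
```

Note that only V changes between the two passes; each object keeps the same model matrix, which is why the main camera still sees the static object as static while the rotating one moves.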
Thanks, Chris. I had another question regarding textures: while integrating OpenCV with Qt, can we use a grayscale image as a texture binding for a QGLBuilder object?