What exactly is a QOffscreenSurface?
-
wrote on 29 Jul 2015, 21:18
I'm having a bit of trouble understanding what the purpose of a QOffscreenSurface is. Initially I thought it was some kind of OpenGL frame buffer, but this is catered for by QOpenGLFramebuffer, and it seems that there isn't actually a way of manipulating the QOffscreenSurface's pixels directly. Could someone explain exactly how one would go about using an offscreen surface?
I'm wondering whether it's possible for the new Qt3D framework to render to different widgets (as opposed to a full QWindow) by using an offscreen surface, since the render aspect expects a QSurface pointer, but I wouldn't really know where to start.
-
QOffscreenSurface is not directly tied to any OpenGL concept. It's a lower-level thing.
It's an abstraction of a platform-specific rendering surface, so something on top of WGL, GLX or the like. A surface is required when making a context current on a given thread. The implementation varies by platform, but it will most probably be a pbuffer, or a hidden window if pbuffers are not available. See WGL_ARB_pbuffer for the Windows-specific concept.
The purpose is to provide a means of making an OpenGL context current on a thread without the need to create a window for it. This would usually be something like a worker thread for loading resources or running some non-graphical OpenGL computation. Without QOffscreenSurface you would have to have a window for such a thread, which in Qt specifically is problematic, since, for example, widgets can only be used in the main thread.
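A minimal sketch of that pattern (not from the original post, error handling omitted): the surface and the context are created on the GUI thread, then the context is moved to a worker thread and made current there against the offscreen surface.
```cpp
#include <QGuiApplication>
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QThread>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // QOffscreenSurface should be created (and destroyed) on the GUI thread,
    // but it can be used from any thread.
    QOffscreenSurface surface;
    surface.create();

    QOpenGLContext context;
    context.create();

    // A QOpenGLContext is a QObject, so it can be moved to the worker thread
    // that will make it current.
    QThread worker;
    context.moveToThread(&worker);

    QObject::connect(&worker, &QThread::started, [&]() {
        // Runs in the worker thread.
        context.makeCurrent(&surface);
        // Non-window-bound GL work goes here: uploading textures, compiling
        // shaders, GPGPU-style computation, ...
        qDebug() << reinterpret_cast<const char *>(
                        context.functions()->glGetString(GL_VENDOR));
        context.doneCurrent();
        context.moveToThread(app.thread()); // hand the context back
        worker.quit();
    });

    worker.start();
    worker.wait();
    return 0;
}
```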
-
wrote on 29 Jul 2015, 22:20
@Chris-Kawa So if I'm understanding correctly:
- A window by default provides its own surface, which is a "space" (conceptually) where OpenGL drawing can be done.
- A window (and widgets) can only live in the GUI thread, therefore any drawing done on the window's surface must also be done in the GUI thread.
- To do OpenGL drawing in a different thread, a QOffscreenSurface provides the platform-specific "space" (i.e. surface) without creating a window, thereby avoiding the GUI-thread limitation of widgets.
In order to use Qt3D, complete with its threaded rendering aspects et al., in conjunction with multiple viewports in the GUI, would the following approach be valid?
- Create a window that uses multiple QOpenGLWidgets.
- Create a QOffscreenSurface that is passed to the Qt3D renderer.
- Create one QOpenGLFramebuffer object per QOpenGLWidget, where the frame buffers belong to the offscreen surface.
- Have the Qt3D renderer render a relevant viewpoint to each frame buffer.
- Have each OpenGL widget draw the contents of its frame buffer as a single quad (a rough sketch of what that last step could look like follows below).
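Just to illustrate that last step (illustrative only, names made up, and nothing here is from the replies in this thread): a QOpenGLWidget can draw a texture produced elsewhere as a full-size quad, for example with the QOpenGLTextureBlitter helper available in newer Qt versions (an ordinary textured-quad shader works just as well).
```cpp
#include <QOpenGLWidget>
#include <QOpenGLTextureBlitter>

// Illustrative sketch: a widget that draws a texture produced elsewhere as a
// full-size quad. "ViewportWidget" and "m_texture" are made-up names.
class ViewportWidget : public QOpenGLWidget
{
public:
    void setTexture(GLuint texture) { m_texture = texture; update(); }

protected:
    void initializeGL() override
    {
        m_blitter.create(); // small Qt helper that draws textured quads
    }

    void paintGL() override
    {
        if (!m_texture)
            return;
        m_blitter.bind();
        const QRect viewport(QPoint(0, 0), size());
        m_blitter.blit(m_texture,
                       QOpenGLTextureBlitter::targetTransform(viewport, viewport),
                       QOpenGLTextureBlitter::OriginBottomLeft);
        m_blitter.release();
    }

private:
    QOpenGLTextureBlitter m_blitter;
    GLuint m_texture = 0;
};
```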
-
"So if I'm understanding correctly"
To my understanding, yes.
"would the following approach be valid?"
I'm afraid it's not that simple. Unfortunately, OpenGL is, for better or worse, single-threaded. An OpenGL context can be made current in only one thread at a time. If you want to render in multiple threads you have two choices:
- Have one context, synchronize the threads, and switch the context to be current in only one of them at any given time. In practice this is useless: you are rendering in only one thread while the others wait for their turn, so the whole effort of threading is wasted because the rendering is serialized anyway.
- Create multiple OpenGL contexts and make them share resources. This way you can have multiple threads, each with its own context, rendering at the same time. Each background thread would use a QOffscreenSurface and the main thread would use a window (a rough sketch of this setup follows below).
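A rough sketch of option 2 (a hypothetical helper, not from this post): the worker's context shares resources with the main context, and the QOffscreenSurface is created up front on the GUI thread.
```cpp
#include <QOffscreenSurface>
#include <QOpenGLContext>

// Hypothetical helper: build a context for a worker thread that shares
// resources (textures, buffer objects) with an existing context.
QOpenGLContext *createWorkerContext(QOpenGLContext *mainContext,
                                    QOffscreenSurface *&surfaceOut)
{
    QOpenGLContext *workerContext = new QOpenGLContext;
    workerContext->setFormat(mainContext->format());
    workerContext->setShareContext(mainContext); // must be set before create()
    workerContext->create();

    // The surface has to be created on the GUI thread, but it can be used
    // from the worker thread when making the context current there.
    surfaceOut = new QOffscreenSurface;
    surfaceOut->setFormat(workerContext->format());
    surfaceOut->create();

    // Afterwards: workerContext->moveToThread(workerThread) and, inside the
    // worker thread, workerContext->makeCurrent(surfaceOut).
    return workerContext;
}
```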
An FBO does not belong to a surface. It belongs to an OpenGL context. You can share resources like textures or buffers, but FBOs can't be shared between contexts.
Having said the above, one approach would be to create worker threads, each with its own OpenGL context made current on a QOffscreenSurface, rendering to a texture through an FBO. These textures would then be used for rendering in the main thread. Some synchronization will be needed to ensure the texture updates are visible to the other threads before the textures are used for rendering.
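A sketch of the worker-thread side of that approach (illustrative names only; glFinish is used as the simplest form of synchronization, a GL fence sync would be finer-grained):
```cpp
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLFramebufferObject>
#include <QSize>

// Illustrative sketch: render one frame into an FBO owned by the worker's
// context and return the texture id for the main thread to draw with.
GLuint renderOffscreenFrame(QOpenGLContext *workerContext,
                            QOffscreenSurface *surface,
                            const QSize &size)
{
    workerContext->makeCurrent(surface);

    GLuint texture = 0;
    {
        // The FBO belongs to this context and cannot be shared; its texture
        // attachment, however, lives in the shared resource pool.
        QOpenGLFramebufferObject fbo(size,
                                     QOpenGLFramebufferObject::CombinedDepthStencil);
        fbo.bind();
        // ... issue the actual GL draw calls here ...
        fbo.release();

        // Crude synchronization so other threads see completed results;
        // a fence sync would avoid the full pipeline stall.
        workerContext->functions()->glFinish();

        // takeTexture() detaches the texture so it outlives the FBO.
        texture = fbo.takeTexture();
    } // fbo destroyed while the context is still current

    workerContext->doneCurrent();
    return texture;
}
```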
Note, however, that, as I said earlier, the concept of threads has been, and to some degree still is, alien to OpenGL. The above scenario will most probably be penalized, as most OpenGL drivers serialize the calls anyway and jump through enormous hoops to give you the impression of multi-threading, with varying results. In many cases single-threaded rendering turns out to be faster, mostly because no synchronization is needed. That is, however, something you will have to profile heavily for your case.