Using multiple OpenGL contexts? [ReOpened]

  • Hi everyone,

    I'm using a machine with two NVIDIA Quadro cards. Is it somehow possible to get multiple Qt OpenGL contexts, one for each graphics card, running in different threads?


  • Well, what operating system are you using? In Qt, the part of OpenGL that is not platform-agnostic is context creation: you have GLX on *nix (including Linux), AGL on the Mac, WGL on Windows...
    Which graphics card gets bound to a context is determined at creation time.
    Depending on your platform, look for the proper way to do that using one of the above-mentioned APIs.

  • With Qt 5, create a QOpenGLContext and call QOpenGLContext::setScreen( QScreen* screen ) on it before you call QOpenGLContext::create(). This should ensure that the context is created using the implementation responsible for that screen.
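A minimal sketch of that order of calls, assuming Qt 5 and two connected screens (variable names are mine, and error handling is reduced to early returns):

```cpp
#include <QGuiApplication>
#include <QOpenGLContext>
#include <QScreen>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // QGuiApplication::screens() lists all connected screens; here we
    // assume the second entry is the screen driven by the second GPU.
    const QList<QScreen *> screens = QGuiApplication::screens();
    if (screens.size() < 2)
        return 1; // only one screen available

    QOpenGLContext context;
    context.setScreen(screens.at(1)); // must happen BEFORE create()
    if (!context.create())
        return 1; // context creation failed

    return 0;
}
```

This needs a running display server to actually create a context, so it is a sketch of the call order rather than something to run headlessly.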

  • I'm using Ubuntu 12.04 LTS.
    The thing is that these APIs seem to allow only one context to be active at a time, which means one graphics card has to wait for the other to be done rendering.
    The NVIDIA Quadro cards are able to do that in parallel: two OpenGL contexts running in two different threads, one on each graphics card, so the frames can be drawn synchronously. I guess I won't find a way to do that with Qt OpenGL widgets.

  • Could you file a bug report about this, please? It sounds like the current context is stored in per-thread storage and needs to be extended to handle the case of high-end cards that can do this.

    If only I had a couple of £k to spend on getting some... ;)

  • OpenGL is not thread-safe. What you are trying to do is feasible: you just have to call makeCurrent() on each created context in its own thread, so that Thread1 deals with GPU1 and Thread2 with GPU2.
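A rough sketch of that pattern in Qt 5, with one context per thread (the worker class and its names are hypothetical; the actual rendering is left as a comment):

```cpp
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QThread>

// Hypothetical worker: owns one context and makes it current on its
// own thread before issuing any OpenGL commands for "its" GPU.
class RenderWorker : public QThread
{
public:
    RenderWorker(QOpenGLContext *context, QOffscreenSurface *surface)
        : m_context(context), m_surface(surface) {}

protected:
    void run() override
    {
        // makeCurrent() binds this context to the calling thread;
        // each thread must only ever touch its own context.
        if (!m_context->makeCurrent(m_surface))
            return;
        // ... issue OpenGL commands for this GPU here ...
        m_context->doneCurrent();
    }

private:
    QOpenGLContext *m_context;
    QOffscreenSurface *m_surface;
};
```

Note that a QOpenGLContext can only be made current on the thread it is associated with, so after creating the context on the main thread you would call context->moveToThread(worker) before starting the worker.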

  • OK... Sounds good, but does Qt have any classes/functions implemented for calling makeCurrent? Or do I have to use the APIs you mentioned earlier?
    If I have to use an API such as GLX, is there a way to get the contexts/windows from Qt, or won't that be necessary anymore?

    Thanks for your answers.

  • You can use the QGLContext class and call makeCurrent() on it.
    As explained in ZapB's post, if you call setScreen on the context object before actually creating the context (by passing the object to a QGLWidget constructor), you can achieve what you want.

  • In Qt 5 the same is achieved using QOpenGLContext instead of QGLContext, which also avoids the QtWidgets dependency.

  • Hi there,

    I know this thread was already closed, but I have a problem with the solution.
    I am creating the QOpenGLContext objects with the appropriate screens
    (0 and 1 for graphics cards 1 and 2).
    I have two threads, each creating one context (I only use OpenGL commands within the rendering threads!).
    Before I render a frame with one of these threads, I call the makeCurrent() method of the appropriate QOpenGLContext instance.

    That works fine so far... but my graphics card monitor (GPU-Z) does not show any activity on my second graphics card (but almost 100% GPU load on my first card), nor can I see any speed-up while rendering.
    Is that a bug? Did anyone try this with NVIDIA? Do I have to create my context with AGL or WGL, like rcari said?

    Thanks in advance

  • ZapB and rcari are wrong; there is no such implementation on Windows. To choose a graphics card on Windows with AMD cards, you need to create the window on the screen driven by the card you want. But Qt uses a dummy window to create the context, with no position set, so I guess this won't work, because the DC is then always created from the primary graphics card. On NVIDIA you need the WGL_NV_gpu_affinity extension to choose the DC, but Qt doesn't load the functions for this extension. I hacked around it by modifying qwindowsglcontext.h and qwindowsglcontext.cpp to load these functions in the QOpenGLStaticContext class. Then I modified the constructor of QWindowsGLContext and created the DC with the NVIDIA extension, and this works. So this needs to be improved in a future version so that you can explicitly choose the GPU for a context; how about 6.0 or 7.0? :D
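For reference, the core of the WGL_NV_gpu_affinity approach looks roughly like this. This is a sketch, not the poster's actual patch, and it assumes some (dummy) context is already current so that wglGetProcAddress can resolve the extension entry points:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Handle and function-pointer types from the WGL_NV_gpu_affinity
// extension; these are normally provided by wglext.h.
DECLARE_HANDLE(HGPUNV);
typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV *phGpu);
typedef HDC  (WINAPI *PFNWGLCREATEAFFINITYDCNVPROC)(const HGPUNV *phGpuList);

// Create a device context tied to one specific NVIDIA GPU.
HDC createAffinityDC(UINT gpuIndex)
{
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return NULL; // extension not available

    HGPUNV gpu;
    if (!wglEnumGpusNV(gpuIndex, &gpu))
        return NULL; // no GPU at that index

    // The GPU list passed to wglCreateAffinityDCNV is NULL-terminated.
    HGPUNV gpuList[2] = { gpu, NULL };
    return wglCreateAffinityDCNV(gpuList);
}
```

A context created on such an affinity DC only renders on the listed GPU, which is what makes the two-threads/two-GPUs setup discussed above actually use both cards on Windows with NVIDIA hardware.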

  • Patches welcome. ;) Please submit it to gerrit.
