
[Solved] Qt5 OpenGL reports wrong glVersion

  • I've recently updated my project to compile with Qt 5 (from Qt 4.8) and I've noticed that I can't create an OpenGL context with anything above version 2.1. When I print out the version string in Qt 5 I get "2.1.2 NVIDIA 295.49"; in 4.8 I get "4.2.0 NVIDIA 295.49".

    Here is the code I'm using:
    @_mainGL = new QGLWidget();

    GLenum err = glewInit();
    if (GLEW_OK != err)
    throw Core::InsufficientVideoCard("GLEW initiation failed", (const char*)glewGetErrorString(err));

    qDebug() << "OpenGL Versions Supported: " << QGLFormat::openGLVersionFlags();

    QString versionString(QLatin1String(reinterpret_cast<const char*>(glGetString(GL_VERSION))));
    qDebug() << "Driver Version String:" << versionString;
    qDebug() << "Current Context:" << _mainGL->format();@

    Output in Qt 5:
    @OpenGL Versions Supported: QFlags(0x1|0x2|0x4|0x8|0x10|0x20|0x40)
    Driver Version String: "2.1.2 NVIDIA 295.49"
    Current Context: QGLFormat(options QFlags(0x1|0x2|0x4|0x20|0x80|0x400) , plane 0 , depthBufferSize 24 , accumBufferSize -1 , stencilBufferSize 8 , redBufferSize 8 , greenBufferSize 8 , blueBufferSize 8 , alphaBufferSize 0 , samples -1 , swapInterval -1 , majorVersion 1 , minorVersion 0 , profile 0 ) @

    Output in Qt 4.8:
    @OpenGL Versions Supported: QFlags(0x1|0x2|0x4|0x8|0x10|0x20|0x40|0x1000|0x2000|0x4000|0x8000|0x10000)
    Driver Version String: "4.2.0 NVIDIA 295.49"
    Current Context: QGLFormat(options QFlags(0x1|0x2|0x4|0x10|0x20|0x80|0x400) , plane 0 , depthBufferSize 24 , accumBufferSize 16 , stencilBufferSize 8 , redBufferSize 8 , greenBufferSize 8 , blueBufferSize 8 , alphaBufferSize -1 , samples -1 , swapInterval 0 , majorVersion 1 , minorVersion 0 , profile 1 )@
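    As a side note, those QFlags masks can be decoded by hand. The sketch below is plain C++ (no Qt); the bit values are the documented QGLFormat::OpenGLVersionFlag enum values, so treat them as an assumption if your Qt version differs:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Decode a QGLFormat::openGLVersionFlags() mask into desktop GL versions.
// Bit values follow the documented QGLFormat::OpenGLVersionFlag enum;
// the OpenGL ES bits (0x80-0x800) are omitted for brevity.
std::vector<std::string> decodeVersionFlags(uint32_t flags)
{
    static const struct { uint32_t bit; const char *name; } table[] = {
        { 0x1, "1.1" },    { 0x2, "1.2" },    { 0x4, "1.3" },
        { 0x8, "1.4" },    { 0x10, "1.5" },   { 0x20, "2.0" },
        { 0x40, "2.1" },   { 0x1000, "3.0" }, { 0x2000, "3.1" },
        { 0x4000, "3.2" }, { 0x8000, "3.3" }, { 0x10000, "4.0" },
    };
    std::vector<std::string> versions;
    for (const auto &entry : table)
        if (flags & entry.bit)
            versions.push_back(entry.name);
    return versions;
}
```

    Decoded this way, the Qt 5 mask above stops at 0x40 (OpenGL 2.1), while the Qt 4.8 mask also contains 0x1000-0x10000 (OpenGL 3.0-4.0), which matches the two driver version strings.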

    Just for added information here is the configure line I used:
    @./configure -developer-build -opensource -nomake examples -nomake tests -no-gtkstyle -confirm-license@

  • Been browsing through the Qt code. The problem appears to be the conversion from QGLFormat to QSurfaceFormat: the version and profile information isn't being passed along, and the default QOpenGLContext version is 2.1.

  • This is on my list of things to take a look at very shortly.

  • This somewhat works:
    When creating the GLWidget I set the format again:
    @QGLFormat fmt = format();
    fmt.setVersion(4, 2);
    setFormat(fmt); // apply the requested version back to the widget
    makeCurrent(); // prevents QGLTemporaryContext being used
    qDebug() << QGLFormat::openGLVersionFlags();@

    And in qgl_qpa.cpp I added toSurfaceFormat:
    @+ retFormat.setMajorVersion(format.majorVersion());
    + retFormat.setMinorVersion(format.minorVersion());@

    Doesn't copy across the profile, but I get a similar result to 4.8 with this.

  • OK, I have some info on this. In Qt 4.8 the glXCreateContext call was used to create the context. This appears to return the newest compatibility profile context supported by the driver.

    In Qt 5 glXCreateContextAttribsARB is preferred. In contrast, this function returns a context of the requested version (if the requested version is >= 3.0). If the requested version is < 3.0 then the implementation is free to return any context version < 3.0 but no less than the requested version. Note that in Qt the QSurfaceFormat has a default version of 2.0, so the fact that we see a 2.1 context created is consistent with this. According to the spec for glXCreateContextAttribsARB, requesting a version of 1.0 should get the old behaviour of returning the newest supported version.
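    For context, glXCreateContextAttribsARB receives the requested version and profile as a zero-terminated attribute list. The sketch below only builds such a list (a hypothetical helper; the token values come from the GLX_ARB_create_context extension spec, and the real call also needs a live X display and a framebuffer config):

```cpp
#include <vector>

// Token values from the GLX_ARB_create_context extension specification.
const int GLX_CONTEXT_MAJOR_VERSION_ARB = 0x2091;
const int GLX_CONTEXT_MINOR_VERSION_ARB = 0x2092;
const int GLX_CONTEXT_PROFILE_MASK_ARB  = 0x9126;
const int GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB = 0x2;

// Build the attribute list for a versioned compatibility-profile request,
// e.g. buildContextAttribs(4, 2) for an OpenGL 4.2 compatibility context.
std::vector<int> buildContextAttribs(int major, int minor)
{
    return {
        GLX_CONTEXT_MAJOR_VERSION_ARB, major,
        GLX_CONTEXT_MINOR_VERSION_ARB, minor,
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
        0 // the list is None-terminated
    };
}
```

    With Qt's default QSurfaceFormat such a list would carry 2 and 0, which is why the driver is free to hand back a 2.1 context.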

    I have a work-in-progress patch (35040) that now at least correctly queries the created context version.

    Next up is to fix QGLFormat and QGLWidget so that the requested format is not lost on its way to the QPA plugin and to make sure it returns the correct list of supported versions.

  • See also 35151 and 35152, which address the issue with QGLFormat::openGLVersionFlags().

    Context creation within a QGLWidget is fixed by 35157, but be aware that you now need to ask for the version and profile that you want. By default you will get a 2.x compatibility profile as that is the default set in QSurfaceFormat. Something like this should work:

    @QGLFormat format;
    format.setVersion(4, 3);

    QGLWidget w(format); // pass the requested format to the widget
    w.makeCurrent(); // glGetString() needs a current context

    QString versionString1(QLatin1String(reinterpret_cast<const char*>(glGetString(GL_VERSION))));
    qDebug() << "Driver Version String:" << versionString1;@


    The above patches are now going through review and the CI system. Please let me know if they fix it for you. I will now check and make sure that Mac and Windows are behaving consistently and try to fix them if not.

  • This is working for me, and it is nice that the default is now taking effect. Thanks for your help.

  • Great stuff! Thanks for testing. I'll take a look at the windows side of things today.

  • You should be able to request, get and verify an OpenGL 4.x context on Windows too now. I've tested up to 4.2, which is the highest my driver supports at present.

  • I am facing this issue with the latest Qt code (gitorious). I have the following environment.
    Qt built on Windows 7 using VS2010:
    @configure -developer-build -opensource -nomake examples -nomake tests -mp -opengl desktop@

    I can see OpenGL 4.3 (Quadro K1000M/PCIe/SSE2 with 333 ext.) when I run Gpucaps. But when I run a Qt application and try to get GL_VERSION it returns 2.1.2.

    Any idea?

  • How are you creating your context? Can you paste the code please?

  • ZapB, thanks for the reply.

    I am running the sample app below.

    And inside the "Squircle::paint()" function I am trying to get the OpenGL version like below:

    @char *ver = (char *) glGetString(GL_VERSION);@

    And it returns OpenGL version 2.1.2.

    As a workaround I am doing the following, which fixes the issue somewhat. But the problem is that when I set a version higher than 3.1 I get a blank black screen.

    @QQuickView view;
    QSurfaceFormat format;
    format.setVersion(3, 1);
    view.setFormat(format);@
    But I don't want to hard-code any OpenGL version. Why is Qt not returning the correct OpenGL version? Another interesting thing I noticed while debugging: in the Qt code below, it returns the correct version, 4.3.0, during initialization; but somehow that never gets reflected.

    @QWindowsOpenGLContextFormat QWindowsOpenGLContextFormat::current()
    {
        QWindowsOpenGLContextFormat result;
        const QByteArray version = QOpenGLStaticContext::getGlString(GL_VERSION);
        // ...
    }@
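    For what it's worth, the string returned by glGetString(GL_VERSION) always starts with "major.minor", so it can be sanity-checked by hand. A minimal plain-C++ sketch (no Qt; the helper name is hypothetical):

```cpp
#include <cstdio>

// Extract the leading major.minor pair from a GL_VERSION string such as
// "2.1.2 NVIDIA 295.49". Returns false if the string does not start
// with two dot-separated integers.
bool parseGLVersion(const char *version, int *major, int *minor)
{
    return version && std::sscanf(version, "%d.%d", major, minor) == 2;
}
```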

  • Hi,

    I'm trying to use the new Qt / openGL design. Every time I get a success, I realize there is something wrong once again.

    First I would like to point out that the sample here doesn't work (Win7 VS2010):
    "OpenGL Window Example":
    It gives me a black screen and that's it. But no worries, many samples out there seem to be from 5.0, 5.1 or an unknown version in between, and none actually works on 5.2. I didn't try on 5.1, 5.0 or the 5.3 Beta.
    And I did install this version of Qt :
    "Qt5.2 VS2010 OpenGL":Qt 5.2.1 for Windows 32-bit (VS 2010, OpenGL, 517 MB)
    So I don't know if this bug applies to me :
    "Windows builds should ship a standard Open GL build":

    The trick I found was to remove QOpenGLFunctions from the inheritance of OpenGLWindow and to use a QOpenGLFunctions_4_3_Core as an attribute of the class instead.
    Then, instead of calling "initializeOpenGLFunctions();", I get the functions from the newly created context with versionFunctions() and then initialize that set of functions.

    After this step, I finally got a visual. The magical triangle that is, even ten years later, still a long and painful process to obtain when changing API.

    Anyway, after that I thought I was finally free to do whatever. But no.
    In the code I was barbarically casting the return of "m_context->versionFunctions();" to "QOpenGLFunctions_4_3_Core*". But when I tried to be more gentle and dynamic_cast it to the proper class, I got a NULL. When debugging I noticed that the actual class was in fact QOpenGLFunctions_2_1.

    So I came back to the source. I realized that my surface format was by default set for 2.1. So I changed that to 4.3 before calling show(). Then I checked and :
    My surface format is 4.3, horayyy
    My context format is 4.3 horayyyyy
    The dynamic cast works horaayyyy!!
    But the window is black.

    So what ? please what ?

  • First of all, yes, QSurfaceFormat defaults to requesting a 2.x context as this is the minimum required for Qt Quick 2. If you want an OpenGL 4.3 context you need to ask for it via QSurfaceFormat::setVersion() as you found.

    Second, there is no need to cast the result from QOpenGLContext::versionFunctions() if you use the templated version of it, e.g.

    @m_funcs = context->versionFunctions<QOpenGLFunctions_4_3_Core>();
    if (!m_funcs) {
        // something went wrong; fall back or bail out
    }@

    As to why you see nothing when using OpenGL 4.3, I suspect it is because that example is written against older-style OpenGL, as evidenced by the fact that it worked with an OpenGL 2.x profile. To test this you could ask for an OpenGL 4.3 Compatibility profile context, so that the functionality deprecated and removed in the Core profile is still available. See QSurfaceFormat::setProfile().

    Alternatively you would need to port the rest of the example to use only Core profile features. That is remove all of the fixed function pipeline calls and built in shader uniforms. Also you would need to tweak the shaders to use in/out rather than attribute/varying.

    Good luck.

  • Ohhhh god I love you.
    Compatibility profile worked. However, the reference site is down on my side, so I can't check what's supported in 3.1 core and what has been thrown away.
    On the shaders, I clearly specify version 420, so if they compile, am I safe to assume there is no problem there? I still use gl_Position, but I don't recall there being an alternative.
    On the code, the only functions used with QOpenGLFunctions are :
    glClear(), glViewport(), glDrawArrays().
    The other calls are made from the program itself :
    link(), bind(), enableAttributeArray(), setAttributeBuffer(), disableAttributeArray(), release().
    And finally, I use QOpenGLBuffers for my meshes info :
    create(), setUsagePattern(), bind(), allocate().

    That's it. I know glViewport and glDrawArrays are not deprecated. Is glClear my issue ?
    Also, the context does a swapBuffers(), is it ok ?
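    On the "#version 420" point above: the GLSL version token tracks the GL version, and since OpenGL 3.3 the two coincide. A small sketch of that mapping (a hypothetical helper; the pairings are from the GLSL specifications):

```cpp
// Map a desktop OpenGL version to the matching GLSL #version token.
// From GL 3.3 onwards the numbers coincide (GL 4.2 -> 420); earlier
// versions pair as 2.0->110, 2.1->120, 3.0->130, 3.1->140, 3.2->150.
int glslVersionFor(int major, int minor)
{
    if (major > 3 || (major == 3 && minor >= 3))
        return major * 100 + minor * 10;
    if (major == 3)
        return 130 + minor * 10;
    if (major == 2)
        return 110 + minor * 10;
    return 0; // no GLSL before OpenGL 2.0
}
```

    So a 4.2 context will happily compile "#version 420" shaders, consistent with what worked here.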

  • I checked glGetError() after each call, and an error pops up for each call to QOpenGLShaderProgram::setAttributeBuffer:
    "m_program->setAttributeBuffer("posAttr", GL_FLOAT, 0, 2);"
    "m_program->setAttributeBuffer("colAttr", GL_FLOAT, 0, 3);"
    This error doesn't show up with the compatibility profile. Safe to assume this is my culprit?
    Any idea?

  • The problems will be in the shaders themselves.

    The vertex shader has code like this for the input vertex attribute variables:

    attribute vec2 posAttr;
    attribute vec3 colAttr;

    The attribute keyword is gone in the Core profile. Instead you have to use "in":

    in vec2 posAttr;
    in vec3 colAttr;

    And in the vertex shader instead of using "varying" for the output you use "out". You still need to write to gl_Position as the input to the rasterizer though.

    Then in the fragment shader for the input variables you no longer use "varying" but rather "in".

    For the fragment shader output the old built-in variable gl_FragColor is gone. Instead you must define your own output variable for the fragment shader and write to that. Something like this...

    out vec4 fragColor;

    void main()
    {
        fragColor = vec4(...);
    }

    Hope this helps.

  • My shaders are already set to version 420:

    @#version 420

    in vec2 posAttr;
    in vec3 colAttr;

    uniform mat4 matrix;

    out vec4 vCol;

    void main() {
        vCol = vec4(colAttr, 1.0);
        gl_Position = matrix * vec4(posAttr, 0.0, 1.0);
    }@

    @#version 420

    in vec4 vCol;

    out vec4 fCol;

    void main() {
        fCol = vCol;
    }@

    Is there an issue ?

  • You tell me ;) Do they compile and link? Can you obtain the locations of the attributes?

    Are you creating and binding a vertex array object anywhere? The Core profile requires that a vertex array object be bound. This can be as simple as creating a QOpenGLVertexArrayObject instance (say, a member m_vao) and in your init function doing:

    @m_vao.create();
    m_vao.bind();@

    Of course that is the minimal way to do it. You can have multiple VAOs in a program.

  • Direct hit :D.
    My shaders were compiling, no problem on that side (otherwise I wouldn't have been able to see anything with the 2.1 profile anyway).
    The problem was the VAO. As soon as I created and bound one in my init, then bound it during rendering, "pouf", it worked.

    Thanks for the help. That was some kick ass troubleshooting you've done here :D.


    PS: Maybe update the sample in the Qt Wiki. I don't think there is any example out there that works for 5.2.

  • Great! Glad you got it working.

    I plan to add some examples to Qt for 5.4 showing some of the newer OpenGL features and the helpers in Qt.

    BTW, in case you or anybody else is interested, KDAB offers training courses in "Modern OpenGL and Qt":

    Happy hacking!

  • And if I may, the courses are worth both the time and money!

  • Hehe, thanks Samuel. Also good to get feedback :)
