Qt's OpenGL vertex binding funcs problem
-
Could you try to run the first example using `qmake`? Why could `QOpenGLFunctions_3_3_Core` not be found in my case?
-
I have tried using `qmake` and it works.
My qmake file:

```qmake
greaterThan(QT_MAJOR_VERSION, 4): QT += core gui opengl openglwidgets widgets

HEADERS += \
    glwidget.h

SOURCES += \
    glwidget.cpp \
    main.cpp
```
BTW, I also paste the first part of `QOpenGLFunctions_3_3_Core`:

```cpp
...
#ifndef QOPENGLVERSIONFUNCTIONS_3_3_CORE_H
#define QOPENGLVERSIONFUNCTIONS_3_3_CORE_H

#include <QtOpenGL/qtopenglglobal.h>

#if !defined(QT_NO_OPENGL) && !QT_CONFIG(opengles2)

#include <QtOpenGL/QOpenGLVersionProfile>
#include <QtOpenGL/QOpenGLVersionFunctions>
#include <QtGui/qopenglcontext.h>

QT_BEGIN_NAMESPACE

class Q_OPENGL_EXPORT QOpenGLFunctions_3_3_Core : public QAbstractOpenGLFunctions
{
public:
    QOpenGLFunctions_3_3_Core();
...
```
@8Observer8 What OS do you have? Windows 10? (If yes, I'll also check it on this OS later and try to run it.)
-
@Bondrusiek said in Qt's OpenGL vertex binding funcs problem:
What OS do you have? Windows 10?
Yes. That is why I write `win32: LIBS += -lopengl32` inside the .pro file. Some programs will not compile on Windows without this line. The `win32:` scope means that the line applies to Windows only; the `LIBS += -lopengl32` command will be ignored on Linux, macOS, Android, and WebAssembly. It is very interesting why there is no `QOpenGLFunctions_3_3_Core` in this list:
-
@8Observer8 Hi!
I've checked this project on my Windows 10 and it works. I used the Desktop Qt 6.7 MinGW 64-bit compiler.
This is the `qmake` file:

```qmake
QT += core gui openglwidgets opengl

greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

CONFIG += c++17

SOURCES += \
    main.cpp \
    widget.cpp

HEADERS += \
    widget.h
```
It is interesting that you have a problem with OpenGL 3.3 on Windows 10. I am not an OpenGL expert, but I found a good program that scans the GPU and shows the OpenGL status (it may also help you find another solution). It is OpenGL Extensions Viewer, and here is my result from Windows 10:
-
My laptop has two video cards: an integrated one on the CPU and a discrete GeForce. Qt uses the first one by default:
```cpp
qDebug() << "OpenGL version:" << (const char*) glGetString(GL_VERSION);
qDebug() << "GLSL version: " << (const char*) glGetString(GL_SHADING_LANGUAGE_VERSION);
qDebug() << "Vendor: " << (const char*) glGetString(GL_VENDOR);
```
Output:
```
OpenGL version: 3.1.0 - Build 9.17.10.4459
GLSL version: 1.40 - Intel Build 9.17.10.4459
Vendor: Intel
```
I am not an expert in Qt, but it is very interesting that the `QOpenGLFunctions_3_3_Core.h` header file is not available for me. I see it here:

But I don't see it here:

Even `QOpenGLFunctions_3_1.h` is not in the list.
-
Hmm, you can try to run it and see the result. Maybe the compilation sees an OpenGL version and blocks some header files:

```cpp
#include <QApplication>
#include <QtGui/QOpenGLContext>
#include <QtGui/QSurfaceFormat>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QSurfaceFormat fmt;
    fmt.setDepthBufferSize(24);

    // Request OpenGL 3.3 core or OpenGL ES 3.0.
    if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
        qDebug("Requesting 3.3 core context");
        fmt.setVersion(3, 3);
        fmt.setProfile(QSurfaceFormat::CoreProfile);
    } else {
        qDebug("Requesting 3.0 context");
        fmt.setVersion(3, 0);
    }
    QSurfaceFormat::setDefaultFormat(fmt);

    return a.exec();
}
```
-
Output:
```
Requesting 3.3 core context
OpenGL version: 3.1.0 - Build 9.17.10.4459
GLSL version: 1.40 - Intel Build 9.17.10.4459
Vendor: Intel
```
main.cpp

```cpp
#include <QtGui/QOpenGLContext>
#include <QtGui/QOpenGLFunctions>
#include <QtGui/QSurfaceFormat>
#include <QtWidgets/QApplication>
#include <QtOpenGL/QOpenGLWindow>

class OpenGLWindow : public QOpenGLWindow, private QOpenGLFunctions
{
    void initializeGL() override
    {
        initializeOpenGLFunctions();
        qDebug() << "OpenGL version:" << (const char*) glGetString(GL_VERSION);
        qDebug() << "GLSL version: " << (const char*) glGetString(GL_SHADING_LANGUAGE_VERSION);
        qDebug() << "Vendor: " << (const char*) glGetString(GL_VENDOR);
    }
};

int main(int argc, char *argv[])
{
    QApplication::setAttribute(Qt::ApplicationAttribute::AA_UseDesktopOpenGL);
    QApplication app(argc, argv);

    QSurfaceFormat fmt;
    fmt.setDepthBufferSize(24);

    // Request OpenGL 3.3 core or OpenGL ES 3.0.
    if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
        qDebug("Requesting 3.3 core context");
        fmt.setVersion(3, 3);
        fmt.setProfile(QSurfaceFormat::CoreProfile);
    } else {
        qDebug("Requesting 3.0 context");
        fmt.setVersion(3, 0);
    }

    OpenGLWindow w;
    w.setFormat(fmt);
    w.show();

    return app.exec();
}
```
-
I can activate the discrete GeForce (or Radeon) card by adding this code:
```cpp
#ifdef _WIN32
#include <windows.h>
extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
extern "C" __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
#endif
```
Output:
```
Requesting 3.3 core context
OpenGL version: 3.3.0 NVIDIA 391.35
GLSL version: 3.30 NVIDIA via Cg compiler
Vendor: NVIDIA Corporation
```
main.cpp

```cpp
#ifdef _WIN32
#include <windows.h>
extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
extern "C" __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
#endif

#include <QtGui/QOpenGLContext>
#include <QtGui/QOpenGLFunctions>
#include <QtGui/QSurfaceFormat>
#include <QtWidgets/QApplication>
#include <QtOpenGL/QOpenGLWindow>

class OpenGLWindow : public QOpenGLWindow, private QOpenGLFunctions
{
    void initializeGL() override
    {
        initializeOpenGLFunctions();
        qDebug() << "OpenGL version:" << (const char*) glGetString(GL_VERSION);
        qDebug() << "GLSL version: " << (const char*) glGetString(GL_SHADING_LANGUAGE_VERSION);
        qDebug() << "Vendor: " << (const char*) glGetString(GL_VENDOR);
    }
};

int main(int argc, char *argv[])
{
    QApplication::setAttribute(Qt::ApplicationAttribute::AA_UseDesktopOpenGL);
    QApplication app(argc, argv);

    QSurfaceFormat fmt;
    fmt.setDepthBufferSize(24);

    // Request OpenGL 3.3 core or OpenGL ES 3.0.
    if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
        qDebug("Requesting 3.3 core context");
        fmt.setVersion(3, 3);
        fmt.setProfile(QSurfaceFormat::CoreProfile);
    } else {
        qDebug("Requesting 3.0 context");
        fmt.setVersion(3, 0);
    }

    OpenGLWindow w;
    w.setFormat(fmt);
    w.show();

    return app.exec();
}
```
-
So you have a card that supports OpenGL 3.3. It's strange that Qt doesn't see these header files. If I may suggest anything, you could update Qt via the `MaintenanceTool` and update the graphics card drivers.
-
@Bondrusiek said in Qt's OpenGL vertex binding funcs problem:
update the graphics card drivers
I have the latest drivers: OpenGL 4.6 for the GeForce and OpenGL 3.1.0 for the Intel. I cannot update OpenGL for the Intel card because 3.1 is the maximum for my CPU.
@Bondrusiek said in Qt's OpenGL vertex binding funcs problem:
It's strange that Qt doesn't see these header files.
I see them now:
-
I don't understand why, but there is no `QOpenGLFunctions_3_3_Core: No such file or directory` error in your first example. It crashed: `debug\opengl33-test.exe crashed`. But it crashed because I have a laptop and Qt runs with the integrated video card with OpenGL 3.1. I have added these lines to `main.cpp`:

```cpp
#ifdef _WIN32
#include <windows.h>
extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
extern "C" __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
#endif
```

and your first example works now. I see the red triangle. But it prints the maximum OpenGL version, 4.6, of the discrete video card because `setVersion(3, 3)` was not called:

```
OpenGL version: 4.6.0 NVIDIA 391.35
GLSL version: 4.60 NVIDIA
Vendor: NVIDIA Corporation
```
-
Sorry, I forgot to add `setVersion(3, 3)` to the `main.cpp` file. Now your first example prints the correct version of OpenGL:

```
Requesting 3.3 core context
OpenGL version: 3.3.0 NVIDIA 391.35
GLSL version: 3.30 NVIDIA via Cg compiler
Vendor: NVIDIA Corporation
```

But I don't see a red triangle. I can fix it only by commenting out the `QSurfaceFormat` code that calls `setVersion` and `setProfile`, but then it uses OpenGL 4.6:

main.cpp

```cpp
#ifdef _WIN32
#include <windows.h>
extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
extern "C" __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
#endif

#include "glwidget.h"
#include <QtGui/QSurfaceFormat>
#include <QtWidgets/QApplication>

int main(int argc, char *argv[])
{
    QApplication::setAttribute(Qt::ApplicationAttribute::AA_UseDesktopOpenGL);
    QApplication a(argc, argv);

    QSurfaceFormat fmt;
    fmt.setDepthBufferSize(24);

    // // Request OpenGL 3.3 core or OpenGL ES 3.0.
    // if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
    //     qDebug("Requesting 3.3 core context");
    //     fmt.setVersion(3, 3);
    //     fmt.setProfile(QSurfaceFormat::CoreProfile);
    // } else {
    //     qDebug("Requesting 3.0 context");
    //     fmt.setVersion(3, 0);
    // }
    QSurfaceFormat::setDefaultFormat(fmt);

    MyGLWidget w;
    w.show();

    return a.exec();
}
```
-
Thanks for checking. As far as I know, in the case of shaders you can also indicate which version of GLSL you want to use. `#version 330 core` means that you want the GLSL that corresponds to OpenGL 3.3, and `#version 460 core` corresponds to OpenGL 4.6. Perhaps the most important thing is that the second example does not render the triangle correctly for you either, just like in my case, so I guess we can talk about a bug here.
-
I believe there is no need to spend time on OpenGL 3.3 Core, especially if you have just started learning the basics of OpenGL. You can defer learning OpenGL 3.3 Core until the next iteration. Take OpenGL ES 2.0. It's much smaller. It has all the basic things you need now. It works on iOS, Android, and in web browsers. In the near future, focus on OpenGL ES 2.0 and make demos and small games with it. It will serve you for several years, and maybe a lifetime.
My simple triangle example is written using OpenGL ES 2.0. It is compiled to Desktop, Mobile, and Web from one code base.
- Demo in the browser using QOpenGLWindow
- Demo in the browser using QOpenGLWidget
main.cpp
```cpp
#include <QtGui/QOpenGLFunctions>
#include <QtOpenGL/QOpenGLBuffer>
#include <QtOpenGL/QOpenGLShader>
#include <QtOpenGL/QOpenGLShaderProgram>
#include <QtOpenGLWidgets/QOpenGLWidget>
#include <QtWidgets/QApplication>

class OpenGLWindow : public QOpenGLWidget, private QOpenGLFunctions
{
public:
    OpenGLWindow()
    {
        setWindowTitle("OpenGL ES 2.0, Qt6, C++");
        resize(350, 350);
    }

    void initializeGL() override
    {
        initializeOpenGLFunctions();
        glClearColor(48.f / 255.f, 56.f / 255.f, 65.f / 255.f, 1.f);

        QString vertShaderSrc =
            "attribute vec2 aPosition;\n"
            "void main()\n"
            "{\n"
            "    gl_Position = vec4(aPosition, 0.0, 1.0);\n"
            "}\n";

        QString fragShaderSrc =
            "#ifdef GL_ES\n"
            "precision mediump float;\n"
            "#endif\n"
            "void main()\n"
            "{\n"
            "    gl_FragColor = vec4(0.2, 0.7, 0.3, 1.0);\n"
            "}\n";

        m_program.create();
        m_program.addShaderFromSourceCode(QOpenGLShader::ShaderTypeBit::Vertex, vertShaderSrc);
        m_program.addShaderFromSourceCode(QOpenGLShader::ShaderTypeBit::Fragment, fragShaderSrc);
        m_program.link();
        m_program.bind();

        float vertPositions[] = {
            -0.5f, -0.5f,
            0.5f, -0.5f,
            0.f, 0.5f
        };
        m_vertPosBuffer.create();
        m_vertPosBuffer.bind();
        m_vertPosBuffer.allocate(vertPositions, sizeof(vertPositions));
    }

    void paintGL() override
    {
        glClear(GL_COLOR_BUFFER_BIT);
        m_program.bind();
        m_vertPosBuffer.bind();
        m_program.setAttributeBuffer("aPosition", GL_FLOAT, 0, 2);
        m_program.enableAttributeArray("aPosition");
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }

private:
    QOpenGLShaderProgram m_program;
    QOpenGLBuffer m_vertPosBuffer;
};

int main(int argc, char *argv[])
{
    QApplication::setAttribute(Qt::ApplicationAttribute::AA_UseDesktopOpenGL);
    QApplication app(argc, argv);
    OpenGLWindow w;
    w.show();
    return app.exec();
}
```
```qmake
QT += core gui openglwidgets

win32: LIBS += -lopengl32

CONFIG += c++17

SOURCES += \
    main.cpp
```
-
I have created the bug report: https://bugreports.qt.io/browse/QTBUG-126389
I have tested with `QOpenGLWindow`. The result is the same.
-
My bug report was closed with the comment:
You need a VAO to use a Core profile.
I have added solution examples to the bug report:
Steps to solve the problem:

- Add `#include <QtOpenGL/QOpenGLVertexArrayObject>` inside `glwidget.h`
- Create the `QOpenGLVertexArrayObject vao;` member inside the `MyGLWidget` class
- Create and bind `vao` inside the `initializeGL` method:

```cpp
void MyGLWidget::initializeGL()
{
    initializeOpenGLFunctions();
    vao.create();
    vao.bind();
    // ...
}
```

- Delete the binding of `vbo` inside `paintGL()`
- Add a binding of `vao` inside `paintGL()`
- Move `attributeLocation`, `enableAttributeArray`, and `setAttributeBuffer` into the `initializeGL` method
- I don't think you should release the shader program inside the `paintGL()` method
- Release `vao` inside the `paintGL()` method
- I think you don't need to call `disableAttributeArray` inside the `paintGL()` method
So, a VAO allows you to bind the VBO and call `enableAttributeArray` and `setAttributeBuffer` only one time. After that, you only need to bind the VAO inside the `paintGL()` method. Now the program works:

pro

```qmake
QT += core gui openglwidgets opengl

win32: LIBS += -lopengl32

CONFIG += c++17

SOURCES += \
    main.cpp \
    glwidget.cpp

HEADERS += \
    glwidget.h
```
glwidget.h

```cpp
#ifndef GLWIDGET_H
#define GLWIDGET_H

#include <QOpenGLWidget>
#include <QOpenGLFunctions_3_3_Core>
#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>
#include <QtOpenGL/QOpenGLVertexArrayObject>

class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions_3_3_Core
{
    Q_OBJECT

public:
    MyGLWidget(QWidget *parent = nullptr);
    ~MyGLWidget();

protected:
    void initializeGL() override;
    void resizeGL(int w, int h) override;
    void paintGL() override;

private:
    QOpenGLBuffer vbo;
    QOpenGLVertexArrayObject vao;
    QOpenGLShaderProgram *shaderProgram;
};

#endif // GLWIDGET_H
```
glwidget.cpp

```cpp
#include "glwidget.h"

MyGLWidget::MyGLWidget(QWidget *parent)
    : QOpenGLWidget(parent)
    , vbo(QOpenGLBuffer::VertexBuffer)
{
    setWindowTitle("Triangle, OpenGL 3.3 Core");
    resize(500, 500);
}

MyGLWidget::~MyGLWidget()
{
    makeCurrent();
    vao.destroy();
    vbo.destroy();
    delete shaderProgram;
    doneCurrent();
}

void MyGLWidget::initializeGL()
{
    initializeOpenGLFunctions();

    qDebug() << "OpenGL version:" << (const char*) glGetString(GL_VERSION);
    qDebug() << "GLSL version: " << (const char*) glGetString(GL_SHADING_LANGUAGE_VERSION);
    qDebug() << "Vendor: " << (const char*) glGetString(GL_VENDOR);

    shaderProgram = new QOpenGLShaderProgram();
    shaderProgram->addShaderFromSourceCode(QOpenGLShader::Vertex,
        "#version 330 core\n"
        "layout(location = 0) in vec3 position;\n"
        "void main()\n"
        "{\n"
        "    gl_Position = vec4(position, 1.0);\n"
        "}\n"
    );
    shaderProgram->addShaderFromSourceCode(QOpenGLShader::Fragment,
        "#version 330 core\n"
        "out vec4 fragColor;\n"
        "void main()\n"
        "{\n"
        "    fragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "}\n"
    );
    shaderProgram->link();
    shaderProgram->bind();

    vao.create();
    vao.bind();

    vbo.create();
    vbo.bind();
    GLfloat vertices[] = {
        0.0f, 0.5f, 0.0f,   // Top vertex
        -0.5f, -0.5f, 0.0f, // Bottom left vertex
        0.5f, -0.5f, 0.0f   // Bottom right vertex
    };
    vbo.allocate(vertices, sizeof(vertices));

    int posLocation = shaderProgram->attributeLocation("position");
    shaderProgram->enableAttributeArray(posLocation);
    shaderProgram->setAttributeBuffer(posLocation, GL_FLOAT, 0, 3);

    vao.release();
}

void MyGLWidget::resizeGL(int w, int h)
{
    glViewport(0, 0, w, h);
}

void MyGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    shaderProgram->bind();
    vao.bind();
    glDrawArrays(GL_TRIANGLES, 0, 3);
    vao.release();
}
```
main.cpp

```cpp
#ifdef _WIN32
#include <windows.h>
extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
extern "C" __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
#endif

#include "glwidget.h"
#include <QtGui/QSurfaceFormat>
#include <QtWidgets/QApplication>

int main(int argc, char *argv[])
{
    QApplication::setAttribute(Qt::ApplicationAttribute::AA_UseDesktopOpenGL);
    QApplication a(argc, argv);

    QSurfaceFormat fmt;
    fmt.setDepthBufferSize(24);

    // Request OpenGL 3.3 core or OpenGL ES 3.0.
    if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
        qDebug("Requesting 3.3 core context");
        fmt.setVersion(3, 3);
        fmt.setProfile(QSurfaceFormat::CoreProfile);
    } else {
        qDebug("Requesting 3.0 context");
        fmt.setVersion(3, 0);
    }
    QSurfaceFormat::setDefaultFormat(fmt);

    MyGLWidget w;
    w.show();

    return a.exec();
}
```
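For background on why the bug report was closed with "You need a VAO to use a Core profile": under a core profile, vertex attribute state only exists inside a VAO, so `glDrawArrays` without a bound VAO is invalid. A pseudocode sketch of the raw GL calls that Qt's `QOpenGLVertexArrayObject` wraps (my own illustration, requires a current 3.3 core context, not runnable standalone):

```cpp
// Pseudocode sketch of what QOpenGLVertexArrayObject does under the hood.
GLuint vao;
GLuint vbo;  // assume a buffer created earlier with glGenBuffers

glGenVertexArrays(1, &vao);   // vao.create()
glBindVertexArray(vao);       // vao.bind()

// While the VAO is bound, vertex attribute state is recorded into it:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);                                // enableAttributeArray
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr); // setAttributeBuffer

glBindVertexArray(0);         // vao.release()

// At draw time, rebinding the VAO restores all of that state at once:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
```

This is why the solution above only needs `vao.bind()` in `paintGL()`: the attribute setup done once in `initializeGL()` is replayed by the bind.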
-
I think this is the solution. You can mark it as the solution.
-