Solved: Render QImage with QOpenGLWidget (needs improvement) for video/camera playback
-
Hi guys, I have some code that works fine with QImage and QOpenGLWidget, but I am new to OpenGL and not sure it is the most efficient way. Could someone correct the function setBackground()? Thank you.
h file:

```cpp
#ifndef MYGLWIDGET_H
#define MYGLWIDGET_H

#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QOpenGLShaderProgram>
#include <QOpenGLTexture>
#include <QImage>

class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
    Q_OBJECT
public:
    explicit MyGLWidget(QWidget *parent = 0);
    ~MyGLWidget();

protected:
    void initializeGL() Q_DECL_OVERRIDE;
    void resizeGL(int w, int h) Q_DECL_OVERRIDE;
    void paintGL() Q_DECL_OVERRIDE;

public slots:
    void setBackground(QImage image);

private:
    void initTextures();
    void initShaders();

    QVector<QVector3D> vertices;
    QVector<QVector2D> texCoords;
    QOpenGLShaderProgram program;
    QOpenGLTexture *texture;
    QImage m_image;
    QMatrix4x4 projection;
};

#endif // MYGLWIDGET_H
```
cpp file:
```cpp
// Display an image scaled to the widget size, based on QOpenGLWidget
// Note from testing: the first frame displayed, but later updates did not appear
// https://blog.csdn.net/yl409519139/article/details/85338581
#include "myglwidget.h"
#include <QDebug>

#define TRACE qDebug() << "[" << __FILE__ << ":" << __LINE__ << ":" << __func__ << "]"

MyGLWidget::MyGLWidget(QWidget *parent)
    : QOpenGLWidget(parent), texture(nullptr)
{
}

MyGLWidget::~MyGLWidget()
{
    makeCurrent(); // a current GL context is needed to free GL resources
    delete texture;
    doneCurrent();
}

// Texture initialization
void MyGLWidget::initTextures()
{
    if (m_image.isNull()) {
        qDebug("Cannot open the image...");
        // No image available: create a 128x128, 32-bit placeholder
        QImage dummy(128, 128, QImage::Format_RGB32);
        dummy.fill(Qt::green);
        m_image = dummy;
    }
    texture = new QOpenGLTexture(m_image);
    // Filtering in the s/t directions: nearest filtering when the texture
    // is minified, (bi)linear filtering when it is magnified
    texture->setMinificationFilter(QOpenGLTexture::Nearest);
    texture->setMagnificationFilter(QOpenGLTexture::Linear);
    // Repeat wrap mode: texture coordinate (1.1, 1.2) samples the same
    // texel as (0.1, 0.2)
    texture->setWrapMode(QOpenGLTexture::Repeat);
}

// Shaders
void MyGLWidget::initShaders()
{
    // Texture coordinates
    texCoords.append(QVector2D(0, 1)); // top left
    texCoords.append(QVector2D(1, 1)); // top right
    texCoords.append(QVector2D(0, 0)); // bottom left
    texCoords.append(QVector2D(1, 0)); // bottom right
    // Vertex coordinates
    vertices.append(QVector3D(-1, -1, 1)); // bottom left
    vertices.append(QVector3D(1, -1, 1));  // bottom right
    vertices.append(QVector3D(-1, 1, 1));  // top left
    vertices.append(QVector3D(1, 1, 1));   // top right

    // Vertex shader
    QOpenGLShader *vshader = new QOpenGLShader(QOpenGLShader::Vertex, this);
    const char *vsrc =
        "attribute vec4 vertex;\n"
        "attribute vec2 texCoord;\n"
        "varying vec2 texc;\n"
        "void main(void)\n"
        "{\n"
        "    gl_Position = vertex;\n"
        "    texc = texCoord;\n"
        "}\n";
    vshader->compileSourceCode(vsrc);

    // Fragment shader; an RGB/YUV conversion could be done here
    // See https://blog.csdn.net/wanghualin033/article/details/79683836
    QOpenGLShader *fshader = new QOpenGLShader(QOpenGLShader::Fragment, this);
    const char *fsrc =
        "uniform sampler2D texture;\n"
        "varying vec2 texc;\n"
        "void main(void)\n"
        "{\n"
        "    gl_FragColor = texture2D(texture, texc);\n"
        "}\n";
    fshader->compileSourceCode(fsrc);

    program.addShader(vshader);
    program.addShader(fshader);
    // These attribute names must match the vec4/vec2 attributes in vsrc
    program.bindAttributeLocation("vertex", 0);
    program.bindAttributeLocation("texCoord", 1);
    // Link and bind the shader pipeline
    if (!program.link()) {
        TRACE << "!program.link()";
    }
    if (!program.bind()) {
        TRACE << "!program.bind()";
    }
    // setAttributeArray only takes effect after enableAttributeArray;
    // otherwise setAttributeValue() would be used instead
    program.enableAttributeArray(0);
    program.enableAttributeArray(1);
    program.setAttributeArray(0, vertices.constData());
    program.setAttributeArray(1, texCoords.constData());
    // This uniform name must match the sampler declared in fsrc
    program.setUniformValue("texture", 0);
}

void MyGLWidget::initializeGL()
{
    qDebug("initializeGL");
    initializeOpenGLFunctions();
    glClearColor(0, 0.5, 0, 0.0); // background: half-intensity green
    glEnable(GL_CULL_FACE);
    glEnable(GL_TEXTURE_2D); // enable texturing
    initTextures();
    initShaders();
}

void MyGLWidget::resizeGL(int w, int h)
{
    TRACE << "resizeGL";
    // Window aspect ratio
    qreal aspect = qreal(w) / qreal(h ? h : 1);
    // Near plane 3.0, far plane 7.0, field of view 45 degrees
    const qreal zNear = 3.0, zFar = 7.0, fov = 45.0;
    // Reset the projection and apply a perspective transform
    projection.setToIdentity();
    projection.perspective(fov, static_cast<float>(aspect), zNear, zFar);
}

void MyGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth buffers
    texture->bind();
    // setAttributeArray only takes effect after enableAttributeArray
    program.enableAttributeArray(0);
    program.enableAttributeArray(1);
    program.setAttributeArray(0, vertices.constData());
    program.setAttributeArray(1, texCoords.constData());
    // Must match the sampler name in fsrc
    program.setUniformValue("texture", 0);
    // Draw from the vertex arrays
    // (GL_TRIANGLE_FAN with this vertex order only fills half the quad)
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    texture->release();
    TRACE << "paint";
}

void MyGLWidget::setBackground(QImage image)
{
    m_image = image;
    /* destroying and recreating the texture on every frame seems wasteful */
    if (texture != nullptr) {
        makeCurrent(); // texture calls need a current GL context
        texture->destroy();
        texture->create();
        texture->setSize(image.width(), image.height(), 1);
        texture->setData(image);
        doneCurrent();
    }
    update();
}
```
-
Hi,
Something is strange with your query. You seem to have a class in which you can update some scene background, while your thread title implies that you are going to use it for video or camera rendering.
What is your exact use case ?
-
Indeed, I have a USB camera and need to grab its buffers and display them in a widget. The frame rate can go up to 200 fps.
If I use paintEvent() -> painter.drawImage(), CPU usage goes up to 20%.
So I think using OpenGL to render the image buffer will be better. The calling procedure is:
grab image in a QThread -> signal to the GUI -> the GUI gets the QImage and calls setBackground() -> QOpenGLWidget's paintGL() renders the QImage. So far the class works fine to render QImage(pbuffer, width, height), but I don't think always doing texture destroy->create is a good approach. Could we use a framebuffer? How?
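To make the destroy->create concern concrete, here is the kind of reuse I have in mind: keep one texture alive, re-upload pixels each frame, and reallocate only when the frame size changes. This is only a sketch under my assumptions about QOpenGLTexture's allocateStorage()/setData() overloads, not tested at 200 fps:

```cpp
// Sketch: stream new pixels into existing texture storage with
// setData(PixelFormat, PixelType, data); rebuild only on size change.
void MyGLWidget::setBackground(QImage image)
{
    m_image = image;
    makeCurrent(); // texture uploads need a current GL context

    // Normalize to a known pixel layout for the raw upload below
    QImage frame = image.convertToFormat(QImage::Format_RGBA8888);

    if (!texture || texture->width() != frame.width()
                 || texture->height() != frame.height()) {
        // First frame or size changed: (re)allocate storage once
        delete texture;
        texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
        texture->setSize(frame.width(), frame.height());
        texture->setFormat(QOpenGLTexture::RGBA8_UNorm);
        texture->setMinificationFilter(QOpenGLTexture::Nearest);
        texture->setMagnificationFilter(QOpenGLTexture::Linear);
        texture->allocateStorage();
    }
    // Per-frame path: upload pixels into the already-allocated storage
    texture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8,
                     frame.constBits());
    doneCurrent();
    update();
}
```

Would this be the right direction, or is a framebuffer object still preferable?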
Thank you. -
If the frames come in at up to 200 fps and there are performance issues, I would just skip some frames, because you cannot perceive that rate anyway.
About the OpenGL widget: I don't think that the OpenGL widget will take less CPU than just rendering a simple image.
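For example, frames can be dropped in the grabbing thread with a small time-based throttle before any signal is emitted to the GUI. A minimal sketch in plain C++ (the class name and interface are illustrative, not from any library):

```cpp
#include <chrono>

// Accepts at most one frame per interval; all other frames are dropped.
class FrameThrottle {
public:
    explicit FrameThrottle(int intervalMs) : interval_(intervalMs) {}

    // Returns true if this frame should be rendered, false to drop it.
    bool accept() {
        using clock = std::chrono::steady_clock;
        const auto now = clock::now();
        if (!started_ || now - last_ >= interval_) {
            last_ = now;     // remember when we last accepted a frame
            started_ = true;
            return true;
        }
        return false;
    }

private:
    std::chrono::milliseconds interval_;
    std::chrono::steady_clock::time_point last_{};
    bool started_ = false;
};
```

With a 200 fps source and a ~60 Hz display, a 16 ms interval drops roughly two out of every three frames before they ever reach the GUI thread.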
-
Besides the good point of @gde23, does the manufacturer of the webcam provide any SDK to work at that rate?
Note that QtMultimedia provides QCamera and QCameraViewfinder, which you might want to check.
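For reference, the QtMultimedia route looks roughly like this (a Qt 5 sketch; whether the backend can sustain 200 fps depends on the driver and platform plugin):

```cpp
#include <QApplication>
#include <QCamera>
#include <QCameraViewfinder>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QCameraViewfinder viewfinder; // widget that displays the camera stream
    QCamera camera;               // default camera device
    camera.setViewfinder(&viewfinder);

    viewfinder.show();
    camera.start();               // begins streaming into the viewfinder

    return app.exec();
}
```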