
Create a dynamic QOpenGLTexture from external device data

  • I'm trying to render a camera stream. The problem is that the camera isn't recognized, so all the usual Qt machinery is useless.
    What I have is a void* data pointer and the pixel format of the data gathered there.
    The solution I've come up with is to follow this tutorial here and use a QOpenGLTexture filled with my custom video data from the camera stream.

    Rendering a frame looks the same as in the tutorial.
    My only change: at the start of paint() I call updateTexture.

    The two important functions are below. updateTexture also has an overloaded version that gets the data from the device and calls the "main" updateTexture.

    void RsFrameProvider::updateTexture(const void* data, int width, int height, rs::format format, int stride = 0) {
        stride = stride == 0 ? width : stride;
        QOpenGLTexture::PixelType pixType;
        QOpenGLTexture::PixelFormat pixFormat;
        QOpenGLTexture::TextureFormat texFormat;
        switch (format) {
        case rs::format::any:
            throw std::runtime_error("not a valid format");
        case rs::format::rgb8: case rs::format::bgr8: // Display both RGB and BGR by interpreting them as RGB, to show the flipped byte ordering. Obviously, GL_BGR could be used on OpenGL 1.2+
            texFormat = QOpenGLTexture::RGBFormat;
            pixFormat = QOpenGLTexture::RGB;
            pixType = QOpenGLTexture::UInt8;
            std::cerr << "Texture data set.";
            //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
            break;
        default: {
            std::stringstream ss;
            ss << rs_format_to_string((rs_format) format) << " RS: Pixel format is not supported";
            throw std::runtime_error(ss.str().c_str());
        }
        }
        if (!texture->isStorageAllocated()) {
            texture->setSize(width, height);
            texture->allocateStorage(pixFormat, pixType);
        }
        texture->setData(pixFormat, pixType, data);
        std::cerr << "VALUE: " << ((uchar*) data);
        // Texture updated, ready to bind
        std::cerr << "Texture ready to bind, FPS: " << fps << " pixFormat " << pixFormat << std::endl;
    }
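    One thing worth noting: by default, QOpenGLTexture::setData() (like glTexImage2D with the default unpack alignment) expects tightly packed rows. If the device ever reports a stride larger than width * 3 bytes, the rows would need repacking before upload (or QOpenGLPixelTransferOptions could set the row length). A minimal, Qt-free sketch of such repacking; the function name packRows is mine, not part of any library:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copy an rgb8 image whose rows are `strideBytes` apart into a tightly
// packed buffer of width * height * 3 bytes. `strideBytes` must be
// >= width * 3; padding bytes at the end of each source row are dropped.
std::vector<uint8_t> packRows(const uint8_t *src, int width, int height,
                              int strideBytes) {
    const int rowBytes = width * 3; // 3 channels, 8 bits each
    std::vector<uint8_t> packed(static_cast<size_t>(rowBytes) * height);
    for (int y = 0; y < height; ++y)
        std::memcpy(packed.data() + static_cast<size_t>(y) * rowBytes,
                    src + static_cast<size_t>(y) * strideBytes,
                    rowBytes);
    return packed;
}
```

    When the stride equals width * 3 this is just a straight copy and can be skipped entirely, which is why the stride default of 0 (meaning "tightly packed") in updateTexture works for the common case.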


    I'm sure that a non-zero texture is produced each frame, since I can print the data from the device to the cerr stream.
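    Printing raw bytes to std::cerr is hard to eyeball; a tiny helper makes the per-frame check explicit (a sketch, no Qt involved; the function name frameHasData is mine):

```cpp
#include <cstddef>
#include <cstdint>

// Returns true if at least one byte in the frame is non-zero,
// i.e. the frame is not an entirely black image.
bool frameHasData(const void *data, std::size_t sizeBytes) {
    const uint8_t *bytes = static_cast<const uint8_t *>(data);
    for (std::size_t i = 0; i < sizeBytes; ++i)
        if (bytes[i] != 0)
            return true;
    return false;
}
```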

    Paint function:

    void Renderer::paint()
    {
        // updateTexture(...) is called here first, as described above
        if (!m_program) {
            m_program = new QOpenGLShaderProgram();
            m_program->addShaderFromSourceCode(QOpenGLShader::Vertex,
                                               "attribute highp vec4 vertices;"
                                               "varying highp vec2 coords;"
                                               "attribute highp vec4 texCoord;"
                                               "varying highp vec4 texc;"
                                               "void main() {"
                                               "    gl_Position = vertices;"
                                               "    coords = vertices.xy;"
                                               "    texc = texCoord;"
                                               "}");
            m_program->addShaderFromSourceCode(QOpenGLShader::Fragment,
                                               "uniform lowp float t;"
                                               "varying highp vec2 coords;"
                                               "varying highp vec4 texc;"
                                               "uniform sampler2D tex;"
                                               "void main() {"
                                               "    lowp float i = 1. - (pow(abs(coords.x), 4.) + pow(abs(coords.y), 4.));"
                                               "    i = smoothstep(t - 0.8, t + 0.8, i);"
                                               "    i = floor(i * 20.) / 20.;"
                                               "    highp vec3 color = texture2D(tex, texc.xy).rgb;"
                                               "    gl_FragColor = vec4(color, 1.0);"
                                               "}");
            m_program->bindAttributeLocation("vertices", 0);
            m_program->bindAttributeLocation("texCoord", 1);
            m_program->link();
        }

        m_program->bind();
        m_program->enableAttributeArray(0);
        m_program->enableAttributeArray(1);

        float values[] = {
            -1, -1,
            1, -1,
            -1, 1,
            1, 1
        };
        float texCoords[] = {
            0, 0,
            1, 0,
            0, 1,
            1, 1
        };
        m_program->setAttributeArray(0, GL_FLOAT, values, 2);
        m_program->setAttributeArray(1, GL_FLOAT, texCoords, 2);
        m_program->setUniformValue("t", (float) m_t);
        m_program->setUniformValue("tex", 0); // the texture is bound to unit 0 before drawing
        glViewport(0, 0, m_viewportSize.width(), m_viewportSize.height());
        glClearColor(0, 0, 0, 1);
        glClear(GL_COLOR_BUFFER_BIT);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // Not strictly needed for this example, but generally useful for when
        // mixing with raw OpenGL.
        m_program->disableAttributeArray(0);
        m_program->disableAttributeArray(1);
        m_program->release();
    }


    All I get is a black screen.
    If I use a non-dynamic texture created from an image with QOpenGLTexture(url ...), bound the same way as the dynamic one, I can see the textured quad.

    Is this even the correct way to approach video rendering and a dynamic texture (a texture that is updated every frame)?

    Any suggestions as to what is wrong?

  • Lifetime Qt Champion

    Hi and welcome to devnet,

    What type of camera is it? Where are you getting these images from?

  • The camera is Intel RealSense SR300 RGBD.

    Here is the library that handles I/O with the camera.

    Basically, the library provides the end user with a synchronous call:

    const void*  rs_get_frame_data(rs::stream stream, rs::format format, int width, int height ... )

    That gives the user a pointer to the data gathered for the last frame.
    Since the camera is RGB+D, you have multiple streams of data with different internal pixel formats.

    You select the stream by providing the rs::stream stream parameter, and the pixel format by providing the rs::format format parameter.
    One can also ensure that the current thread's execution is blocked until the "next frame" is available by calling:

    dev->wait_for_frames();

    How fast the internal frame buffer changes is set during initialization of the camera device (but it's always 30 FPS, 60 FPS or 10 FPS).

    In my code I use the RGB color stream and the rgb8 pixel format (3 channels, 8 bits each, no compression).
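    Put together, the capture side described above would look roughly like this. This is only a sketch against the librealsense 1.x C++ API (rs.hpp), with minimal error handling, and the 640x480 resolution is an assumption on my part:

```cpp
#include <librealsense/rs.hpp>

int main() {
    rs::context ctx;
    if (ctx.get_device_count() == 0)
        return 1; // no camera attached
    rs::device * dev = ctx.get_device(0);

    // RGB color stream, rgb8 (3 x 8 bits), 30 FPS, as described above.
    dev->enable_stream(rs::stream::color, 640, 480, rs::format::rgb8, 30);
    dev->start();

    for (int i = 0; i < 60; ++i) {
        dev->wait_for_frames(); // blocks until the next frame is available
        const void * data = dev->get_frame_data(rs::stream::color);
        // hand `data` to something like updateTexture(data, 640, 480, rs::format::rgb8)
        (void) data;
    }
    return 0;
}
```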

    I've got the streaming semi-working by completely removing any dependence on QOpenGLTexture and using raw low-level OpenGL calls inside paint().
    This is a pathetic solution though.

    What is the elegant, proper way to get it working (one that involves using Qt for the actual work)?

  • Lifetime Qt Champion

    Like I wrote in the other thread, I usually write a backend that allows integrating the stuff into the QtMultimedia pipeline.

    In the most simplified way, it comes down to creating a QVideoBuffer with the proper data to pass to QtMultimedia.
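    For reference, the Qt 5 class in question is QAbstractVideoBuffer. A minimal wrapper around an externally owned frame might look like the sketch below; this is not from the thread, the class name RsVideoBuffer is mine, and it assumes the capture library keeps the data alive while the frame is in use:

```cpp
#include <QAbstractVideoBuffer>

// Wraps frame memory owned by the capture library; Qt only maps/unmaps it.
class RsVideoBuffer : public QAbstractVideoBuffer {
public:
    RsVideoBuffer(uchar *data, int bytesPerLine, int numBytes)
        : QAbstractVideoBuffer(NoHandle),
          m_data(data), m_bytesPerLine(bytesPerLine), m_numBytes(numBytes) {}

    MapMode mapMode() const override { return ReadOnly; }

    uchar *map(MapMode, int *numBytes, int *bytesPerLine) override {
        if (numBytes) *numBytes = m_numBytes;
        if (bytesPerLine) *bytesPerLine = m_bytesPerLine;
        return m_data;
    }

    void unmap() override {}

private:
    uchar *m_data;
    int m_bytesPerLine;
    int m_numBytes;
};
```

    A QVideoFrame can then be built from it, e.g. QVideoFrame frame(new RsVideoBuffer(data, w * 3, w * h * 3), QSize(w, h), QVideoFrame::Format_RGB24); the QVideoFrame takes ownership of the buffer object.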

  • Thanks. What exactly do you mean by QtMultimedia? From what I know, this is a global object.
    Do I need to implement any other class than QVideoBuffer?
    To what object, of what class, should I pass my QVideoBuffer instance after I fill it with data?


  • Lifetime Qt Champion

    I mean the QtMultimedia module, and more precisely what's behind the QCamera class.

  • Behind QCamera (in its inheritance chain) there's QMediaObject; I don't see any properties or methods that deal with actual frame data or allow me to change the data source QCamera is using.
    I don't see any QVideoBuffer object associated with the QCamera class.

    The whole frame data flow is really obscure and unclear to me.
    The objects named in a way that suggests they provide a frame stream (i.e. QCamera) don't seem to provide any access to frames at all.

    The OpenGL approach is more straightforward.
    I admit I've only worked with Qt for four days, and I am not very fond of it.

  • Lifetime Qt Champion

    You should take a look at the QtMultimedia plugins sources.

  • I looked at them and I see a dozen different classes just for the camera.

    I've got the feed from the camera with 150 lines of code using raw GL and the Qt scene graph.
    Unfortunately I don't have enough time to become a Qt developer and read the whole QtMultimedia codebase to do it the way you suggest.

    Thanks anyway.

  • Lifetime Qt Champion

    While I completely understand your lack of time, I'd like to point out that you don't need to know the whole QtMultimedia codebase to get the hang of the QCamera backends.

    In any case, I'm glad you found a working solution and since you have it running now, please mark the thread as solved using the "Topic Tools" button so that other forum users may know a solution has been found :)

    Happy coding ! :)

  • @SGaist said in Create a dynamic QOpenGLTexture from external device data:

    While I completely understand your lack of time, I'd like to point out that you don't need to know the whole QtMultimedia codebase to get the hang of the QCamera backends.

    This is most likely true. However, you're able to say it only because you already know how QtMultimedia works, and therefore you're able to pick the most efficient learning/reading route for the QCamera backend part.
    Without your knowledge, the learning experience would be closer to a "random walk over seemingly related files" :)

    Marked as solved.

  • Lifetime Qt Champion

    The first time I wrote a QCamera backend I didn't know much about the internals of QtMultimedia (and I still don't know all the details of every aspect of the module).

    I took a look at several of the existing QCamera-related plugins that did at least part of what I was interested in, and then applied the same concepts. It was not a random walk; looking at the sources provided a pretty quick insight into how I could write my own.
