
Create a dynamic QOpenGLTexture from external device data

Solved · General and Desktop
12 Posts · 2 Posters · 4.6k Views
    Pietrko
    wrote on last edited by Pietrko
    #1

    I'm trying to render a camera stream; the problem is that the camera isn't recognized, so all the Qt machinery is useless.
    What I have is a void* data pointer and the pixel format of the data gathered there.
    The solution I've come up with is to follow this tutorial here and use a QOpenGLTexture filled with my custom video data from the camera stream.

    Rendering a frame looks the same as in the tutorial.
    My only change: at the start of the paint() function I call updateTexture.

    The two important functions are below. updateTexture also has an overloaded version that gets the data from the device and calls the "main" updateTexture.

    void RsFrameProvider::updateTexture(const void* data, int width, int height, rs::format format, int stride = 0) {
        stride = stride == 0 ? width : stride;

        QOpenGLTexture::PixelType pixType;
        QOpenGLTexture::PixelFormat pixFormat;
        QOpenGLTexture::TextureFormat texFormat;

        switch (format)
        {
            case rs::format::any:
                throw std::runtime_error("not a valid format");
            case rs::format::rgb8: case rs::format::bgr8: // Display both RGB and BGR by interpreting them as RGB, to show the flipped byte ordering. GL_BGR could be used on OpenGL 1.2+
                texFormat = QOpenGLTexture::RGBFormat;
                pixFormat = QOpenGLTexture::RGB;
                pixType = QOpenGLTexture::UInt8;
                std::cerr << "Texture data set.";
                break;

            default:
                {
                    std::stringstream ss;
                    ss << rs_format_to_string((rs_format)format) << " RS: Pixel format is not supported";
                    throw std::runtime_error(ss.str());
                }
        }

        if (!texture->isStorageAllocated()) {
            texture->setSize(width, height);
            texture->setFormat(texFormat);
            texture->allocateStorage(pixFormat, pixType);
        }
        texture->setData(pixFormat, pixType, data);

        // Print only the first byte; data is raw pixels, not a null-terminated string.
        std::cerr << "VALUE: " << (int)((const uchar*)data)[0];
        // Texture updated, ready to bind
        std::cerr << "Texture ready to bind, FPS: " << fps << " pixFormat " << pixFormat << std::endl;
    }

    I'm sure a non-empty frame is produced each time, since I can print the device data to the cerr stream.

    Paint function:

    void Renderer::paint()
    {
        frameProvider.updateTexture();
        frameProvider.texture->bind(0);

        if (!m_program) {
            initializeOpenGLFunctions();

            m_program = new QOpenGLShaderProgram();
            m_program->addShaderFromSourceCode(QOpenGLShader::Vertex,
                                               "attribute highp vec4 vertices;"
                                               "varying highp vec2 coords;"
                                               "attribute highp vec4 texCoord;"
                                               "varying highp vec4 texc;"
                                               "void main() {"
                                               "    gl_Position = vertices;"
                                               "    coords = vertices.xy;"
                                               "    texc = texCoord;"
                                               "}");
            m_program->addShaderFromSourceCode(QOpenGLShader::Fragment,
                                               "uniform lowp float t;"
                                               "varying highp vec2 coords;"
                                               "varying highp vec4 texc;"
                                               "uniform sampler2D tex;"
                                               "void main() {"
                                               "    lowp float i = 1. - (pow(abs(coords.x), 4.) + pow(abs(coords.y), 4.));"
                                               "    i = smoothstep(t - 0.8, t + 0.8, i);"
                                               "    i = floor(i * 20.) / 20.;"
                                               "    highp vec3 color = texture2D(tex, texc.xy).rgb;"
                                               "    gl_FragColor = vec4(color, 1.0);"
                                               "}");

            m_program->bindAttributeLocation("vertices", 0);
            m_program->bindAttributeLocation("texCoord", 1);

            m_program->link();
        }
        m_program->bind();

        m_program->enableAttributeArray(0);
        m_program->enableAttributeArray(1);
        float values[] = {
            -1, -1,
            1, -1,
            -1, 1,
            1, 1
        };
        float texCoords[] = {
            0, 0,
            1, 0,
            0, 1,
            1, 1
        };
        m_program->setAttributeArray(0, GL_FLOAT, values, 2);
        m_program->setAttributeArray(1, GL_FLOAT, texCoords, 2);

        m_program->setUniformValue("t", (float) m_t);
        m_program->setUniformValue("tex", 0);

        glViewport(0, 0, m_viewportSize.width(), m_viewportSize.height());

        glDisable(GL_DEPTH_TEST);

        glClearColor(0, 0, 0, 1);
        glClear(GL_COLOR_BUFFER_BIT);

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        m_program->disableAttributeArray(0);
        m_program->disableAttributeArray(1);

        frameProvider.texture->release();
        m_program->release();

        // Not strictly needed for this example, but generally useful when
        // mixing with raw OpenGL.
        m_window->resetOpenGLState();
    }

    All I get is a black screen.
    If I use a non-dynamic texture created from an image with QOpenGLTexture(url ...), bound the same way as the dynamic one, I can see the textured quad.

    Is this even the correct way to approach video rendering and a dynamic texture (a texture that is updated every frame)?

    Any suggestions as to what is wrong?

      SGaist
      Lifetime Qt Champion
      wrote on last edited by
      #2

      Hi and welcome to devnet,

      What type of camera is it? Where are you getting these images from?

      Interested in AI ? www.idiap.ch
      Please read the Qt Code of Conduct - https://forum.qt.io/topic/113070/qt-code-of-conduct

        Pietrko
        wrote on last edited by Pietrko
        #3

        The camera is an Intel RealSense SR300 RGBD.

        Here is the library that handles I/O with the camera and here is the
        API.

        Basically the library provides the end user with a synchronous call:

        const void* rs_get_frame_data(rs::stream stream, rs::format format, int width, int height ... )

        that gives the user a pointer to the data gathered for the last frame.
        Since the camera is RGB+D, you have multiple streams of data with different internal pixel formats.

        You select the stream via the rs::stream stream parameter and the pixel format via the rs::format format parameter.
        One can also block the current thread until the next frame is available by calling:

        rs_wait_for_frames(...)

        How fast the internal frame buffer changes is set during initialization of the camera device (but it's always 30, 60 or 10 FPS).

        In my code I use the RGB color stream and the rgb8 pixel format (3 channels, 8 bits per channel, no compression).

        EDIT:
        I've got the streaming semi-working by completely removing any dependence on QOpenGLTexture and using raw low-level OpenGL calls inside paint().
        This is a pathetic solution, though.

        What is the elegant, proper way to get it working (one that involves using Qt for the actual work)?

          SGaist
          Lifetime Qt Champion
          wrote on last edited by
          #4

          Like I wrote in the other thread, I usually write a backend that integrates the stuff into the QtMultimedia pipeline.

          In the most simplified form, it comes down to creating a QVideoBuffer with the proper data to pass to QtMultimedia.


            Pietrko
            wrote on last edited by Pietrko
            #5

            Thanks. What exactly do you mean by QtMultimedia? From what I know, it is a global object.
            Do I need to implement any class other than QVideoBuffer?
            To what object, of what class, should I pass my QVideoBuffer instance after I fill it with data?

            http://stackoverflow.com/questions/43854589/custom-source-property-for-videooutput-qml

              SGaist
              Lifetime Qt Champion
              wrote on last edited by
              #6

              I mean the QtMultimedia module and, more precisely, what's behind the QCamera class.


                Pietrko
                wrote on last edited by Pietrko
                #7

                Behind the QCamera class (in the inheritance chain) there's QMediaObject; I don't see any properties or methods that deal with actual frame data or allow me to change the data source QCamera is using.
                I don't see any QVideoBuffer object associated with the QCamera class.

                The whole frame data flow is really obscure and unclear to me.
                The objects named in a way that suggests they provide a frame stream (i.e. QCamera) don't seem to provide any access to frames at all.

                The OpenGL approach is more straightforward.
                I admit I've only worked with Qt for four days, and I am not very fond of it.

                  SGaist
                  Lifetime Qt Champion
                  wrote on last edited by
                  #8

                  You should take a look at the QtMultimedia plugins sources.


                    Pietrko
                    wrote on last edited by
                    #9

                    I looked at them, and I see a dozen different classes just for the camera.

                    I've got the feed from the camera working with 150 lines of code using raw GL and the Qt scene graph.
                    Unfortunately I don't have enough time to become a Qt developer and read the whole QtMultimedia codebase to do it the way you suggest.

                    Thanks anyway.

                      SGaist
                      Lifetime Qt Champion
                      wrote on last edited by
                      #10

                      While I completely understand your lack of time, I'd like to point out that you don't need to know the whole QtMultimedia codebase to get the hang of the QCamera backends.

                      In any case, I'm glad you found a working solution and since you have it running now, please mark the thread as solved using the "Topic Tools" button so that other forum users may know a solution has been found :)

                      Happy coding ! :)



                        Pietrko
                        wrote on last edited by Pietrko
                        #11

                        @SGaist said in Create a dynamic QOpenGLTexture from external device data:

                        While I completely understand your lack of time, I'd like to point out that you don't need to know the whole QtMultimedia codebase to get the hang of the QCamera backends.

                        This is most likely true; however, you're able to say that only because you already know how QtMultimedia works, and can therefore pick the most efficient learning/reading route for the QCamera backend. Without that knowledge the learning experience would be closer to a "random walk over seemingly related files" :)

                        Marked as solved.

                          SGaist
                          Lifetime Qt Champion
                          wrote on last edited by
                          #12

                          The first time I wrote a QCamera backend I didn't know much about the internals of QtMultimedia (and I still don't know every detail of the module).

                          I took a look at several of the existing QCamera-related plugins that did at least part of what I was interested in, and then applied the same concepts. It was not a random walk; looking at the sources gave me pretty quick insight into how to write my own.


