Qt 5.12 - QVideoFrame to QImage conversion (black image)

Unsolved · Mobile and Embedded
6 Posts · 3 Posters · 1.4k Views
zubozrout
#1

Hello,
I've spent some time trying to get data from QCamera into QML's VideoOutput, combining the approaches described here:

    • https://stackoverflow.com/questions/62381912/operate-qcamera-with-qml
    • https://stackoverflow.com/a/40513362/1642887 (as QVideoProbe doesn't seem to work on Ubuntu Touch)

That said, after some trial and error I started seeing the images from the camera displayed in QML.

The next thing I wanted was to actually be able to touch the camera output I am getting as QVideoFrames.

In order to do that I need to convert them to QImage, but with my level of skill, and even more so with the older version of Qt that Ubuntu Touch ships, this is quite painful.

I started trying to convert the image myself, but found out that in order to call QVideoFrame::map, a QOpenGLContext is needed. It took me a while to figure out how to create one: https://github.com/qt/qtmultimedia/blob/5.12/src/plugins/avfoundation/mediaplayer/avfvideoframerenderer.mm#L123

and even more time to figure out that the code can't live in a QObject (QML plugin), as that crashes the app right on startup; it has to go in main.cpp. With that, this part works.

By the way, the map implementation Ubuntu Touch has can be found here:
https://gitlab.com/ubports/development/core/qtubuntu-camera/-/blob/main/src/aalvideorenderercontrol.cpp. Since it takes the QAbstractVideoBuffer::GLTextureHandle path, which is also the handleType of the QVideoFrames I am getting, I presume that is the correct implementation for Ubuntu Touch, and also the reason the QML Camera works just fine.

So I looked around and found the function qt_imageFromVideoFrame, which, however, is not something I can include in my app, as its private headers have not been available in distros for ages:

    https://github.com/qt/qtmultimedia/blob/5.12/src/multimedia/video/qvideoframe.cpp#L1094
    https://bugs.launchpad.net/ubuntu/+source/qtvideo-node/+bug/1267818

Still, it is not complicated to clone most of the function's code into the app and try to use it there, which is what I've done; my frames don't seem to need any additional conversion. On the contrary, I believe this should work exactly as is:

    QImage(frame.bits(), frame.width(), frame.height(), imageFormat).copy();
    

I ended up trying a lot of random approaches I stumbled upon, but there are only a few anyway. My image is not null, and the frame I convert the image back to says it is valid (I need to convert back and forth to then show it in QML's VideoOutput again). Yet the image I am seeing is all black, or black with some artifacts if I take the slightly different approach suggested here:
    https://forum.qt.io/topic/128837/qvideoframe-map-fails-on-ubuntu-touch

If I fill the image with some color, I can see the area corresponding to the frame filled in with the color of my choice (the dimensions are correct, matching the QVideoFrame I am receiving from the camera).

With the code seemingly not throwing any further errors or warnings, and with the little info I was able to find online about this, I am unfortunately lost at the moment and not sure how to proceed.

    Any idea on how to debug this further or what to try next would be very much appreciated. Thank you so much!

TomZ
#2

@zubozrout check out how I did it here:

https://codeberg.org/Flowee/pay/src/tag/2024.01.3/src/CameraController.cpp#L419

The linked method does nothing but convert from QVideoFrame to QImage.

Ronel_qtmaster
#3

@zubozrout it is better to implement your own QAbstractVideoFilter and pass it to the QML Camera.

After that, its associated QVideoFilterRunnable has a function run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags), where input represents the current video frame.

You can now take that input, create a cloned QVideoFrame from it, and then map the clone in order to read the data from the video frame:

QVideoFrame cloneFrame(*input);
cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
QImage img = QVideoFrameToQImage(cloneFrame);

QVideoFrameToQImage is a small helper for converting video frames to images. Check the code below.

QVideoFrameToImage.cpp

#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QQmlContext>
#include "private/qvideoframe_p.h"

QImage QVideoFrameToQImage( const QVideoFrame& videoFrame )
{
    if ( videoFrame.handleType() == QAbstractVideoBuffer::NoHandle )
    {
        QImage image = qt_imageFromVideoFrame( videoFrame );
        if ( image.isNull() )
            return QImage();

        if ( image.format() != QImage::Format_RGB32 )
            image = image.convertToFormat( QImage::Format_RGB32 );

        return image;
    }

    if ( videoFrame.handleType() == QAbstractVideoBuffer::GLTextureHandle )
    {
        QImage image( videoFrame.width(), videoFrame.height(), QImage::Format_ARGB32 );
        GLuint textureId = static_cast<GLuint>( videoFrame.handle().toInt() );
        QOpenGLContext* ctx = QOpenGLContext::currentContext();
        QOpenGLFunctions* f = ctx->functions();

        // Attach the frame's texture to a temporary FBO and read the pixels back.
        GLuint fbo;
        f->glGenFramebuffers( 1, &fbo );
        GLint prevFbo;
        f->glGetIntegerv( GL_FRAMEBUFFER_BINDING, &prevFbo );
        f->glBindFramebuffer( GL_FRAMEBUFFER, fbo );
        f->glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0 );
        f->glReadPixels( 0, 0, videoFrame.width(), videoFrame.height(), GL_RGBA, GL_UNSIGNED_BYTE, image.bits() );
        f->glBindFramebuffer( GL_FRAMEBUFFER, static_cast<GLuint>( prevFbo ) );
        f->glDeleteFramebuffers( 1, &fbo );  // avoid leaking one FBO per frame
        return image.rgbSwapped();
    }

    return QImage();
}

QVideoFrameToImage.h

#ifndef QVIDEOFRAMETOIMAGE_H
#define QVIDEOFRAMETOIMAGE_H

#include <QVideoFrame>
#include <QImage>

QImage QVideoFrameToQImage( const QVideoFrame& videoFrame );

#endif

        Hope it helps.

zubozrout
#4

          Thank you both so much for your quick replies. This is really appreciated.

I should probably have done this earlier, but here is what my current code looks like: adapter.cpp

The above points to the processFrame function, through which QVideoFrames are supposed to pass before they get to the VideoOutput in QML. That part is all fine.

But as you can see, I've also tried various things to get the QImage working; unfortunately, nothing has enabled me to work with the actual image so far.

I think I am really close to what you've suggested, @Ronel_qtmaster, but even if I simply reuse the code 1:1 as you have outlined it, the image is still an all-black canvas :(. As mentioned earlier, the handle type is QAbstractVideoBuffer::GLTextureHandle, so it must be going that way. And the frame format says Format_RGB32.

@TomZ This looks like it may rely on newer Qt that I am unable to use? For instance QVideoFrame::toImage(), which would be awesome to have, or even this one:
https://doc.qt.io/qt-5/qvideoframe.html#image
but that is still 5.15, while I am unfortunately stuck with 5.12 :'(

Ronel_qtmaster
#5

@zubozrout You are welcome. I would just like to point out that I am using that same code and it is working fine here. Maybe you missed something in your implementation?

Also, the filter class should be added on the QML side as well, through the filters property of VideoOutput. Moreover, it is important, when a frame is presented, to map it and read its content from the buffer before converting it to an image. That is probably why you always get a black image.

zubozrout
#6

OK, a very interesting thing is happening that I just noticed, yet I don't have a clue why :(.

It is possible the conversion might be working, I don't know, but whenever I feed QML's VideoOutput with the raw QVideoFrames I get from the camera, it presumably processes every frame, as I am getting a lot of console output (if I have such output in the code). But the minute I start feeding it with an image, I only get the first frame, and that one might be black/broken, perhaps because the camera is not yet fully initialised?

Here is, I believe, one of the places where I can check the frames:
              https://gitlab.com/zubozrout/ut-camera/-/blob/35dca02e8471caf53d1d011ff79ecd6d5ac2f948/plugins/camera/camera.cpp#L93

              connect(this, &PoorMansProbe::videoFrameProbed, this, [=](const QVideoFrame &frame) {
                  std::cout << "New frame" << std::endl;
                  this->setFrame(frame);
              });
              

So perhaps it is some issue with the custom probe: https://stackoverflow.com/questions/25438843/how-to-make-qvideoprobe-work/40513362#40513362 but I have no idea at the moment :(
