QImage::Format_Invalid from MediaPlayer QVideoFrames

  • Hi,

    I have created a simple QML/C++ application that applies a simple filter to each frame provided by a macOS laptop camera. When my MyFilterRunnable::run() gets called, I get the QImage from the QVideoFrame passed to it, call invertPixels(), construct a new QVideoFrame from the inverted QImage, and return it. I have a VideoOutput with the camera as its source, and it works. Yay!

    However, I then created a MediaPlayer, set its source to an mp4 video file on my local disk, and changed the VideoOutput source to the MediaPlayer (instead of the Camera). It plays the original video without applying my filter. I can see that MyFilterRunnable::run() is indeed being called, because I print something there to prove it to myself. After further investigation, the QImage I get from the QVideoFrame in MyFilterRunnable::run() has the format QImage::Format_Invalid. It seems that passing this QImage back in a QVideoFrame is ultimately ignored (which makes sense), and the original frame is played instead.

    What am I doing wrong? I was expecting a valid QImage I could operate on. I'm using Qt 5.15.2.
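    For reference, a minimal sketch of the run() I described (the class name is from my project, and it assumes Qt 5.15, where QVideoFrame::image() exists):

    ```cpp
    #include <QVideoFilterRunnable>
    #include <QVideoFrame>
    #include <QImage>

    // Sketch of the camera-path filter: pull a QImage out of the frame,
    // invert it, and hand back a new frame wrapping the result.
    class MyFilterRunnable : public QVideoFilterRunnable
    {
    public:
        QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                        RunFlags flags) override
        {
            Q_UNUSED(surfaceFormat)
            Q_UNUSED(flags)
            QImage image = input->image();     // Qt 5.15+: maps and converts for us
            if (image.format() == QImage::Format_Invalid)
                return *input;                 // can't touch it; pass the frame through
            image.invertPixels();              // the "simple filter"
            return QVideoFrame(image);
        }
    };
    ```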

  • Lifetime Qt Champion

    Hi and welcome to devnet,

    You should check what QVideoFrame::handleType() reports. You might be getting something other than what you are expecting.
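    For example, something like this in your run() would show what you are actually receiving (a sketch; what handle() carries depends on the backend):

    ```cpp
    #include <QAbstractVideoBuffer>
    #include <QVideoFrame>
    #include <QDebug>
    #include <QtGui/qopengl.h>   // GLuint

    // Diagnostic: what kind of buffer did the filter actually get?
    void inspectFrame(QVideoFrame *input)
    {
        qDebug() << "handleType:" << input->handleType()
                 << "pixelFormat:" << input->pixelFormat();

        if (input->handleType() == QAbstractVideoBuffer::NoHandle) {
            // CPU-accessible pixels: map()/bits() or image() will work
        } else if (input->handleType() == QAbstractVideoBuffer::GLTextureHandle) {
            // handle() holds an OpenGL texture id, not pixel data
            GLuint textureId = input->handle().toUInt();
            Q_UNUSED(textureId)
        }
    }
    ```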

  • @SGaist Thanks. It would appear that the QImage is valid when QVideoFrame::handleType() is QAbstractVideoBuffer::NoHandle from the camera, which makes sense according to the Qt documentation.

    When playing a recorded video, I get QAbstractVideoBuffer::GLTextureHandle from QVideoFrame::handleType(), which means QVideoFrame::handle() returns an OpenGL texture ID. So I looked for an example of converting an OpenGL texture ID into an OpenCV cv::Mat, and came up with this:

    cv::Mat matImage = GLTextureToCvMat(input->handle().toUInt());

    With this definition of GLTextureToCvMat(GLuint) borrowed from here:

    cv::Mat GLTextureToCvMat(GLuint ogl_texture_id)
    {
          glBindTexture(GL_TEXTURE_2D, ogl_texture_id);
          GLint gl_texture_width = 0, gl_texture_height = 0;
          glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &gl_texture_width);
          glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &gl_texture_height);
          std::vector<unsigned char> gl_texture_bytes(gl_texture_width * gl_texture_height * 3);
          glGetTexImage(GL_TEXTURE_2D, 0 /* mipmap level */, GL_BGR, GL_UNSIGNED_BYTE, gl_texture_bytes.data());
          cv::Mat mat(gl_texture_height, gl_texture_width, CV_8UC3, gl_texture_bytes.data());
          return mat.clone(); // clone() so the Mat owns a copy of the pixel data
    }

    However, when I display matImage with OpenCV's cv::imshow(), it is mostly black with the wrong width and height and a few colored pixels. I think we're getting somewhere, but we're not there yet.

    I found a similar issue with a proposed solution that may not have actually addressed it. It's a shame Qt doesn't provide code to get you closer than a raw handle; somehow it knows how to turn the OpenGL texture into a video frame when rendering.

    And this example code may ultimately be the trick.
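    In case it helps, one way to get CPU-side pixels out of a GLTextureHandle frame is to attach the texture to a temporary FBO and read it back with glReadPixels. A sketch (the helper name is mine, and it assumes run() is called with a current OpenGL context, which appears to be the case for this handle type):

    ```cpp
    #include <QImage>
    #include <QOpenGLContext>
    #include <QOpenGLFunctions>

    // Sketch: read an OpenGL texture back into a QImage via a temporary FBO.
    QImage glTextureToQImage(GLuint textureId, int width, int height)
    {
        QOpenGLFunctions *f = QOpenGLContext::currentContext()->functions();

        GLuint fbo = 0;
        f->glGenFramebuffers(1, &fbo);
        f->glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        f->glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_TEXTURE_2D, textureId, 0);

        QImage image(width, height, QImage::Format_RGBA8888);
        f->glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, image.bits());

        f->glBindFramebuffer(GL_FRAMEBUFFER, 0);
        f->glDeleteFramebuffers(1, &fbo);
        return image.mirrored();   // OpenGL's origin is bottom-left
    }
    ```

    The width and height would come from input->width() and input->height() rather than from glGetTexLevelParameteriv.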


  • Lifetime Qt Champion

    There's a fragment shader that does a color space conversion if needed, e.g. NV21 to RGB, but that's all. Since it's already a texture, there's no need for other conversions.

  • @SGaist I copied the example into a project and had to make a few modifications: it was using QList<T>::resize(int) and QList<T>::data(), so I changed those to QVector<T> and got it to work. It embossed the video frames from both the camera and video playback. I've pulled those filters into my code and am adapting them to my task now, so I'm probably on my way with a working example being adapted to my needs. These examples need to be updated, or maybe I'm looking at old ones.

    Thanks again for pointing out the handle() issue or I’d still be wondering why it doesn’t work. The readme for the example does a decent job explaining why cameras produce QImages and playbacks often produce GLTextures. I don’t think I would have ever figured the conversion code out on my own. :-/

  • Lifetime Qt Champion

    Which examples are you referring to ?