
QAbstractVideoBuffer::ReadWrite is not recommended?



  • Hi,

    I was trying this example from here:
    https://github.com/theshadowx/Qt_OpenCV

    It basically gets a frame from the camera and applies a simple filtering effect to it using OpenCV. Finally, it sends a QImage back for display. It uses the QAbstractVideoBuffer class to manipulate the frames.

    input->map(QAbstractVideoBuffer::ReadWrite);

    this->deleteColorComponentFromYUV(input);

    cv::Mat mat(input->height(), input->width(), CV_8U, input->bits());
    cv::GaussianBlur(mat, mat, cv::Size(gaussianBlurSize, gaussianBlurSize), gaussianBlurCoef, gaussianBlurCoef);

    input->unmap();
    

    On Ubuntu 20.04, GStreamer complains when doing the above:
    GStreamer-CRITICAL **: 09:15:21.068: write map requested on non-writable buffer

    I am wondering whether this is the recommended way to apply filters to the camera feed. I see that the latest Qt 5.15 multimedia examples use shaders instead, and that seems to work on Linux. But I am interested in grabbing a QML camera frame, sending it to C++, doing some computations and sending a QImage back. Any suggestions?
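
    For reference, the plumbing around that snippet looks roughly like this (my own sketch of the pattern from the blog post; the class and module names are placeholders, not the ones used in the GitHub example):

    #include <QAbstractVideoFilter>
    #include <QVideoFilterRunnable>

    // The runnable does the per-frame work; the snippet above lives in its run() method.
    class CVFilterRunnable : public QVideoFilterRunnable
    {
    public:
        QVideoFrame run(QVideoFrame *input,
                        const QVideoSurfaceFormat &surfaceFormat,
                        RunFlags flags) override;
    };

    // The QML-visible filter type that hands out the runnable.
    class CVFilter : public QAbstractVideoFilter
    {
        Q_OBJECT
    public:
        QVideoFilterRunnable *createFilterRunnable() override { return new CVFilterRunnable; }
    };

    // main.cpp:  qmlRegisterType<CVFilter>("CVFilters", 1, 0, "CVFilter");
    // QML:       VideoOutput { source: camera; filters: [ cvFilter ] }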

    Thanks
    Suvir


  • Lifetime Qt Champion

    Hi,

    What do you get if you use the version provided by your distribution?



  • Yes, I tried the Ubuntu-supplied Qt 5.12. Same error:

    
    (CannyQml:6010): GStreamer-CRITICAL **: 10:27:31.871: write map requested on non-writable buffer
    

  • Lifetime Qt Champion

    From the looks of it, you should create a new video frame and store the computation result in it before returning it.
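
    For example, inside QVideoFilterRunnable::run(), something along these lines (assuming the Qt 5 QVideoFrame constructor that takes a byte count, size, bytes per line and pixel format):

    // Allocate a writable frame with the same geometry and pixel format as the
    // (already mapped) input, fill it, and return it instead of *input.
    QVideoFrame output(input->mappedBytes(),
                       QSize(input->width(), input->height()),
                       input->bytesPerLine(),
                       input->pixelFormat());
    output.map(QAbstractVideoBuffer::WriteOnly);
    // ... write the result into output.bits(), unmap both frames, then return output.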



  • @SGaist From the official blog, it says: "// Convert the input into a suitable OpenCV image format, then run e.g. cv::CascadeClassifier,
    // and finally store the list of rectangles into a QObject exposing a 'rects' property."

    The GitHub example that I shared (link here: https://github.com/theshadowx/Qt_OpenCV) is a pretty straightforward implementation of the official blog post here: https://www.qt.io/blog/2015/03/20/introducing-video-filters-in-qt-multimedia, without any mumbo jumbo.

    Could it be that something is broken in GStreamer? Maybe not many have used Ubuntu 20.04 with the QAbstractVideoSurface class... I do not have an older Ubuntu to try it on, but I have a feeling that it might work out of the box on Ubuntu 16.04.


  • Lifetime Qt Champion

    At no point does the blog post modify the original frame.

    Quoted from the blog: "The filter can provide a new video frame, which is used in place of the original, calculate some results or both."

    The documentation of QVideoFilterRunnable::run states: "Implementations that do not modify the video frame can simply return input."

    You want to modify the frame, therefore you have to create a new one. In your case, create the new frame with the right size and pixel format, make its data the destination of the first OpenCV transformation, and continue in place after that.
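
    Putting that together with the original snippet, run() could look roughly like this (a sketch only, not tested; gaussianBlurSize and gaussianBlurCoef are the members from the example, and it assumes a planar YUV input whose first plane is 8-bit luma):

    QVideoFrame CVFilterRunnable::run(QVideoFrame *input,
                                      const QVideoSurfaceFormat & /*surfaceFormat*/,
                                      RunFlags /*flags*/)
    {
        // Map read-only: the GStreamer-backed buffer is not writable, which is what
        // triggers the "write map requested on non-writable buffer" warning.
        if (!input->map(QAbstractVideoBuffer::ReadOnly))
            return *input;

        // Allocate a new, writable frame with the same geometry and pixel format.
        QVideoFrame output(input->mappedBytes(),
                           QSize(input->width(), input->height()),
                           input->bytesPerLine(),
                           input->pixelFormat());
        output.map(QAbstractVideoBuffer::WriteOnly);

        // Wrap the luma planes of both frames in cv::Mat headers (no copy). The chroma
        // planes of 'output' would still need to be filled (e.g. with 128) or copied
        // from the input so the result displays correctly.
        cv::Mat src(input->height(), input->width(), CV_8U,
                    input->bits(), input->bytesPerLine());
        cv::Mat dst(output.height(), output.width(), CV_8U,
                    output.bits(), output.bytesPerLine());

        // The first OpenCV step writes into the new frame; later steps can run in place on dst.
        cv::GaussianBlur(src, dst,
                         cv::Size(gaussianBlurSize, gaussianBlurSize),
                         gaussianBlurCoef, gaussianBlurCoef);

        output.unmap();
        input->unmap();
        return output;    // returned in place of the original frame
    }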

