How to use QAbstractVideoFilter to process camera frames



  • Hello

    I want to do image processing on every frame from the camera, so I searched Google and found a new class, QAbstractVideoFilter, introduced in Qt 5.5.

    But I can't find any examples, so the only thing I can do is try it myself using the Qt documentation.

    After trying it, the build reported that the function "finished()" doesn't exist...
    I have no idea how to declare this function as a signal so that I can emit it from the "run" function.

    Could anybody point me in the right direction?

    Here is the code I tried:

    //main(C++)

    #include <QApplication>
    #include <QQmlApplicationEngine>
    #include <QQmlEngine>
    #include <QtQml>
    #include <QtMultimedia/QAbstractVideoFilter>
    
    class MFRunnable: public QVideoFilterRunnable{
    public:
    	explicit MFRunnable(QAbstractVideoFilter *filter): QVideoFilterRunnable(){
    		m_filter = filter;
    	}
    
    	QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags){
    		QString* result;
    		*result = "123";
    		emit m_filter->finished(result);
    		return *input;
    
    	}
    private:
    	QAbstractVideoFilter* m_filter;
    
    };	
    class MF: public QAbstractVideoFilter{
    public:
    	QVideoFilterRunnable *createFilterRunnable(){return new MFRunnable(this);}
    
    signals:
    	void finished(QObject *result);
    };
    
    
    
    int main(int argc, char *argv[])
    {
    	qmlRegisterType<MF>("AVF", 1, 0, "MF");
    	QApplication app(argc, argv);
    
    	QQmlApplicationEngine engine;
    	engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    
    	return app.exec();
    }
    

    //main(QML)

    import QtQuick 2.4
    import QtQuick.Controls 1.3
    import QtQuick.Window 2.2
    import QtQuick.Dialogs 1.2
    import QtMultimedia 5.5
    import AVF 1.0
    
    
    ApplicationWindow {
    	title: qsTr("Hello World")
    	width: 640
    	height: 480
    	visible: true
    
    	Camera {
    		id: camera
    		exposure.exposureCompensation:  -1.0
    		exposure.exposureMode: Camera.ExposurePortrait
    
    	}
    	MF {
    		id:filter
    		onFinished: console.log("result of the computation: " + result)
    	}
    
    	VideoOutput {
    		source: camera
    		filters: [filter]
    		anchors.fill: parent
    		focus: visible
    	}
    }


  • Hi. What kind of image processing are you trying to achieve?



  • @Leonardo Thanks for your reply!

    I want to implement the ViBe algorithm and human skeletonization with OpenGL ES 2.0 (shader language). Because of the heavy computation these algorithms require, I must implement them using OpenGL ES 2.0.

    I know there is a convenient type in Qt Quick, ShaderEffect, but I can't use it because it is too restrictive for things like creating an external texture buffer for further computation. So I need to use OpenGL ES directly (from C++) in cooperation with QAbstractVideoFilter.


  • Lifetime Qt Champion

    Hi and welcome to devnet,

    You can find some more information in this blog post

    Note that the finished signal is just for the example here to communicate a result to the outside. If you only modify the image then there's no need for it.



  • Hi @DennisZhang ,

    I have the same problem as you. First, when I emit the signal from the QVideoFilterRunnable, I get:

    Undefined reference to subFilterClass::finished(...)

    So I commented it out.

    And in QML, if I use onFinished, I get this error when running the app:

    Cannot assign to non-existent property "onFinished"

    Have you been able to solve this problem?



  • There is no built-in finished() signal. You have to declare it yourself, and it can be called anything.

    So in the QAbstractVideoFilter subclass, have

    signals:
    void finished();

    then in the QVideoFilterRunnable, do

    emit m_filter->finished();

    when the computation is done.
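
    To make that concrete, here is a minimal sketch of the pattern (the class names are illustrative). The essential detail is the Q_OBJECT macro: without it, moc never generates the signal's implementation, which is exactly what produces the "undefined reference to ...::finished(...)" linker error. Note also that in Qt 5 signals are public, so the runnable may emit the filter's signal directly.

```cpp
#include <QAbstractVideoFilter>

// Hypothetical filter class; the name MyFilter is illustrative.
class MyFilter : public QAbstractVideoFilter
{
    Q_OBJECT   // required so moc generates the signal's implementation
public:
    QVideoFilterRunnable *createFilterRunnable() override;
signals:
    void finished(QObject *result);
};

class MyFilterRunnable : public QVideoFilterRunnable
{
public:
    explicit MyFilterRunnable(MyFilter *filter) : m_filter(filter) {}

    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                    RunFlags flags) override
    {
        Q_UNUSED(surfaceFormat)
        Q_UNUSED(flags)
        // ... process the frame here ...
        emit m_filter->finished(nullptr);   // notify QML that processing is done
        return *input;
    }

private:
    MyFilter *m_filter;
};

QVideoFilterRunnable *MyFilter::createFilterRunnable()
{
    return new MyFilterRunnable(this);
}
```

    If these classes live in a .cpp file rather than a header, the file also needs a #include of its own .moc file at the end so the moc output gets compiled.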



  • Hi DennisZhang,

    Is your file processed by moc? I guess you forgot the Q_OBJECT macro.



  • @agocs Of course I declared it under signals.



  • I had the same problem. I tried different approaches, and it worked when I declared the subclasses of both QVideoFilterRunnable and QAbstractVideoFilter in the same header and source file (so I have two files, one .h and one .cpp).

    My header file looks like this:

    class QRCode_filter : public QAbstractVideoFilter
    {
        Q_OBJECT
    public:
        QRCode_filter();
        QVideoFilterRunnable *createFilterRunnable();
    signals:
        void finished(QObject *result);
    };
    
    class MyQRCodeFilterRunnable : public QVideoFilterRunnable
    {
    public:
        QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);
        MyQRCodeFilterRunnable(QRCode_filter *filter = nullptr);
    private:
        QRCode_filter *m_filter;
    };
    


  • @kolegs Thanks for sharing this solution.



  • @kolegs Have you tried using OpenCV in

    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);
    

    I have a problem with that:

    QVideoFrame FilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
    {
        if(input->isValid()){
            input->map(QAbstractVideoBuffer::ReadWrite);
            /// QVideoFrame to cv::Mat
            Mat cvImg = Mat(input->height(),input->width(), CV_8UC3,input->bits(),input->bytesPerLine());
    
            /// Apply Filter
            Mat edges;
    
            /// to grayscale
            cvtColor(cvImg, edges, COLOR_YUV2GRAY_420);
    
            /// apply filters
            GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
            Canny(edges, edges, 0, 30, 3);
    
            /// convert to YUV420
            cvtColor(edges,edges, COLOR_GRAY2RGB);
            cvtColor(edges,edges, COLOR_RGB2YUV_I420);
    
            ///  what to do here to send back the modified frame .....   ???
        }
    
         return *input;
    }
    


  • @theshadowx I've never used OpenCV, so I'm not sure how to use that library, but as far as I know about the run() function, you need to modify the frame you get (QVideoFrame *input) and simply return it in the same format.



  • @theshadowx, I don't know how to work with the YUV format, but with QVideoFrame::Format_RGB32 you can do something like this:

    QVideoFrame ThresholdFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
    {
        if (!input->isValid())
            return *input;
    
        input->map(QAbstractVideoBuffer::ReadWrite);
    
        cv::Mat mat = cv::Mat(input->height(),input->width(), CV_8UC4, input->bits());
    
        cv::Mat gray, threshold, color;
        cv::cvtColor(mat, gray, cv::COLOR_RGB2GRAY);
        cv::threshold(gray, threshold, 150.0, 255.0, cv::THRESH_BINARY);
        cv::cvtColor(threshold, color, cv::COLOR_GRAY2RGB);
        this->toVideoFrame(input, color);
    
        input->unmap();
        return *input;
    }
    
    void ThresholdFilterRunnable::toVideoFrame(QVideoFrame *input, cv::Mat &mat)
    {
        int aCols = 4*mat.cols;
        int aRows = mat.rows;
        uchar* inputBits = input->bits();
    
        for (int i=0; i<aRows; i++)
        {
            uchar* p = mat.ptr<uchar>(i);
            for (int j=0; j<aCols; j++)
            {
                int index = i*aCols + j;
                inputBits[index] = p[j];
            }
        }
    }
    

    I can't test it right now, because I'm in the hospital.



  • @medyakovvit Hi, thanks for responding. I hope everything is fine for you.

    If my camera took video in RGB32 it would be easier, as I could create a QImage from the edge matrix and then create a QVideoFrame from that QImage:

     ///cv::Mat to QImage
     QImage ImgDest(edges.data, edges.cols, edges.rows, edges.step, QImage::Format_RGB888);
     QVideoFrame *output = new QVideoFrame(ImgDest);
    

    And then I would return the output.

    The problem (from what I understand) is that my camera generates images in a YUV color format, which means the QVideoSurfaceFormat is YUV, so using the RGB32 format won't work.

    Unfortunately, QImage doesn't support YUV formats. :(
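
    That said, the YUV-to-RGB conversion itself is simple enough to do by hand. A plain-C++ sketch of the per-pixel math, using the common BT.601 full-range coefficients (not tied to Qt or OpenCV):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Convert one YUV pixel to RGB with BT.601 full-range coefficients.
// U and V are centered at 128, so U == V == 128 yields pure gray (R = G = B = Y).
inline void yuvToRgb(uint8_t y, uint8_t u, uint8_t v,
                     uint8_t &r, uint8_t &g, uint8_t &b)
{
    auto clamp = [](double x) {
        return static_cast<uint8_t>(std::min(255.0, std::max(0.0, std::round(x))));
    };
    r = clamp(y + 1.402 * (v - 128));
    g = clamp(y - 0.344 * (u - 128) - 0.714 * (v - 128));
    b = clamp(y + 1.772 * (u - 128));
}
```

    Applied over the whole frame (sampling U and V at quarter resolution for YUV420), this fills an RGB888 buffer that QImage can wrap directly.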



  • @theshadowx So, if I understood the YUV420/YUV format correctly, Y is the luma component and U and V are the chroma components, and within a single frame all the Ys come first, then the Us, then the Vs. So if you need a grayscale image, you can work with the Ys and ignore the Us and Vs. Maybe you can try:

    QVideoFrame YUVFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
    {
        if (!input->isValid())
            return *input;
    
        input->map(QAbstractVideoBuffer::ReadWrite);
    
        this->deleteColorComponentFromYUV(input);
        cv::Mat mat(input->height(),input->width(), CV_8UC1, input->bits()); // create grayscale mat with input's Ys only and avoid additional color conversion and copying
    
        cv::GaussianBlur(mat, mat, cv::Size(7,7), 1.5, 1.5);
        cv::Canny(mat, mat, 0, 30, 3);
    
        input->unmap();
        return *input;
    }
    
    void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
    {
        // Assign 0 to Us and Vs
        int firstU = input->width()*input->height(); // if i correctly understand YUV420
        int lastV = input->width()*input->height() + input->width()*input->height()/4*2;
        uchar* inputBits = input->bits();
    
        for (int i=firstU; i<lastV; i++)
            inputBits[i] = 0;
    }
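
    The offsets in that helper follow from the planar YUV420 (I420) memory layout: a full-resolution Y plane, then quarter-resolution U and V planes (each subsampled 2x2). The arithmetic can be sketched in plain C++, no Qt required:

```cpp
#include <cstddef>

// Plane offsets for a planar YUV420 (I420) frame:
// [ Y: w*h bytes ][ U: w*h/4 bytes ][ V: w*h/4 bytes ]
struct Yuv420Layout {
    std::size_t ySize;      // luma plane size: width * height
    std::size_t uOffset;    // first U byte (right after the Y plane)
    std::size_t vOffset;    // first V byte (right after the U plane)
    std::size_t totalSize;  // whole frame: 1.5 bytes per pixel
};

Yuv420Layout yuv420Layout(std::size_t width, std::size_t height)
{
    Yuv420Layout l;
    l.ySize     = width * height;
    l.uOffset   = l.ySize;
    l.vOffset   = l.ySize + l.ySize / 4;
    l.totalSize = l.ySize + l.ySize / 2;
    return l;
}
```

    For a 640x480 frame this gives a 307200-byte Y plane, with U starting at offset 307200 and V at 384000, 460800 bytes in total, which matches the firstU/lastV bounds used in the loop above.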
    


  • @medyakovvit

    Yes, that's right. One problem with your code: I get a green-tinted image instead of grayscale. To convert to grayscale, the chroma values should be 127 instead of 0 (http://stackoverflow.com/a/20609599/2775917):

    void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
    {
        // Assign 127 (neutral chroma) to Us and Vs
        int firstU = input->width()*input->height(); // if i correctly understand YUV420
        int lastV = input->width()*input->height() + input->width()*input->height()/4*2;
        uchar* inputBits = input->bits();
    
        for (int i=firstU; i<lastV; i++)
            inputBits[i] = 127;
    }
    

    Thanks a lot for your help



  • @theshadowx I'm glad to help



  • Hello!

    Is it possible to use QVideoFilterRunnable without QML?

    As an example, I would like to embed it into the Camera example ( http://doc.qt.io/qt-5/qtmultimediawidgets-camera-example.html ) to be able to change the data format.

    Thank you.



  • @cs_a994 I don't think so. But you can try to subclass QAbstractVideoSurface.
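
    For reference, a minimal sketch of that approach (the class name FrameGrabber is illustrative): subclass QAbstractVideoSurface, report the pixel formats you accept, and receive every frame in present(). The surface is attached with QCamera::setViewfinder().

```cpp
#include <QAbstractVideoSurface>
#include <QVideoFrame>

// Hypothetical frame sink; attach with camera->setViewfinder(&grabber).
class FrameGrabber : public QAbstractVideoSurface
{
    Q_OBJECT
public:
    // Advertise which pixel formats this surface accepts from the backend.
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        Q_UNUSED(type)
        return QList<QVideoFrame::PixelFormat>()
                << QVideoFrame::Format_RGB32
                << QVideoFrame::Format_YUV420P;
    }

    // Called by the backend for every captured frame.
    bool present(const QVideoFrame &frame) override
    {
        emit frameAvailable(frame);   // hand the frame to whoever processes it
        return true;
    }

signals:
    void frameAvailable(const QVideoFrame &frame);
};
```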



  • @medyakovvit

    Thank you. I know about that approach, but using filters seems neater to me.

    I tried to use filters with QML (in the Camera example) and haven't gotten it working yet.



  • @cs_a994
    OK. If you have any questions about the QML camera and filters, I will try to help you.



  • @cs_a994

    I have put a QtQuick app on GitHub that uses QAbstractVideoFilter with OpenCV.

    I hope it helps you.

    Cheers



  • @theshadowx

    Thank you very much. I'll study it soon. I hope it will help.

