How to use QAbstractVideoFilter to process frames from camera
-
Hi @DennisZhang ,
I have the same problem as you. First, when I emit the signal from QVideoFilterRunnable, I get:
Undefined reference to subFilterClass::finished(...)
So I commented it out.
And in QML, if I add an onFinished handler, I get this error when running the app:
Cannot assign to non-existent property "onFinished"
Have you been able to solve this problem?
-
There is no built-in finished() signal. You have to declare it yourself, and it can be called anything.
So in the QAbstractVideoFilter subclass, have
signals:
    void finished();
then in the QVideoFilterRunnable, do
emit m_filter->finished();
when the computation is done.
-
I had the same problem and tried different ways; it worked when I declared both subclasses (of QVideoFilterRunnable and of QAbstractVideoFilter) in the same header and source file (so I have two files, one .h and one .cpp).
My header file looks like this:
class QRCode_filter : public QAbstractVideoFilter
{
    Q_OBJECT
public:
    QRCode_filter();
    QVideoFilterRunnable *createFilterRunnable();

signals:
    void finished(QObject *result);
};

class MyQRCodeFilterRunnable : public QVideoFilterRunnable
{
public:
    MyQRCodeFilterRunnable(QRCode_filter *filter = NULL);
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);

private:
    QRCode_filter *m_filter;
};
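The .cpp then mostly just wires the two together. Roughly like this (written from memory, so treat it as a sketch; the include name is whatever you called the header, the actual QR detection is left out, and I just pass a null result here):

#include "qrcode_filter.h"   // or whatever the header above is called

QRCode_filter::QRCode_filter()
{
}

QVideoFilterRunnable *QRCode_filter::createFilterRunnable()
{
    // Hand the filter to the runnable so it can emit finished() later.
    return new MyQRCodeFilterRunnable(this);
}

MyQRCodeFilterRunnable::MyQRCodeFilterRunnable(QRCode_filter *filter)
    : m_filter(filter)
{
}

QVideoFrame MyQRCodeFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags)
{
    Q_UNUSED(surfaceFormat)
    Q_UNUSED(flags)

    // ... do the actual QR detection on *input here ...

    if (m_filter)
        emit m_filter->finished(NULL);   // placeholder, pass your real result object here

    return *input;
}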
-
@kolegs Have you tried using OpenCV in
QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);
I have a problem with it:
QVideoFrame FilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
{
    if (input->isValid()) {
        input->map(QAbstractVideoBuffer::ReadWrite);

        /// QVideoFrame to cv::Mat
        Mat cvImg = Mat(input->height(), input->width(), CV_8UC3, input->bits(), input->bytesPerLine());

        /// Apply filter
        Mat edges;

        /// to grayscale
        cvtColor(cvImg, edges, COLOR_YUV2GRAY_420);

        /// apply filters
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);

        /// convert back to YUV420
        cvtColor(edges, edges, COLOR_GRAY2RGB);
        cvtColor(edges, edges, COLOR_RGB2YUV_I420);

        /// what to do here to send back the modified frame ..... ???
    }
    return *input;
}
-
@theshadowx I've never used OpenCV, so I'm not sure how to use those libraries, but regarding the run() function, as far as I know you need to modify the frame you get (QVideoFrame *input) in place and simply return it in the same format.
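So the skeleton is basically just map, modify in place, unmap, return (untested, and the class name is only an example):

#include <QVideoFilterRunnable>
#include <QVideoFrame>
#include <QAbstractVideoBuffer>

// Hypothetical runnable showing only the map/modify/unmap contract of run().
class MyFilterRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags) override
    {
        Q_UNUSED(surfaceFormat)
        Q_UNUSED(flags)
        if (input->isValid() && input->map(QAbstractVideoBuffer::ReadWrite)) {
            // ... modify the pixels behind input->bits() in place,
            //     keeping the frame's original pixel format ...
            input->unmap();
        }
        return *input;
    }
};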
-
@theshadowx, I don't know how to work with the YUV format, but with QVideoFrame::Format_RGB32 you can do something like this:
QVideoFrame ThresholdFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
{
    if (!input->isValid())
        return *input;

    input->map(QAbstractVideoBuffer::ReadWrite);

    cv::Mat mat = cv::Mat(input->height(), input->width(), CV_8UC4, input->bits());
    cv::Mat gray, threshold, color;
    cv::cvtColor(mat, gray, COLOR_RGB2GRAY);
    cv::threshold(gray, threshold, 150.0, 255.0, THRESH_BINARY);
    cv::cvtColor(threshold, color, COLOR_GRAY2RGBA);   // back to 4 channels to match the CV_8UC4 frame (GRAY2RGB gives only 3)

    this->toVideoFrame(input, color);

    input->unmap();
    return *input;
}

void ThresholdFilterRunnable::toVideoFrame(QVideoFrame *input, cv::Mat &mat)
{
    int aCols = 4*mat.cols;   // 4 bytes per pixel (CV_8UC4)
    int aRows = mat.rows;
    uchar* inputBits = input->bits();
    for (int i=0; i<aRows; i++) {
        uchar* p = mat.ptr<uchar>(i);
        for (int j=0; j<aCols; j++) {
            int index = i*aCols + j;
            inputBits[index] = p[j];
        }
    }
}
I can't test it right now because I'm in the hospital.
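And if bytesPerLine() happens to be bigger than 4*width() (padded rows), it is probably safer to copy row by row with memcpy (needs <cstring>; untested):

void ThresholdFilterRunnable::toVideoFrame(QVideoFrame *input, cv::Mat &mat)
{
    // Copy the processed CV_8UC4 pixels back row by row, using the frame's own
    // stride in case its rows are padded.
    const int bytesPerRow = 4 * mat.cols;
    uchar *dst = input->bits();
    for (int i = 0; i < mat.rows; ++i)
        memcpy(dst + i * input->bytesPerLine(), mat.ptr<uchar>(i), bytesPerRow);
}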
-
@medyakovvit Hi, thanks for responding. I hope everything goes well for you.
If my camera captured video in RGB32 it would be easier, as I could create a QImage from the edges matrix and then a QVideoFrame from that QImage:
/// cv::Mat to QImage
QImage ImgDest(edges.data, edges.cols, edges.rows, edges.step, QImage::Format_RGB888);
QVideoFrame *output = new QVideoFrame(ImgDest);
And then I would return the output.
The problem (from what I understood) is that my camera generates frames in a YUV color format, which means the QVideoSurfaceFormat is YUV, so using the RGB32 format won't work.
Unfortunately QImage doesn't support YUV formats. :(
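One thing I might try (completely untested) is to let OpenCV itself do the YUV<->BGR conversion instead of going through QImage. This assumes the frame really is I420 with contiguous planes and no row padding (bytesPerLine() == width()); the helper name is just for illustration:

#include <QVideoFrame>
#include <QAbstractVideoBuffer>
#include <opencv2/imgproc.hpp>
#include <cstring>

// Hypothetical helper that could be called from run() after checking input->isValid().
static void cannyOnI420(QVideoFrame *input)
{
    if (!input->map(QAbstractVideoBuffer::ReadWrite))
        return;

    const int w = input->width();
    const int h = input->height();

    // The whole Y+U+V buffer viewed as one single-channel Mat of h*3/2 rows.
    cv::Mat yuv(h + h / 2, w, CV_8UC1, input->bits());

    cv::Mat bgr, edges;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);   // unpack to BGR for OpenCV
    cv::cvtColor(bgr, edges, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(edges, edges, cv::Size(7, 7), 1.5, 1.5);
    cv::Canny(edges, edges, 0, 30, 3);
    cv::cvtColor(edges, bgr, cv::COLOR_GRAY2BGR);

    // Repack to I420 and copy the result back into the mapped frame.
    cv::Mat repacked;
    cv::cvtColor(bgr, repacked, cv::COLOR_BGR2YUV_I420);
    std::memcpy(input->bits(), repacked.data, repacked.total());

    input->unmap();
}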
-
@theshadowx So, if I understood YUV420 and the YUV format correctly, Y is the luma component and U and V are the color components, and in a single frame the Ys come first, then the Us and Vs. So if you need a grayscale image, you can work with the Ys and ignore the Us and Vs. Maybe you can try:
QVideoFrame YUVFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
{
    if (!input->isValid())
        return *input;

    input->map(QAbstractVideoBuffer::ReadWrite);

    this->deleteColorComponentFromYUV(input);

    // create a grayscale mat from the input's Ys only and avoid additional color conversion and copying
    cv::Mat mat(input->height(), input->width(), CV_8UC1, input->bits());
    cv::GaussianBlur(mat, mat, cv::Size(7,7), 1.5, 1.5);
    cv::Canny(mat, mat, 0, 30, 3);

    input->unmap();
    return *input;
}

void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
{
    // Assign 0 to the Us and Vs
    int firstU = input->width()*input->height();   // if I correctly understand YUV420
    int lastV = input->width()*input->height() + input->width()*input->height()/4*2;
    uchar* inputBits = input->bits();
    for (int i=firstU; i<lastV; i++)
        inputBits[i] = 0;
}
-
Yes, that's right. There is one problem in the code, though: with your code I get a "greenscale" image. To convert to grayscale, the chroma should be set to 127 instead of 0 (http://stackoverflow.com/a/20609599/2775917):
void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
{
    // Assign 127 (near-neutral chroma) to the Us and Vs
    int firstU = input->width()*input->height();   // if I correctly understand YUV420
    int lastV = input->width()*input->height() + input->width()*input->height()/4*2;
    uchar* inputBits = input->bits();
    for (int i=firstU; i<lastV; i++)
        inputBits[i] = 127;
}
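By the way, since the U and V planes sit right after the Y plane, the loop can probably be replaced by a single memset (assuming the planes are contiguous with no row padding; needs <cstring>):

void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
{
    // The U and V planes follow the Y plane; together they are width*height/2 bytes in YUV420.
    const int ySize = input->width() * input->height();
    memset(input->bits() + ySize, 127, ySize / 2);   // 127 = near-neutral chroma
}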
Thanks a lot for your help
-
@theshadowx I'm glad to help
-
Hello!
Is it possible to use QVideoFilterRunnable without QML?
For example, I would like to embed it into the Camera example ( http://doc.qt.io/qt-5/qtmultimediawidgets-camera-example.html ) to be able to change the data format.
Thank you.
-
Thank you. I know about that way, but using filters seems neater to me.
I tried to use filters with QML (in the Camera example) and have not yet gotten a result.
-
Thank you very much. I'll study it soon. I hope it will help.
-
I get the error below when I try the Canny code from GitHub (https://github.com/theshadowx/Qt_OpenCV).
I don't know much about QML, but I know Qt at an intermediate level.
Thank you
01:45:55: Starting /home/mike/Downloads/Qt_OpenCV-master/QtQuick/build-CannyQml-Desktop_Qt_5_12_0_GCC_64bit-Debug/CannyQml...
QML debugging is enabled. Only use this in a safe environment.
(CannyQml:24471): GStreamer-CRITICAL **: 01:45:57.866: write map requested on non-writable buffer
01:45:57: The program has unexpectedly finished.
01:45:57: The process was ended forcefully.
01:45:58: /home/mike/Downloads/Qt_OpenCV-master/QtQuick/build-CannyQml-Desktop_Qt_5_12_0_GCC_64bit-Debug/CannyQml crashed.