construct QVideoFrame with Format_YUV420P
-
Hi,
I was wondering myself whether I should have to do that, but here are the details.
I have implemented a QAbstractVideoSurface to capture QVideoFrames. The video has the YUV420P format, so the image-format lookup (QVideoFrame::imageFormatFromPixelFormat()) obviously returns QImage::Format_Invalid, because YUV420P is not supported by QImage.
```cpp
bool VideoFrameGrabber::start(const QVideoSurfaceFormat &format)
{
    const QImage::Format imageFormat =
            QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
    const QSize size = format.frameSize();

    if (size.isEmpty())
        return false;

    // Deliberately start the surface even when the pixel format has no
    // QImage equivalent (e.g. YUV420P), so that present() still gets frames.
    this->imageFormat = imageFormat; // may be QImage::Format_Invalid
    imageSize = size;
    sourceRect = format.viewport();
    QAbstractVideoSurface::start(format);
    widget->updateGeometry();
    updateVideoRect();
    return true;
}
```
I am then converting it to ARGB32 format and manipulating it a bit.
But when I check the pixel format of cloneFrame, it is still YUV420P, while I now have an ARGB32 image. The surface shows some weird shades instead of the edited image, whereas the dumped image is what I expected. A reduced version of the code is below:

```cpp
bool VideoFrameGrabber::present(const QVideoFrame &frame)
{
    if (frame.isValid()) {
        QVideoFrame cloneFrame(frame);
        cloneFrame.map(QAbstractVideoBuffer::ReadOnly);

        globalTime = seekOffset * 1000 + cloneFrame.endTime();
        emit setSlider(globalTime);

        if (imageFormat != QImage::Format_Invalid) {
            qDebug() << " Valid Format Case ";
            imageEdited = QImage(cloneFrame.bits(), cloneFrame.width(),
                                 cloneFrame.height(), imageFormat);
        } else {
            qDebug() << " Invalid Format Case ";
            imageEdited = qt_imageFromVideoFrame(cloneFrame);
        }

        QPainter p(&imageEdited); // begin painting on the frame image
                                  // (omitted from the original reduced snippet)

        if (rectMetaData.count() > 0) {
            // Bounds check first to avoid reading past the end of the list.
            while (rectSequence < rectCount - 1
                   && rectMetaData.at(rectSequence).frameNumber == frameSequence) {
                if (rectMetaData.at(rectSequence).fillStatus)
                    p.setBrush(QBrush(QColor(rectMetaData.at(rectSequence).color)));
                else
                    p.setPen(rectMetaData.at(rectSequence).color);
                p.drawRect(rectMetaData.at(rectSequence).x1,
                           rectMetaData.at(rectSequence).y1,
                           rectMetaData.at(rectSequence).x2,
                           rectMetaData.at(rectSequence).y2);
                rectSequence++;
            }
        }

        if (circleMetaData.count() > 0) {
            while (circleMetaData.at(circleSequence).frameNumber < frameSequence)
                circleSequence++;
            while (circleSequence < circleCount - 1
                   && circleMetaData.at(circleSequence).frameNumber == frameSequence) {
                if (circleMetaData.at(circleSequence).fillStatus)
                    p.setBrush(QBrush(QColor(circleMetaData.at(circleSequence).color)));
                else
                    p.setPen(circleMetaData.at(circleSequence).color);
                p.drawEllipse(circleMetaData.at(circleSequence).centerX,
                              circleMetaData.at(circleSequence).centerY,
                              circleMetaData.at(circleSequence).radiusX,
                              circleMetaData.at(circleSequence).radiusY);
                circleSequence++;
            }
        }
        p.end();
        cloneFrame.unmap();
    }

    imageEdited.save("/home/gaurav/ImageTest/image_"
                     + QString::number(frameSequence) + ".jpg");

    if (surfaceFormat().pixelFormat() != frame.pixelFormat()
            || surfaceFormat().frameSize() != frame.size()) {
        setError(IncorrectFormatError);
        stop();
        return false;
    } else {
        currentFrame = QVideoFrame(imageEdited);
        widget->repaint(targetRect);

        if (saveStatus) {
            fflush(stdout);
            av_image_fill_arrays(tempFrame->data, tempFrame->linesize,
                                 imageEdited.bits(), AV_PIX_FMT_BGRA,
                                 imageEdited.width(), imageEdited.height(), 1);
            sws_scale(ctx, tempFrame->data, tempFrame->linesize, 0,
                      imageEdited.height(), avFrame->data, avFrame->linesize);
            avFrame->pts = frameSequence;
            encode();
        }
        frameSequence++;
        return true;
    }
}
```
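Side note for readers: the snippet above uses ctx, tempFrame and avFrame without showing their setup. A minimal sketch of how they might be initialized, assuming the encoder consumes YUV420P; initScaler() is a hypothetical helper, since the real setup code is not shown in the thread:

```cpp
extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
}

// Hypothetical helper: set up the members used by present() above.
bool VideoFrameGrabber::initScaler(int width, int height)
{
    // QImage::Format_ARGB32 stores BGRA bytes on little-endian machines,
    // hence AV_PIX_FMT_BGRA as the source format.
    ctx = sws_getContext(width, height, AV_PIX_FMT_BGRA,
                         width, height, AV_PIX_FMT_YUV420P,
                         SWS_BILINEAR, nullptr, nullptr, nullptr);

    // tempFrame only wraps the QImage bits; av_image_fill_arrays() in
    // present() points its planes at imageEdited.bits(), so no buffer here.
    tempFrame = av_frame_alloc();

    // avFrame owns the YUV420P planes handed to the encoder.
    avFrame = av_frame_alloc();
    if (!ctx || !tempFrame || !avFrame)
        return false;
    avFrame->format = AV_PIX_FMT_YUV420P;
    avFrame->width  = width;
    avFrame->height = height;
    return av_frame_get_buffer(avFrame, 32) >= 0;
}
```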
I thought this was the way to do it, but I am not sure it is the correct process. Maybe something else is wrong.
-
@SGaist can you point me to a solution for this case?
To summarize: I implement the start() method, which then calls QAbstractVideoSurface::start() with the pixel format of the incoming QVideoFrames, as required.
I can only start it with the correct input format; otherwise the start call succeeds but the supplied frames are not read properly. By "read properly" I mean that I tried to read the frames and then both display and write them: writing threw an exception, and displaying just left the screen blank.
If I start it with the correct format, with a little tweak where I call QAbstractVideoSurface::start() even when imageFormatFromPixelFormat() returns QImage::Format_Invalid (i.e. I bypass the check that would normally refuse unsupported image formats), I do get a valid QVideoFrame in present(). To get my task done I need to edit this frame, and I can read the YUV420P image through a non-standard library and successfully edit/dump it (to test). But I cannot convert it back to a QVideoFrame, because no constructor except the one taking a QImage would work, and at that point my image in the converted Format_ARGB32 produces a frame with the corresponding RGB pixel format, not YUV420P. The surface was initialized for YUV420P and does not show anything.
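For reference, a minimal sketch (assuming Qt 5) of the one QVideoFrame constructor that does allocate a frame with an arbitrary pixel format, QVideoFrame(int bytes, const QSize &size, int bytesPerLine, PixelFormat). Whether a frame built this way is then accepted by a given surface/backend is a separate question; yuvData is a hypothetical buffer of planar YUV420P pixels:

```cpp
#include <QVideoFrame>
#include <cstring>

// Sketch: build a YUV420P QVideoFrame without going through QImage,
// using the allocating constructor. 'yuvData' holds Y, then U, then V planes.
QVideoFrame makeYuv420Frame(const uchar *yuvData, const QSize &size)
{
    const int bytes = size.width() * size.height() * 3 / 2; // Y + U/4 + V/4
    QVideoFrame frame(bytes, size, size.width(), QVideoFrame::Format_YUV420P);
    if (frame.map(QAbstractVideoBuffer::WriteOnly)) {
        memcpy(frame.bits(), yuvData, bytes); // copy all three planes
        frame.unmap();
    }
    return frame;
}
```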
I am stuck and any help is appreciated.
-
Something is not clear: since you control the frame grabber, why not tell it that you'll be providing ARGB32 images directly?
The non-standard library you are using is an "implementation detail"; the rest of the pipeline doesn't need to know about it.
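One way to read that advice, as a sketch: have the surface advertise only RGB formats in supportedPixelFormats(), so the backend negotiates ARGB32 before present() is ever called. Class name as used earlier in the thread; whether the backend actually converts for you is platform-dependent:

```cpp
// Advertise RGB-only support so the pipeline hands us ARGB32 frames
// (backend willing) instead of YUV420P.
QList<QVideoFrame::PixelFormat> VideoFrameGrabber::supportedPixelFormats(
        QAbstractVideoBuffer::HandleType type) const
{
    if (type != QAbstractVideoBuffer::NoHandle)
        return QList<QVideoFrame::PixelFormat>();
    return QList<QVideoFrame::PixelFormat>()
            << QVideoFrame::Format_ARGB32
            << QVideoFrame::Format_RGB32;
}
```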
-
@SGaist What do you mean by providing images directly?
What I understand from you is that I should initialize my surface with the ARGB format. I tried that, but the problem is as follows (I tested both options, ARGB and YUV).
If I set the ARGB format on the surface in "bool start(const QVideoSurfaceFormat &format);", I get wrong frame data, since the frames actually have a YUV pixel format, and I can see that the image conversion fails. I separately constructed a QVideoSurfaceFormat with the correct size and Format_YUV420P and passed it as the argument.
Otherwise, if I set the correct format, which is YUV, by bypassing the image-format check before initializing the surface, I get a QVideoFrame with Format_YUV420P in "bool present(const QVideoFrame &frame);". Reading a QImage from this frame works with the non-standard library. But since the surface is initialized with YUV, I then have to reconstruct a YUV-format QVideoFrame from the RGB image.
And no QVideoFrame constructor does that: the only feasible argument to supply is a QImage, which does not support YUV, so the resulting frame is not YUV and hence incompatible with how the surface was initialized. Am I understanding it wrong?
-
Oh wait, I was not looking at the right place in the pipeline. Can you tell me where the images are coming from?
-
That part is clear, but you wrote that you are using a non-standard library to get your images. How are you doing that?
-
@SGaist Hi, sorry for the delayed reply. I ran into another problem with the same code. By the way, I am currently porting some code from Windows to Linux by copying it over, which is where these issues come from.
I used the wrong term when I called it a non-standard library. I was referring to

```cpp
imageEdited = qt_imageFromVideoFrame(cloneFrame);
```

as the non-standard library function. Also, in the present() function, when I call widget->repaint() I do not get a paintEvent in the class derived from this one. The implementation follows the videowidget example from Qt (https://doc.qt.io/archives/4.6/multimedia-videowidget.html). Can you help me with that? I am currently checking the pipeline to make sure that, aside from the issue in this topic's title, at least the source frame gets painted.
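For readers: qt_imageFromVideoFrame() is private Qt API. A sketch of what is typically needed to use it (module names as in Qt 5; private headers can change between releases):

```cpp
// In the .pro file (private API, no compatibility guarantees):
//   QT += multimedia multimedia-private
#include <private/qvideoframe_p.h> // declares qt_imageFromVideoFrame()

// ...
QImage image = qt_imageFromVideoFrame(cloneFrame);
```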
-
Where does the media come from ? A video file ? A camera ?
-
@SGaist It's a video file being read by a QMediaPlayer and set on a QAbstractVideoSurface, which then captures the QVideoFrames and sends them to present(). present() extracts the image from the frame.
Also, the QAbstractVideoSurface is cast into a QWidget.
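A sketch of the wiring just described (names are placeholders), assuming Qt 5's QMediaPlayer::setVideoOutput(QAbstractVideoSurface*):

```cpp
QMediaPlayer *player = new QMediaPlayer;
VideoFrameGrabber *surface = new VideoFrameGrabber(widget); // the QAbstractVideoSurface
player->setVideoOutput(surface);   // decoded frames now arrive in present()
player->setMedia(QUrl::fromLocalFile("/path/to/video.mp4")); // placeholder path
player->play();
```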
Since the conversion from frame to image and vice versa is what I am struggling with, I tried to test the pipeline by drawing the same frame that I receive in present(), without any modification. For this I saw that people call repaint(QRect) on the widget that holds the QAbstractVideoSurface, but I do not get the paintEvent in the class derived from it.
The last statement might be confusing, but the analogy is with the videowidget example: I derived this class into another one which is a widget.
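For comparison, the pattern from the videowidget example, as a sketch: the widget's paintEvent() asks the surface to paint the current frame (surface->paint() is the example's own helper, not Qt API). If paintEvent() never fires, it is worth checking that the widget is actually shown and that targetRect is non-empty:

```cpp
void VideoWidget::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    if (surface->isActive())
        surface->paint(&painter);   // draws currentFrame into targetRect
    else
        painter.fillRect(event->rect(), palette().window());
}
```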
-
I see that you are also using FFmpeg in your code. You could then leverage it to do the format conversions from YUV to RGB and back to YUV.
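A sketch of that suggestion, tying it back to the allocating QVideoFrame constructor above: use swscale in the other direction (BGRA back to YUV420P) and write the planes straight into a mapped frame. rgbToYuvCtx is a hypothetical second context created like ctx but with the formats swapped; the Y/U/V plane layout inside the buffer is chosen by this code and must match what the consumer expects:

```cpp
QVideoFrame yuvFrameFromImage(SwsContext *rgbToYuvCtx, const QImage &image)
{
    const int w = image.width(), h = image.height();
    QVideoFrame frame(w * h * 3 / 2, image.size(), w,
                      QVideoFrame::Format_YUV420P);
    if (!frame.map(QAbstractVideoBuffer::WriteOnly))
        return frame;

    const uint8_t *srcData[4] = { image.constBits(), nullptr, nullptr, nullptr };
    const int srcStride[4]    = { image.bytesPerLine(), 0, 0, 0 };

    // Lay the planes out contiguously: full-size Y, then quarter-size U and V.
    uint8_t *dstData[4] = { frame.bits(),
                            frame.bits() + w * h,
                            frame.bits() + w * h + (w / 2) * (h / 2),
                            nullptr };
    int dstStride[4] = { w, w / 2, w / 2, 0 };

    sws_scale(rgbToYuvCtx, srcData, srcStride, 0, h, dstData, dstStride);
    frame.unmap();
    return frame;
}
```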