How to speed up video refreshing on QT UI?
For a couple of days I have been struggling to show two videos in QLabels on my Qt GUI. The data format is RGB888, well prepared by other modules, and I display it the usual way, e.g. mylabel->setPixmap(QPixmap::fromImage(image, Qt::AutoColor));. The problem is that CPU consumption is too high. In my case the CPU is an S5PV210 and the video resolution is 640x480. Whenever I start drawing, CPU usage increases by more than 50%, which leaves almost no resources for other jobs. Does anyone know how to do performance tuning in this respect?
I also tried DirectFB. The result was about a 10% reduction in CPU usage, which is definitely not enough. I heard that QDirectPainter manipulates the framebuffer directly. If so, how can I make use of it?
Thanks a lot in advance! I would highly appreciate anyone's help on this problem.
What about using the QtMultimedia module?
I am not sure whether that is available in Qt 4.8.5, which I am using. From reading some articles, I understand this module targets rendering of multimedia files. Does it fit my case? Most importantly, does this module use some low-level access to the framebuffer (or something similar) so that presentation performance could be improved?
If it's not from a file, where does your video come from?
The video is a live stream over Ethernet, originating from another device. Here is the more detailed story: I am developing a video + audio communication device, which sends and receives H.264 video and AAC audio to/from a peer. My problem with video presentation performance occurs after receiving and decoding the H.264 video data from the peer. Right now, if I disable this part, CPU usage is only 20% or less; in other words, the performance before displaying has been tuned well enough. But when I add this part, CPU usage jumps to 80%, which is not acceptable, since I have not yet added audio sending, receiving, encoding, decoding, etc.
I am thinking about the underlying driver, for the LCD or HDMI; maybe, just maybe, the breakthrough could be there.
What are you currently using to decode that stream ?
@SGaist The S5PV210 itself, of course. It has MFC (Multi Format Codec) support.
I meant software wise, what are you using to decode the images ?
The process is as follows:
(H.264 stream data) --> (S5PV210's built-in decoder) --> (NV12 data) --> (color conversion) --> (RGB888 data) --> QPixmap --> QLabel
First, the stream data is decoded by the hardware decoder built into the S5PV210, driven through the software APIs demonstrated in the Samsung examples (they are public on the net). The decoder outputs NV12 data.
Second, I convert that NV12 data to RGB888, which is supported by the Qt presentation UI.
Finally, I put the RGB data into a QPixmap and let the QLabel refresh its content by calling mylabel->setPixmap(QPixmap::fromImage(image, Qt::AutoColor));.
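For reference, the NV12-to-RGB888 step in the pipeline above can be sketched as a plain CPU loop (a minimal sketch using the common BT.601 integer coefficients; the function and buffer names are illustrative, not from the poster's actual code):

```cpp
#include <algorithm>
#include <cstdint>

// Convert one NV12 frame (Y plane followed by an interleaved UV plane)
// to packed RGB888. BT.601 integer approximation; names are illustrative.
void nv12ToRgb888(const uint8_t* nv12, uint8_t* rgb, int width, int height)
{
    const uint8_t* yPlane  = nv12;
    const uint8_t* uvPlane = nv12 + width * height;

    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            int y = yPlane[row * width + col];
            // Each 2x2 block of luma pixels shares one U/V pair.
            const uint8_t* uv = uvPlane + (row / 2) * width + (col / 2) * 2;
            int u = uv[0] - 128;
            int v = uv[1] - 128;

            int c = y - 16;
            int r = (298 * c + 409 * v + 128) >> 8;
            int g = (298 * c - 100 * u - 208 * v + 128) >> 8;
            int b = (298 * c + 516 * u + 128) >> 8;

            uint8_t* out = rgb + (row * width + col) * 3;
            out[0] = (uint8_t)std::min(std::max(r, 0), 255);
            out[1] = (uint8_t)std::min(std::max(g, 0), 255);
            out[2] = (uint8_t)std::min(std::max(b, 0), 255);
        }
    }
}
```

Note that a per-pixel loop like this is itself a significant CPU cost at 640x480; if the platform offers a hardware colour-space converter, offloading this step is usually the first thing to try.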
Then why not build a QtMultimedia backend ?
Maybe you are right, but in my dev environment this module, QtMultimedia, is not available. For now, I have managed to write the RGB data directly to the framebuffer with QDirectPainter, and I can vaguely see the video refreshing. However, the video area is flickering. I guess it is caused by the Qt GUI thread overwriting my video. The following is a code snippet:
void MainUiWdt::paintVideo()
{
    QDirectPainter painter(this, QDirectPainter::Reserved);
    unsigned char* pBuf = (unsigned char*)painter.frameBuffer();
    unsigned char* pSrc = image.bits();
    int dest = 0, src = 0;
    for ( ... ) { // loop over every pixel of the image
        pBuf[dest++] = pSrc[src + 2]; // B
        pBuf[dest++] = pSrc[src + 1]; // G
        pBuf[dest++] = pSrc[src];     // R
        pBuf[dest++] = 0;             // padding byte of the 32-bit pixel
        src += 3;
    }
}
How can I reserve the target region completely, so that the Qt GUI thread ignores that part?
Please note, I've already used Reserved mode to draw that part of the framebuffer.
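For what it's worth, the inner copy in the snippet above can be written as a complete, stride-aware function (a minimal CPU sketch assuming an XRGB/BGRX-style 32-bit framebuffer; on the real device the bytes-per-scanline value would come from QDirectPainter::linestep(), and the function name here is illustrative):

```cpp
#include <cstdint>

// Copy a packed RGB888 image into a 32-bit BGRX framebuffer region.
// fbLinestep is the framebuffer's bytes-per-scanline, which is often
// wider than width * 4 due to padding; ignoring it shears the image.
void blitRgb888ToBgrx(const uint8_t* src, int width, int height,
                      uint8_t* fb, int fbLinestep)
{
    for (int row = 0; row < height; ++row) {
        const uint8_t* s = src + row * width * 3;
        uint8_t* d = fb + row * fbLinestep;
        for (int col = 0; col < width; ++col) {
            d[0] = s[2]; // B
            d[1] = s[1]; // G
            d[2] = s[0]; // R
            d[3] = 0;    // padding
            s += 3;
            d += 4;
        }
    }
}
```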
In Qt 4.x, QtMultimedia is available as a separate module that you can compile yourself.
- Why QLabel? Isn't it better to do the painting yourself, i.e. in QWidget::paintEvent()?
- Hardware acceleration will speed this up. For maximum compatibility I would use OpenGL with GLSL and have an OpenGL shader (version 1.0 is just fine for displaying) do the hard work. I used this approach for image correction on 5k-sized images and it ran in real time on a vintage laptop with integrated graphics (an old model from 2006 or 2007).
You are probably right; OpenGL could be another feasible solution. For now, I solved the flickering problem by removing the last call, painter.endPainting() (which is not in my code snippet above), i.e. keeping that area reserved all the time and writing the RGB data directly to the framebuffer. However, I ran into a new problem: the picture resolution is 640x480, but the target area is 600x350, which means I need to scale the original image to the new dimensions. Since you just mentioned OpenGL, may I ask: could OpenGL help me scale the image in an accelerated way, so that the CPU is not choked?