Efficient raw video display
-
Hello all,
I am receiving raw frames from a camera and need an efficient way to display them. The incoming data rate can vary from a single frame over several hours to a 50 fps stream, so I need to limit allocation/re-allocation of resources and display the frames efficiently.
The receiving pipeline is as follows:
- Frame data is received in packets and stored in a pre-allocated array with one byte per sample (the data can be monochrome or RGB, i.e., 1 or 3 bytes per pixel).
- Some operations are performed on the data while it is copied to another pre-allocated array.
- The second array can be used to display the data.
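For reference, the pipeline above can be sketched with two pre-allocated buffers. The names and the per-sample operation here are illustrative placeholders, not the actual processing:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Two pre-allocated buffers: one filled by the packet receiver, one used
// for display. Both are sized once, up front, for the expected frame.
struct FramePipeline {
    std::vector<uint8_t> receive;  // written to as packets arrive
    std::vector<uint8_t> display;  // read by the display widget

    FramePipeline(std::size_t w, std::size_t h, std::size_t channels)
        : receive(w * h * channels), display(w * h * channels) {}

    // Copy receive -> display, applying a per-sample operation on the way.
    // The fixed gain here is a stand-in for whatever the real step does.
    void process()
    {
        for (std::size_t i = 0; i < receive.size(); ++i) {
            unsigned v = receive[i] * 2u;          // illustrative operation
            display[i] = v > 255u ? 255u : uint8_t(v);
        }
    }
};
```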
So far, for single frames, I have been using a QLabel and setting its pixmap to display the data with the following lines:
```cpp
QPixmap pixmap; // displayed in the QLabel ui->frame
// Note: QImage takes (data, width, height, format), so width = frame.cols and
// height = frame.rows; this overload also assumes 32-bit-aligned scanlines.
pixmap = QPixmap::fromImage(QImage(display_data, frame.cols, frame.rows, QImage::Format_RGB888));
ui->frame->setPixmap(pixmap);
```
When going for 50 fps, this seems like a very inefficient approach, so I have been experimenting with QOpenGLWidget and overriding paintGL(). Although I have not yet been successful with that implementation, I have also been investigating the Qt Multimedia module. I find myself wondering whether the QOpenGLWidget approach is the best way to go about this problem, or whether there is a combination of QCamera* classes that I have overlooked which might be a better fit.
So the question is: for a 50 fps video display, where each incoming frame is placed in the same pre-allocated unsigned char array, which would be the best approach?
- Using QLabel and QPixmap as described above
- Using QLabel and overriding the paintEvent(), making use of painter.drawImage()
- Using QOpenGLWidget and overriding paintGL(), creating a QImage from the data array
- Using a combination of Qt Multimedia modules, with a combination of QVideoSink, QMediaPlayer, QVideoFrame, etc.
- Any other combination that I'm failing to see?
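For reference, here is roughly what I imagine option 2 would look like. This is only a sketch; the class and its names are illustrative, and it assumes the display buffer outlives the widget:

```cpp
#include <QWidget>
#include <QPainter>
#include <QImage>

// Hypothetical widget for option 2: wrap the pre-allocated display buffer
// in a QImage (no copy) and let QPainter draw it on each repaint.
class FrameWidget : public QWidget
{
public:
    // 'data' must stay valid for the lifetime of the widget;
    // 'w'/'h' are the frame dimensions in pixels (RGB888 assumed).
    FrameWidget(const uchar *data, int w, int h, QWidget *parent = nullptr)
        : QWidget(parent),
          // bytesPerLine is passed explicitly because 3*w need not be
          // 32-bit aligned, which the shorter QImage overload assumes.
          m_image(data, w, h, 3 * w, QImage::Format_RGB888) {}

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        painter.drawImage(rect(), m_image); // scales to the widget size
    }

private:
    QImage m_image; // shallow wrapper around the external buffer
};

// After each new frame is written into the buffer: widget->update();
```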
Thank you very much for the help!
-
Hi,
Use a QOpenGLBuffer, so you can either write your image data directly there or memcpy it.
The QVideoSink approach could follow a similar pattern.
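A minimal sketch of that pattern: use the QOpenGLBuffer as a pixel-unpack buffer (PBO), so the texture upload reads straight from it with no extra CPU-side copy. Names such as m_pbo and m_tex are illustrative, and the function assumes the widget's GL context is current (e.g. after makeCurrent()):

```cpp
// Hypothetical member of a QOpenGLWidget subclass using QOpenGLFunctions.
void Viewer::uploadFrame(const uchar *frame, int w, int h)
{
    if (!m_pbo.isCreated()) {
        m_pbo = QOpenGLBuffer(QOpenGLBuffer::PixelUnpackBuffer);
        m_pbo.create();
        m_pbo.setUsagePattern(QOpenGLBuffer::StreamDraw);
        m_pbo.bind();
        m_pbo.allocate(w * h * 3);           // RGB888, allocated once
    } else {
        m_pbo.bind();
    }

    // Either memcpy the finished frame, or write/process directly into dst.
    void *dst = m_pbo.map(QOpenGLBuffer::WriteOnly);
    memcpy(dst, frame, size_t(w) * h * 3);
    m_pbo.unmap();

    // With the PBO bound, the 'pixels' argument is an offset into the
    // buffer, so nullptr means "start of the buffer": the driver pulls the
    // data from the PBO instead of from client memory.
    glBindTexture(GL_TEXTURE_2D, m_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    m_pbo.release();
}
```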
-
Hello, thank you very much for your answer!
Just to confirm that I understood the answer correctly, the way to go about this is:
- Create a QOpenGLWidget.
- Get the widget's QOpenGLContext using its context() method.
- Create a QOpenGLBuffer to copy the frame data into.
- Bind the QOpenGLBuffer to the QOpenGLContext using bind().
- Each time a frame is received, copy it to the buffer and call update() on the QOpenGLWidget.
Am I missing any other steps? Do I still need to mess around with paintGL(), i.e., override it and create a QImage from the buffer?
-
The idea of a buffer is that you put the data to draw directly in it, rather than doing additional back-and-forth copies.
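To make that concrete, a sketch of the drawing side (illustrative names; this assumes the frame has already been uploaded into a GL texture): paintGL() still has to be overridden, but no QImage is created anywhere — the widget just draws the texture directly:

```cpp
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QOpenGLTextureBlitter>
#include <QMatrix4x4>

class GLVideoWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    void initializeGL() override
    {
        initializeOpenGLFunctions();
        m_blitter.create();   // small Qt helper that draws a textured quad
        glGenTextures(1, &m_tex);
        // ... allocate the texture storage once with glTexImage2D ...
    }

    void paintGL() override
    {
        // Draw the texture the frame data was uploaded to; no QImage here.
        const QMatrix4x4 target = QOpenGLTextureBlitter::targetTransform(
            QRectF(QPointF(0, 0), size()), QRect(QPoint(0, 0), size()));
        m_blitter.bind();
        m_blitter.blit(m_tex, target, QOpenGLTextureBlitter::OriginTopLeft);
        m_blitter.release();
    }

private:
    QOpenGLTextureBlitter m_blitter;
    GLuint m_tex = 0;         // texture the frames are uploaded to
};
```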
-
https://doc.qt.io/qt-5/qvideowidget.html#details
Try to use QVideoWidget:

```cpp
QImage img = QImage("images/qt-logo.png").convertToFormat(QImage::Format_ARGB32);
QVideoSurfaceFormat format(img.size(), QVideoFrame::Format_ARGB32);
videoWidget = new QVideoWidget;
videoWidget->videoSurface()->start(format);
videoWidget->videoSurface()->present(img);
videoWidget->show();
```
-
@SGaist thank you very much for a non-answer. I have been programming for a few decades now, and "a buffer" can mean anything, depending on the purpose you give it. The idea behind a QOpenGLBuffer, however, is not very clear to me, and I can't seem to find relevant examples; that is where I needed help.
Not only did you not answer my questions, but you also managed to provide a patronising comment. Not exactly what I expected from someone who is awarded a title for years of helping people and contributing to an awesome project.
So yes, thank you very much for pointing me in the right direction. However, that should not be an excuse for thinking you know more than others. If you have outgrown the community, maybe it's time you found something else to do.
-
@cposse Please read and follow https://forum.qt.io/topic/113070/qt-code-of-conduct
No need to insult other people here (or anywhere)!
-
@jsulm I understand that my comment does not follow the code of conduct, and I can only apologise for that. I could've just closed the topic and continued with my day. However, I did not insult anyone.
I asked for help and in return received a patronising comment. I felt disrespected and called out the behaviour of someone who, according to their title, should be the face of the code of conduct.
-
@cposse
I/we just don't see @SGaist's answers as having any "patronising" comments in them.
One thing I would say: you are a new user, and when we answer we have no idea what standard/background/knowledge posters have. If you look around you will see everything from full-blown experts to people who do not know the absolute basics of programming --- and I do mean basics. Please bear that in mind; it can be very difficult to know where to pitch one's answers, and one person's "too simple" is another person's "too complex".
-
@cposse said in Efficient raw video display:
I have been programming for a few decades now
Please share where; I would not like to work with someone who displays such a level of self-entitlement.
Your profile gives literally nothing away as to what level of experience you have, nor do your posts, such as they are. And then you are offended. It is brilliant, in a fashion.
-
@JonB I can understand your point of view and acknowledge that I am in the wrong and should've handled this far better. My reading of @SGaist's response as patronising was based entirely on my own experience/knowledge, and I failed to look at it from a more general perspective.
I apologise to the whole community for my behaviour, especially @SGaist, and will do better in the future. Thank you all for the help.
-
@cposse apologies accepted (note that I don't make assumptions about anybody's level of knowledge here).
As we are talking about OpenGL specifically (which is one hell of a subject in itself), my answer was geared directly toward that subject, though I agree it could have been more verbose. The original intent was for you to avoid creating an OpenGL buffer, putting the data there, grabbing it back, and then creating a QImage to load again into the OpenGL context to finally show it.
The short way I suggest is, if possible, to do most if not all of the required work directly on the GPU side. Performance-wise, that is where you will see the biggest gains.
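As an illustration of what "on the GPU side" can mean here: the per-pixel processing step could move into a fragment shader, so the raw frame is uploaded untouched and the shader does the work at draw time. The gain operation and the names below are placeholders, not the actual processing:

```cpp
// Fragment shader source for a QOpenGLShaderProgram; 'gain' stands in for
// whatever parameters the real per-pixel operation needs.
static const char *fragSrc = R"(
    varying vec2 texCoord;
    uniform sampler2D frame;
    uniform float gain;
    void main() {
        vec4 c = texture2D(frame, texCoord);
        gl_FragColor = vec4(clamp(c.rgb * gain, 0.0, 1.0), 1.0);
    }
)";

// In initializeGL(), assuming a QOpenGLShaderProgram member 'm_program'
// and a matching vertex shader that passes 'texCoord' through:
//   m_program.addShaderFromSourceCode(QOpenGLShader::Fragment, fragSrc);
//   m_program.link();
```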