
How best to construct a QVideoFrame

  • Hello all. I have a relatively simple application that receives an RTP video stream and displays it. For simple testing purposes, when I've received an image into my buffer, I create a QImage out of it and then simply use QLabel::setPixmap() in the GUI to display it. Obviously that's pretty slow.

    Now that I've got the video stuff working correctly, I'm ready to start optimizing this. I saw an approach that looked perfect, but it creates QVideoFrames out of QImages.

    My question is this: Since I initially have access to the image in buffer form, instead of first creating a QImage, would it be fastest to map a QVideoFrame to memory and simply use it as my image buffer? If yes, since QVideoFrames are explicitly shared, does that require creating a new QVideoFrame each time?

    I'm just looking for the fastest solution here. Thank you for any advice!
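    The mapping approach described above can be sketched roughly as follows, assuming the Qt 5 multimedia API and an RGB32 decode buffer; the function name and pixel format are illustrative, not from the thread. The frame allocates its own buffer, `map()` exposes it, and the decoded bytes are copied in once, skipping the intermediate QImage:

    ```cpp
    #include <QVideoFrame>
    #include <QAbstractVideoBuffer>
    #include <QSize>
    #include <cstring>

    // Hypothetical helper: wrap a decoded RTP payload (RGB32, width*height*4
    // bytes) in a QVideoFrame without going through QImage first.
    QVideoFrame frameFromBuffer(const uchar *src, int width, int height)
    {
        const int bytesPerLine = width * 4;
        // Allocate a frame with its own backing store (Qt 5 constructor).
        QVideoFrame frame(bytesPerLine * height, QSize(width, height),
                          bytesPerLine, QVideoFrame::Format_RGB32);
        // Map the frame's buffer into addressable memory and copy the pixels in.
        if (frame.map(QAbstractVideoBuffer::WriteOnly)) {
            std::memcpy(frame.bits(), src, frame.mappedBytes());
            frame.unmap();
        }
        return frame;
    }
    ```

    Because QVideoFrame is explicitly shared, handing a mapped frame to the video surface while still writing into it would race; constructing a fresh frame per decoded image (as above) sidesteps that.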

  • Unfortunately I can't be of much help on this subject, as I am very new to Qt and have long since been away from C++, but I am trying to do exactly the same thing you are with an RTMP stream. If you don't mind, would you share what you have working at the moment? I have a byte stream from librtmp and am trying to figure out how to get it into a QVideoFrame.


  • CrimsonGT, no problem, what I have is pretty simple: use "this": constructor to turn your byte stream into a QImage, and from there you can get to a QVideoFrame with "this": . If you need any more guidance you'll probably want to start your own thread :) .
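    The two-step route described above might look like this sketch, assuming Qt 5 and a decoded RGB888 byte stream; the function name and format are assumptions for illustration:

    ```cpp
    #include <QImage>
    #include <QVideoFrame>

    // Hypothetical helper: byte stream -> QImage -> QVideoFrame (Qt 5).
    QVideoFrame frameFromBytes(const uchar *data, int width, int height)
    {
        // This QImage constructor wraps the existing buffer without copying.
        QImage image(data, width, height, QImage::Format_RGB888);
        // QVideoFrame(const QImage &) deep-copies the image data; copy() first
        // detaches the image from the caller's buffer so its lifetime is safe.
        return QVideoFrame(image.copy());
    }
    ```

    This is the simple route; it costs an extra copy compared with mapping the frame's buffer directly, which is why the original poster was looking at map()/bits() instead.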
