Offscreen rendering without OpenGL
We are working on a Qt application (widget based) that needs to generate a GUI without displaying it itself.
We would like to render it into a buffer (or a double buffer, to avoid tearing).
The rest of the app will do some post-processing and finally display it...
So (this is on a Linux/TX1) I assume there is a way to avoid using a window manager (xcb and/or fb).
Also, I would like to keep receiving my input events (mouse and keyboard).
Qt needs to "believe" that it is shown; in fact, it will be!
We have discovered a lot of buzzwords / objects that look close to what we need:
QOffscreenSurface: looks strongly tied to OpenGL
QBackingStore: seems to work well with a QWindow without child widgets
But so far, none of the solutions we have tried works well.
Looking forward to any clue.
Hi and welcome to devnet,
Something is not clear. You have a Qt application, but you don't want it to render its GUI itself, while still showing it later on?
Thanks for your reply.
We need a really, really low-latency overlay. The Qt widgets can lag a little, but not the video behind them. So we will send the Qt-rendered pixmap to an FPGA, which will ultimately display it on the screen.
Can you describe what your application does? It seems you might be using the wrong tool to get what you want done.
The FPGA acquires an analog video stream.
The CPU generates a high-quality Qt overlay / GUI (buttons, menus, ... up to a web browser).
The CPU sends the resulting GUI to the FPGA (this is what we are trying to do!).
The FPGA handles the fusion of the overlay and displays the result.
It is a kind of augmented-reality app.
For now we have:
QWidget *TopWindow = QApplication::topLevelWidgets().first();
QBackingStore *store = TopWindow->backingStore();
QPaintDevice *pdev = store->paintDevice();
const auto image = dynamic_cast<QImage *>(pdev);
image->save("test.bmp"); // ultimately we will not save it like that...
It works if we put this code in a button handler (but then we need to click to refresh the bmp).
But in a paintEvent, it is pretty glitchy, sometimes showing one button, sometimes the other...
I know Qt is not typically used this way, but given the quality we want for our GUI (and think about the web browser), we cannot do this with a simple widget library alone...
A GUI a bit like the QML camera example?
Yes, with the difference that we don't plan to use QML (but why not?), and it is not Qt that will handle the overlay, because we require no additional delay on the video.
To be really clear, here is a simple block diagram:
I was rather wondering if you should not consider making a backend for Qt Multimedia that would provide your video image to Qt through a QVideoWidget for example.
We had already considered using the CPU / GPU to do the composition.
I know that approach would be easier...
But we are targeting 80 ms latency glass-to-glass.
60 ms is already consumed by the HW.
In SW I cannot genlock/control VSync, so I will get another frame of delay on the output side.
The FPGA can sync its input VSync to the output VSync.
So, we need to find the event on which the rendered frame is complete, because it looks like paintEvent is called many times with an incomplete pixmap.
We are looking for an event probably just before the flush() or something similar.
Just thinking out loud: shouldn't you then have a custom QPA plugin that would send the UI directly to the FPGA?
Yes, a QPA plugin sounds good, but it is not well documented.
We also hesitate between that and writing a new "video driver" (LinuxFB?).
That solution would also ease portability and demos/tests on a local display.
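On the portability point: Qt already lets you pick the platform plugin at launch time, so the same binary can run against a local display for demos and headless in production. A sketch; `myapp` is a placeholder name:

```shell
# demo/test on the local display (xcb is the Linux default)
./myapp -platform xcb

# headless: no window manager, rendering still happens
QT_QPA_PLATFORM=offscreen ./myapp

# directly on the framebuffer, no X at all
./myapp -platform linuxfb:fb=/dev/fb0
```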
Finally, we know that the snippet above, run from a timer, works.
We will see if it is acceptable (robust and responsive enough) and whether getting the input devices back will be possible.
I just wanted to be sure we were not doing anything complicated for nothing.
Thanks for your help.