I found the following solution.
I simply use a QImage as my rendering surface and export that image. Drawing is still done with a QPainter; the QQuickPaintedItem is only used to display the image to the user.
QML items like a Label are drawn onto the QImage surface via QPainter:
QImage image(640, 480, QImage::Format_ARGB32); // painting on a default-constructed (null) QImage fails
QPainter painter(&image);
painter.setPen(QColor("#FFFFFF"));
painter.setFont(QFont("Helvetica", 15));
int x = 20;
int y = 20;
painter.drawText(x, y, "Hello World");
painter.end(); // QPainter has no done(); end() finishes painting
For more generic use, I provide a paint(QPainter &painter, int x, int y, QSize resolution) interface in C++ for each "rendered object", as an alternative to a QObject whose rendering is provided by Qt.
On the Renderer side this looks like this:
class Renderer : public QObject
{
...
Q_SLOT void processImage(const QImage &image)
{
_image = image.copy();
QPainter painter;
painter.begin(&_image);
_draw_fps_text(painter);
...
painter.end();
}
Q_SLOT void setFPSOptions(const QList<FPSOptions> &fpsOptionsList)
{
_fps_options_list = fpsOptionsList;
}
void _draw_fps_text(QPainter & painter)
{
int video_count = _get_video_count();
int image_width = _image.size().width();
int image_height = _image.size().height();
for(int i = 0; i < _fps_options_list.size(); ++i)
{
int x_padding = image_width / 28;  // integer division, no cast needed
int x_step = image_width / video_count;
int y_step = image_height / 15;
int x = x_padding + x_step * i; // width
int y = y_step; // height
_fps_options_list[i].paint(painter, x, y, _image.size()); // the member is _image, not image
}
}
};
My rendering pipeline changed from
VideoCapture -> (frame) -> Processing -> (QImage, data) -> QML Viewer -> (Grab QImage from View) -> Exporter -> (IO)
to
VideoCapture -> (frame) -> Processing -> (QImage, data) -> Renderer -> (QImage) -> Exporter -> (QImage) + (IO) -> QML Viewer
The Exporter currently pipes the QImage to the QML Viewer, since it is currently impossible to let a QML object run in another thread. The optimal solution would be to duplicate the output of the Renderer to both the Exporter and the QML Viewer; see this thread.