Advice on Integrating Camera + Video Processing

  • Hi all,

    Qt newbie here looking for pros/cons on integrating a machine vision camera with a Qt-based C++ app.

    We are building a real-time machine vision application that uses the PointGrey API to fetch raw frames, with some metadata, from a USB3 camera. The raw frames are processed through a proprietary GPU algorithm and the output is rendered. For test purposes, we also want to be able to open videos recorded with the camera and pass them through the same processing pipeline.

    We currently have a barebones application that reads, processes, and displays frames in a QGraphicsView. It works, but feels clunky.

    Looking at the "multimediawidgets" example projects, I really like how cleanly the camera and video file input sources integrate with QMediaPlayer and QAbstractVideoSurface. The examples give access to the video frame before rendering, but that alone is not enough for me: I need to replace the input frame with its processed version, and the processing should ideally happen on a separate thread to keep the UI responsive.
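    For what it's worth, here is a minimal sketch of the interception point I have in mind, assuming Qt 5's multimedia module. `FrameTap` is a made-up name; it subclasses QAbstractVideoSurface so it can be set as the output of either a QMediaPlayer (file input) or a QCamera viewfinder (live input), and forwards each frame instead of painting it. (As a QObject subclass with signals it needs moc, so this is a sketch rather than a standalone program.)

    ```cpp
    #include <QAbstractVideoSurface>
    #include <QMediaPlayer>
    #include <QVideoFrame>

    // Hypothetical frame "tap" that sits between the source and the renderer.
    class FrameTap : public QAbstractVideoSurface
    {
        Q_OBJECT
    public:
        QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
        {
            Q_UNUSED(type);
            // Advertise whatever formats the processing pipeline can accept.
            return { QVideoFrame::Format_RGB32, QVideoFrame::Format_ARGB32 };
        }

        bool present(const QVideoFrame &frame) override
        {
            // Hand the raw frame off for processing instead of painting it;
            // the processed result would be rendered by a separate widget.
            emit frameReady(frame);
            return true;
        }

    signals:
        void frameReady(const QVideoFrame &frame);
    };

    // Usage for file input (the same tap could serve as a QCamera viewfinder):
    //   QMediaPlayer player;
    //   FrameTap tap;
    //   player.setVideoOutput(&tap);
    //   player.setMedia(QUrl::fromLocalFile("recording.avi"));
    //   player.play();
    ```
    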

    I need some high level guidance on the following items:

    1. How to best integrate the Point Grey camera with the multimedia subsystem, so I can easily swap between live camera and file input.

    2. How to intercept a frame from either source, process it on a separate thread, and display the processed output.
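    To make item 2 concrete, here is a Qt-free sketch of the producer/consumer pattern I am describing, using standard C++ threads. The names (`Frame`, `FrameQueue`, `runPipeline`) are stand-ins: the capture side (camera callback or file reader) pushes frames into a queue, and a worker thread pops and "processes" them, so the capturing/UI thread never blocks on processing.

    ```cpp
    #include <condition_variable>
    #include <deque>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Stand-in for a raw camera frame; real code would carry pixels + metadata.
    struct Frame { int id; std::vector<unsigned char> pixels; };

    // Single-producer / single-consumer queue between capture and processing.
    class FrameQueue {
    public:
        void push(Frame f) {
            { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(f)); }
            cv_.notify_one();
        }
        bool pop(Frame &out) {  // blocks until a frame arrives or shutdown
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return !q_.empty() || done_; });
            if (q_.empty()) return false;  // drained and shut down
            out = std::move(q_.front());
            q_.pop_front();
            return true;
        }
        void shutdown() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::deque<Frame> q_;
        bool done_ = false;
    };

    int runPipeline() {
        FrameQueue queue;
        int processed = 0;

        std::thread worker([&] {       // stands in for the GPU pipeline
            Frame f;
            while (queue.pop(f)) ++processed;
        });

        for (int i = 0; i < 5; ++i)    // stands in for the capture callback
            queue.push(Frame{i, {}});
        queue.shutdown();
        worker.join();
        return processed;
    }

    int main() {
        std::cout << runPipeline() << '\n';  // prints 5
        return 0;
    }
    ```

    In a Qt build I would expect to replace the hand-rolled queue with a worker QObject moved to a QThread via moveToThread(), and queued signal/slot connections carrying the frames, but the shape of the data flow is the same.
    
    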

    Ideally I would leverage as much of the Qt framework as possible. I can roll my own implementation, but it would not be as complete or clean.

    Your guidance will be much appreciated.
