
pyqt5 multiple video streams



  • Hello,

    I have a question regarding streaming video to my GUI. I currently have two threads that manage two different cameras. They run at different speeds: one at 120fps and the other at 40fps. In reality, while one camera will be running at 120fps, every three frames will be co-added, so the display output will actually be only 40fps.

    I laid out my GUI with two QLabels side by side, one for each of the two camera streams. I set the first one up with an emitter that signals for each frame of the video stream. It works perfectly.

            # wrap the raw grayscale frame (a numpy array) in a QImage and emit it;
            # passing the row stride explicitly keeps QImage from assuming 32-bit-aligned rows
            rawImage = QtGui.QImage(img.data, img.shape[1], img.shape[0], img.strides[0], QtGui.QImage.Format_Grayscale8)
            self.utcPixmap.emit(rawImage)
    ...
            # connecting the slot directly does the same job as wrapping it in a lambda
            self.utcThread.utcPixmap.connect(self.setUTCPixMap)
    ...
        def setUTCPixMap(self, p):

            # scale to the display dimensions and hand the pixmap to the QLabel
            p = QtGui.QPixmap.fromImage(p)
            p = p.scaled(self.sysV.XDim, self.sysV.YDim, QtCore.Qt.KeepAspectRatio)
            self.UTCImage.setPixmap(p)
    

    I then set up the next camera with its own thread and instructions matching the above.

    However, the emitters are obviously triggering so fast and so often that the two streams conflict. I'm getting very choppy images on both displays.

    The signals are obviously all being delivered to the same main window. How do I go about reducing the conflict between the two streams?


  • Banned

    Not sure if I can help you on this specifically, but I can share some thoughts, since it doesn't matter what you are streaming; every stream has the same issues.

    So VideoStream-A at 120fps means you need to update the GUI about every 8 milliseconds (1000/120 ≈ 8.3 ms), and VideoStream-B at 40fps means you need to update the GUI about every 25 milliseconds. On top of that there is the time it takes to render each frame, which also needs to be taken into consideration.

    Still, to make all this work properly you need to coordinate the updates. Once you know the high-end average rendering time, you add it to both intervals; to make it easy let's just say it takes 2 ms to render, so you have 1 frame per 10 ms and 1 frame per 25 ms, meaning you are pulling roughly 2 A-frames for every 1 B-frame. This is probably best handled by pushing those frames into a Queue, as that speeds up the transfer and lets you move the incoming stream captures off into their own processes, so they do not bog down the rendering process. Within the rendering you are then pulling A-frame > render, A-frame > render, B-frame > render, ... repeat. This is to say you cannot rely on a straight stream; you will have to handle it more manually within the code in order to keep the two synced.

    Of course, with video streaming one should keep in mind that movie-quality video is 24 fps, so anything faster than that is going to be overkill. It might mean you really ought to slow that 120 fps down by dropping 2 frames for every frame you put into the queue; on the GUI side you are then basically doing pull A-frame > render, pull B-frame > render, ... repeat, displaying both of them at about movie quality.

    So roll that around a bit and see if you can figure out how you might want to implement that.
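    As a rough, untested sketch of the push side (grab_frame() and is_running are placeholders for whatever your capture library actually provides):

        import queue

        frame_q = queue.Queue(maxsize=8)      # bounded, so a slow GUI cannot build up a backlog

        def capture_loop(camera, keep_every=3):
            """Capture at full speed but queue only 1 frame out of every `keep_every`."""
            count = 0
            while camera.is_running:              # placeholder run flag
                frame = camera.grab_frame()       # placeholder capture call
                count += 1
                if count % keep_every:            # drop 2 out of every 3 frames
                    continue
                try:
                    frame_q.put_nowait(frame)
                except queue.Full:                # renderer is behind; drop rather than stall
                    pass

    The render side then pulls from the queue at its own pace, so the capture rate and the display rate stay decoupled.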


  • Lifetime Qt Champion

    Hi,

    One very important thing: to the best of my knowledge there is currently no consumer graphics card that outputs at 120fps, and you would also need an adequate screen.

    The technique you are currently using is also not optimized for speed or fast rendering.

    If you need to go fast you are going to have to consider OpenGL and likely C++ to move things around faster.

    What kind of cameras are you using?



  • @SGaist

    The two cameras are similar to these; the high-speed camera is connected to a framegrabber:
    http://www.sensorsinc.com/products/detail/microswir-camera
    https://en.ids-imaging.com/store/ui-3271le.html

    This is being used in an R&D environment, only working with fast-exposure, low-resolution images, 600x300 for example. Nothing that is movie quality.



  • @Denni-0

    I'm assuming that you're suggesting a single queue that I can insert into from either camera A or B, in their respective threads. Then, perhaps by using a QTimer, I could set up the main window to read the queue every 10 ms, say, and pop each frame into the QLabel it's associated with?

    To be fair, the top speed is actually going to be more like 40fps. The camera will be collecting at 120fps, but I'll be co-adding frames in groups of three: frames 1, 2, and 3 will be co-added to reduce noise, and that combined frame is what gets displayed on the UI.
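    For reference, the co-add itself is just an average over three frames, roughly like this (assuming the frames arrive as 8-bit numpy arrays):

        import numpy as np

        def co_add(frames):
            # frames: a list of three HxW uint8 arrays from the 120fps camera
            stack = np.stack(frames).astype(np.uint16)    # widen so the sum cannot overflow
            return (stack.sum(axis=0) // len(frames)).astype(np.uint8)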


  • Banned

    Actually, no, I was suggesting two separate queues -- actually Pipes. This way your capturing is done in Process-B and Process-C, and your video rendering is done in Process-A.

    I am assuming that with all this video processing you are using a multi-core machine, in order to render things more cleanly.

    Okay, so you're using an algorithm to merge ("co-add") the 3 frames into 1 updated frame. In the end you are still doing video at about 24 fps, as you really do not need anything faster than that.

    Yes, you could have a QTimer that fires every 20ms to pull a frame from a Queue, but that would actually be better handled by feeding the frames at whatever rate you want them updated.

    Basically you handle a Pipe (just another kind of Queue) by pushing data in one end and pulling it from the other end -- but you use an event to tell the other end there is something pushed that needs to be pulled. So you set up your feeds to be in sync: Push-A at 0 ms, then 13, then 25, and so on, while Push-B goes at 12, then 24, and so on. This way the receiver gets the signal to pull from A and then pull from B at roughly even intervals. That is one way to do it, though I'm not sure it's the best.

    Another way would be to just push 1 frame per 24 ms down each Pipe (Queue) and then have the pull side use a QTimer to pull the frames periodically. The problem with this version is that you may overstuff your queue if the pushing occurs faster than the pulling, which is why I do not do that with my streaming.

    Of course there might even be a third way to do this that I have not considered. Either way, on one side you are trying to render video at visual quality, and on the other side you are trying to provide that video in a timely, but not too timely, manner. In either case the Queue or Queues would be best, and here I would go with 2 queues, because otherwise you need a means of identifying which feed belongs to which video.
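    A bare-bones, untested sketch of the 2-Pipe version (capture_frame() is a placeholder for your actual camera call):

        import multiprocessing as mp

        def capture_proc(conn, ready, cam_id):
            # Process-B / Process-C: grab a frame, push it down the pipe,
            # then flag the render side that something is waiting to be pulled
            while True:
                frame = capture_frame(cam_id)     # placeholder capture call
                conn.send((cam_id, frame))
                ready.set()

        if __name__ == '__main__':
            recv_a, send_a = mp.Pipe(duplex=False)
            recv_b, send_b = mp.Pipe(duplex=False)
            ready = mp.Event()
            mp.Process(target=capture_proc, args=(send_a, ready, 'A'), daemon=True).start()
            mp.Process(target=capture_proc, args=(send_b, ready, 'B'), daemon=True).start()
            while True:                           # Process-A: the render loop
                ready.wait()
                ready.clear()
                for conn in (recv_a, recv_b):
                    while conn.poll():            # drain whatever each pipe holds
                        cam_id, frame = conn.recv()
                        # hand (cam_id, frame) off to the widget for that camera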



  • @Denni-0

    Ok, this is a rabbit hole, haha. No, I am not currently using multiprocessing; I'm using threads via QtCore.QThread.

    I understand the logic of using multiprocessing over threads, but I'm not entirely up on how pipes would be used in this context. How do I set up an event that is triggered when something is available, and keep the two pipes synchronized?

    It sounds very similar to how emit() works, except I would be using a queue-like, FIFO means of transferring data. The synchronization between the two Queues is where I'm getting lost.

    I guess for now the first step is to set things up as Processes rather than threads and figure out how to set an event tied to the pipe containing data.


  • Lifetime Qt Champion

    To achieve the speed you want, you really should consider using something closer to the hardware for your image processing.

    Writing a QtMultimedia camera backend could be an option and you would get the QCamera/QVideoWidget integration "for free".

    Out of curiosity, what OS are you running on?



  • @SGaist

    Linux on a TX2.


  • Lifetime Qt Champion

    Can you get the camera video stream using GStreamer?
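    As a quick sanity check, assuming the camera shows up as a V4L2 device (/dev/video0 is an assumption), something like this with the GStreamer Python bindings should display the stream:

        import gi
        gi.require_version('Gst', '1.0')
        from gi.repository import Gst, GLib

        Gst.init(None)
        # substitute your actual device node for /dev/video0
        pipeline = Gst.parse_launch('v4l2src device=/dev/video0 ! videoconvert ! autovideosink')
        pipeline.set_state(Gst.State.PLAYING)
        GLib.MainLoop().run()    # keep the pipeline alive until interrupted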



    To capture frames from each setup I'm using the respective vendor libraries. For the UTC camera connected to the framegrabber I'm using the XCLIB library from EPIX, while the IDS camera uses its own library, or rather the Python wrapper, pyueye.


  • Banned

    @nightpoison said in pyqt5 multiple video streams:

    > No, I am not currently using multiprocessing; I'm using threads via QtCore.QThread.

    Great, and I hope you did not sub-class QThread, as the current documentation is still Qt4-style and that is not how it's done in Qt5.

    > I understand the logic of using multiprocessing over threads, but I'm not entirely up on how pipes would be used in this context. How do I set up an event that is triggered when something is available, and keep the two pipes synchronized?

    Look up multiprocessing.Event() -- that is what you use with a Pipe. You need Pipes to communicate between processes, because Signals/Slots only communicate across threads, not across processes. Pipes are like Queues, and an Event is sort of like a very simplistic Signal/Slot: Pipes/Events are used across processes, while Queues/Signals/Slots are used across or within threads.

    > It sounds very similar to how emit() works, except I would be using a queue-like, FIFO means of transferring data. The synchronization between the two Queues is where I'm getting lost.

    Yes, similar but not the same; the Event is basically just an on/off flag.

    > I guess for now the first step is to set things up as Processes rather than threads and figure out how to set an event tied to the pipe containing data.

    And see my chat message to you, because I have already gone through all the headaches of finding out how to do this.


  • Lifetime Qt Champion

    @nightpoison do they provide example applications using their libraries? Especially at 120 fps?



  • @SGaist

    Yes, they do. I'm able to pull the frames and update the UI at 120fps for a single stream. Once I added the second stream, that's where things went south: using emit() just caused issues that resulted in reduced speeds and torn, distorted images.

    Keep in mind that, in reality, I actually don't need the 120fps output on the UI; I probably don't even really need the 40fps. So I've slowed everything down for now. I'm using a single Queue() to load frames from each camera and a QTimer() to check whether the Queue() is empty. Each frame is saved as a tuple with a camera ID, so I know which frames go to which QLabel. I'm limited to about 20fps right now; any faster and the queue builds up a backlog fast.
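    Stripped down, the polling side looks roughly like this (self.IDSImage is a stand-in name for the second QLabel; error handling left out):

        # in the main window's __init__: a QTimer drains the shared queue
        self.frameTimer = QtCore.QTimer(self)
        self.frameTimer.timeout.connect(self.drainQueue)
        self.frameTimer.start(16)                 # poll roughly 60 times a second

        # elsewhere in the class: route each frame to its QLabel by camera ID
        def drainQueue(self):
            while not self.frameQueue.empty():
                camId, img = self.frameQueue.get_nowait()
                label = self.UTCImage if camId == 'utc' else self.IDSImage
                label.setPixmap(QtGui.QPixmap.fromImage(img))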


  • Banned

    @nightpoison see Chat


  • Lifetime Qt Champion

    Then, as already suggested, you should write a QCamera backend using these libraries. You'll get better performance.

