Display images (or image streams) in QML from Python
-
wrote on 9 Feb 2021, 23:16 last edited by
Hello,
I have a stream of np.array images that were being displayed with cv2.imshow in a preliminary engineering app using cvui for control. We have decided to make a Qt Quick Application for Python, since all our math and background programming is done already and this looked like a good fit. I am trying to find an example or supported method to display the stream of images to the user in the QML. I do NOT want to use the widgets (I can see issues mixing the two), as we already made all the buttons and event handlers work with Python and the QML. I see some examples in C++ (https://www.huber.xyz/?p=477) which are essentially hacks to complete the task, but I am having a hard time combing through the documentation and converting to Python to replicate them. I assume the steps would be to convert our array to a QImage or QPixmap, then either emit it (when Python knows a new image is ready, or when a QML function detects a change in the source image) to an Image element in the QML somehow, but I have yet to successfully do this.
1. Is this supported, and if so, what imports are needed?
2. Are there examples in Python?
3. What is the workaround in Python if it is not supported?
4. If I have to use a widget that is somehow tied to the QML positionally/overlaid/underlaid, what is a good example in Python for displaying the image stream?
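For reference, a minimal sketch of the np.array-to-QImage conversion step described above (the helper name and the 8-bit BGR frame layout are assumptions, not existing code from the app):
"""
import numpy as np
import cv2
from PySide6 import QtGui

def ndarray_to_qimage(frame: np.ndarray) -> QtGui.QImage:
    # OpenCV frames are BGR; QImage.Format_RGB888 expects RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    height, width, _ = rgb.shape
    qimage = QtGui.QImage(rgb.data, width, height, rgb.strides[0],
                          QtGui.QImage.Format_RGB888)
    # copy() so the QImage owns its pixels once `rgb` goes out of scope
    return qimage.copy()
"""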
-
wrote on 11 Feb 2021, 14:22 last edited by
@eyllanesc Yes, this is a good example; however, it is in PyQt5 and not PySide6. I could make it work if there were instructions on how to make a QML extension with Python, i.e. this article but geared towards Python folk: https://doc.qt.io/qt-5.12/qtqml-tutorials-extending-qml-example.html
-
wrote on 11 Feb 2021, 14:28 last edited by eyllanesc 2 Nov 2021, 14:33
@SpencerD The modifications are minimal: for example, you will have to change pyqtProperty to Property, pyqtSignal to Signal, and pyqtSlot to Slot. I recommend you try to translate from PyQt5 to PySide6 and point out what errors you get, so we can help you with those minor inconveniences.
On the other hand, regarding the tutorial, you can use the PySide6 examples as a base.
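As an illustration of those renames, a minimal sketch (the Camera class here is made up purely for the example, not from the linked code):
"""
# PyQt5 spelling:
#   from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot, pyqtProperty
# PySide6 spelling of the same things:
from PySide6.QtCore import QObject, Signal, Slot, Property

class Camera(QObject):                 # hypothetical class, for illustration only
    indexChanged = Signal()            # was: pyqtSignal()

    def __init__(self, parent=None):
        super().__init__(parent)
        self._index = 0

    @Slot(int)                         # was: @pyqtSlot(int)
    def setIndex(self, index):
        if self._index != index:
            self._index = index
            self.indexChanged.emit()

    def getIndex(self):
        return self._index

    # was: pyqtProperty(int, ...)
    index = Property(int, fget=getIndex, fset=setIndex, notify=indexChanged)
"""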
-
wrote on 11 Feb 2021, 21:59 last edited by
Converting the "unable-to-stream-frames-from-camera" Stack Overflow post:
The actual slots and signals were easy and done in two minutes. The next hours were spent on the following.
The biggest hurdle was Q_ARG not being supported in PySide6. What was
QtCore.QMetaObject.invokeMethod(self,"setImage",QtCore.Qt.QueuedConnection,QtCore.Q_ARG(QtGui.QImage, image))
had to be turned into:
self.setImage()
or
#QtCore.QMetaObject.invokeMethod(self,"setImage",QtCore.Qt.QueuedConnection)
"""
@Slot()
def setImage(self):
    self.imageReady.emit()
"""
from
"""
@Slot(QtGui.QImage)
def setImage(self, image):
    if self._image == image:
        return
    self._image = image
    self.imageReady.emit()
"""
But now I learned the hard way that these images were being created from within the QML-registered class CVCapture by the video capture. Now I am lost, in that you can't call
QtQml.qmlRegisterType(CVCapture,"CvCaptures", 1, 0, "CVCapture")
and pass another class or some parameters into it. I have a class called image handler, with an instance of that class called image_handler; it produces all the np.array images. I want to do something like this:
QtQml.qmlRegisterType(CVCapture,"CvCaptures", 1, 0, "CVCapture(image_handler)")
or however that could be done. But it is not allowed, since the argument is not a Qt type. How would I pass the whole instance of data to the Python code that creates the custom QML type? And these images change, so I can't just pass the data once; it needs to be continuous.
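One possible way around qmlRegisterType not taking constructor arguments (a sketch under assumptions, not a definitive answer) is to keep qmlRegisterType(CVCapture, ...) as-is and expose the shared image_handler instance separately with setContextProperty, assuming the handler is (or is wrapped in) a QObject. Module and file names below are placeholders:
"""
import sys
from PySide6 import QtGui, QtQml

from image_handler import ImageHandler   # placeholder for the poster's own module

app = QtGui.QGuiApplication(sys.argv)
engine = QtQml.QQmlApplicationEngine()

image_handler = ImageHandler()            # the instance that produces the np.array images

# CVCapture stays a plain QML-registered type with a no-argument constructor;
# the shared instance is made visible to QML (and thus to CVCapture) by name.
engine.rootContext().setContextProperty("imageHandler", image_handler)

engine.load("main.qml")                   # placeholder file name
if not engine.rootObjects():
    sys.exit(-1)
sys.exit(app.exec())
"""
CVCapture can then receive the handler as a property assigned from QML, instead of importing it from main.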
-
wrote on 15 Feb 2021, 21:06 last edited by
Can you pass a singleton into a QML-registered type?
Or rather, can you import a singleton so that this Python file can access the continuously changing data from another Python file or class instance? For example, "from main import myTestClasse". So far the data is not updating to match the singleton myTestClasse, and I was wondering if there is an extra step that needs to be taken.
From the example listed above, with modifications:
"""
import numpy as np
import threading
#from PySide6 import Qt
import cv2

from PySide6 import QtCore, QtGui, QtQml
from PySide6.QtCore import QObject, Signal, Slot, Property

from main import myTestClasse


def max_rgb_filter(image):
    # split the image into its BGR components
    (B, G, R) = cv2.split(image)
    # find the maximum pixel intensity values for each
    # (x, y)-coordinate, then set all pixel values less
    # than M to zero
    M = np.maximum(np.maximum(R, G), B)
    R[R < M] = 0
    G[G < M] = 0
    B[B < M] = 0
    # merge the channels back together and return the image
    return cv2.merge([B, G, R])


gray_color_table = [QtGui.qRgb(i, i, i) for i in range(256)]


class CVCapture(QtCore.QObject):
    # once the camera capture is started from completion of the QML, send signal that the image capture has started
    started = Signal()
    imageReady = Signal()
    indexChanged = Signal()

    def __init__(self, parent=None):
        super(CVCapture, self).__init__(parent)
        self._image = QtGui.QImage()
        self._index = 0
        #self.image_handler = image_handler
        #self.m_videoCapture = cv2.VideoCapture()
        self.m_timer = QtCore.QBasicTimer()
        #self.m_filters = []
        self.m_busy = False
        #self.testImage = None
        #self.frame = None

    @Slot()
    @Slot(int)
    def start(self, *args):
        print('start the image display')
        if args:
            self.setIndex(args[0])
        self.m_timer.start(50, self)
        self.started.emit()

    @Slot()
    def stop(self):
        self.m_timer.stop()

    def timerEvent(self, e):
        #print(f'image handler data is {image_handler.oneSecondCounter}')
        if e.timerId() != self.m_timer.timerId():
            return
        #print('timerEvent Happening')
        #grabbedImage = image_handler.thumbnailImage.copy()
        #ret, frame = self.m_videoCapture.read()
        #ret = False
        #print(f'testimage size{testImage.shape}')
        if myTestClasse.image is not None:
            self.testImage = myTestClasse.image.copy()
            #print(f'thumbnail image :{self.testImage}')
            #self.frame = self.testImage.copy()
            #ret = True
            #cv2.imwrite('test3.png',self.testImage)
        else:
            return
        #if not ret:
            #print('timerEvent Stopping')
            #self.m_timer.stop()
            #return
        if self.m_busy == False:
            #print('start thread show image')
            #cv2.imwrite('test4.png',self.testImage)
            if self.testImage is not None:
                #localTest = self.testImage.copy()
                #print('start thread show image2')
                #cv2.imwrite('test4to5.png',self.testImage)
                #threading.Thread(target=self.process_image, args=(np.copy(self.testImage),)).start()
                self.process_image(self.testImage.copy())

    @Slot(np.ndarray)
    def process_image(self, frame):
        #print('process image')
        cv2.imwrite('test5.png', frame)
        self.m_busy = True
        #print(f'flag is{self.m_busy}')
        #for f in self.m_filters:
        #    frame = f.process_image(frame)
        image = CVCapture.ToQImage(frame)
        #if self._image == image:
        #    self.m_busy = False
        #    return
        self._image = image
        self.m_busy = False
        self.setImage()
        #QtCore.QMetaObject.invokeMethod(self, "setImage", QtCore.Qt.QueuedConnection)

    @staticmethod
    def ToQImage(im):
        if im is None:
            return QtGui.QImage()
        if im.dtype == np.uint8:
            if len(im.shape) == 2:
                qim = QtGui.QImage(im.data, im.shape[1], im.shape[0], im.strides[0], QtGui.QImage.Format_Indexed8)
                qim.setColorTable(gray_color_table)
                return qim.copy()
            elif len(im.shape) == 3:
                if im.shape[2] == 3:
                    w, h, _ = im.shape
                    rgb_image = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
                    flip_image = cv2.flip(rgb_image, 1)
                    qim = QtGui.QImage(flip_image.data, h, w, QtGui.QImage.Format_RGB888)
                    return qim.copy()
        return QtGui.QImage()

    def getImage(self):
        return self._image

    @Slot()
    def setImage(self):
        self.imageReady.emit()

    def index(self):
        return self._index

    def setIndex(self, index):
        if self._index == index:
            return
        self._index = index
        self.indexChanged.emit()

    image = Property(QtGui.QImage, fget=getImage, notify=imageReady)
    index = Property(int, fget=index, fset=setIndex, notify=indexChanged)
"""
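One common way to actually show the QImage property on the QML side (not necessarily what the linked example does) is a small QQuickPaintedItem that repaints whenever the property changes. A hedged sketch, with an illustrative class name:
"""
from PySide6 import QtCore, QtGui, QtQuick, QtQml
from PySide6.QtCore import Property, Signal

class ImageItem(QtQuick.QQuickPaintedItem):
    imageChanged = Signal()

    def __init__(self, parent=None):
        super().__init__(parent)
        self._image = QtGui.QImage()

    def paint(self, painter):
        if self._image.isNull():
            return
        # scale the cached frame to the item's current size and draw it
        scaled = self._image.scaled(int(self.width()), int(self.height()),
                                    QtCore.Qt.KeepAspectRatio)
        painter.drawImage(0, 0, scaled)

    def getImage(self):
        return self._image

    def setImage(self, image):
        self._image = image
        self.imageChanged.emit()
        self.update()              # schedule a repaint with the new frame

    image = Property(QtGui.QImage, fget=getImage, fset=setImage, notify=imageChanged)

QtQml.qmlRegisterType(ImageItem, "CvCaptures", 1, 0, "ImageItem")
"""
In QML the item would then sit next to the capture object and be bound roughly as ImageItem { image: capture.image } with CVCapture { id: capture }.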