Efficient image rendering from Numpy
eovf last edited by
I just started playing around with PySide2, having never touched Qt before, to visualize output from a neural network that produces video frames. The resolution can go up to 4K, but is mostly 1080p. I'm wondering whether there is any way to make the code below run faster. It's cobbled together from some StackOverflow examples:
```python
import sys

import numpy as np
from PySide2 import QtWidgets
from PySide2 import QtGui
from PySide2 import QtCore


class Dataloader:
    def __init__(self):
        pass

    def __getitem__(self, idx):
        return np.random.randint(0, 255, size=(1080, 1920, 3), dtype=np.uint8)

    def __len__(self):
        return 1000


class Viewer(QtWidgets.QWidget):
    """Application main class."""

    def __init__(self, app, dataloader):
        super().__init__()
        self._app = app
        self._dataloader = dataloader
        self._index = 0
        self._image_widget = None
        self._refresh_timer = None
        self.setWindowTitle("Image Viewer")

    def run(self):
        self.show()
        self._start_playback(30)
        sys.exit(self._app.exec_())

    def refresh(self):
        self._index += 1
        data = self._dataloader[self._index]
        height, width = data.shape[:2]
        qimage = QtGui.QImage(data, width, height, 3 * width,
                              QtGui.QImage.Format_RGB888)
        # Should we just update the image.
        update = self._image_widget is not None
        if not self._image_widget:
            self._image_widget = QtWidgets.QLabel()
            self._image_widget.setScaledContents(True)
        pixmap = QtGui.QPixmap(QtGui.QPixmap.fromImage(qimage))
        self._image_widget.setPixmap(pixmap)
        if update:
            self._image_widget.update()
        else:
            self._image_widget.show()

    def _playback_loop(self):
        # Check if playback is still on.
        if not self._play:
            return
        # Still playing so draw new frame.
        self.refresh()
        # Set timer for next frame.
        self._timer = QtCore.QTimer()
        self._timer.timeout.connect(self._playback_loop)
        self._timer.start(self._msecs)

    def _start_playback(self, msecs):
        self._play = True
        self._msecs = msecs
        self._playback_loop()


def main():
    app = QtWidgets.QApplication()
    dataloader = Dataloader()
    viewer = Viewer(app, dataloader)
    viewer.run()


if __name__ == "__main__":
    main()
```
I'm not sure whether there's a way to do the actual image drawing more directly from the NumPy array. I also have no idea how many memory copies I end up making from the array, since I don't know what goes on under the hood. Can anyone tell me if this is the best I can do with PySide2 without resorting to an OpenGL canvas? (I need the code to stay easily readable by people who only know PyTorch/NumPy/Python.)
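One thing worth checking on the NumPy side, independent of Qt: a QImage-style constructor that takes a raw buffer plus a bytes-per-line stride can only wrap the array without copying if the array is C-contiguous and its row stride matches the `3 * width` you pass. A sliced or transposed array silently breaks that. A small sketch of the check in plain NumPy (the helper name is mine, not a Qt or NumPy API):

```python
import numpy as np

def frame_buffer_info(data):
    """Return (bytes_per_line, wrappable) for an H x W x 3 uint8 frame.

    A buffer-wrapping image constructor expects a C-contiguous array
    whose row stride equals the bytes-per-line argument; anything else
    needs np.ascontiguousarray first, which is itself a copy.
    """
    height, width = data.shape[:2]
    bytes_per_line = 3 * width
    wrappable = data.flags["C_CONTIGUOUS"] and data.strides[0] == bytes_per_line
    return bytes_per_line, wrappable

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(frame_buffer_info(frame))           # → (5760, True): safe to wrap directly
print(frame_buffer_info(frame[:, ::2]))   # → (2880, False): strided view, would need a copy
```

Note that even with a wrappable buffer, `QPixmap.fromImage` still performs its own conversion, so this only rules out the avoidable copies on the NumPy side.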
(I still need to figure out how to draw the image into the main window instead of creating a new one...)
Hi and welcome to devnet,
Good question. I don't currently know which QImage constructor ends up being used, so I can't say how many copies are generated. Out of curiosity, what performance are you currently getting?
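To put a number on that, it can help to time the non-Qt part of the loop on its own first; whatever Qt adds on top is then easy to isolate. A rough sketch (the `tobytes` call is just a stand-in for one full-frame copy, such as a QImage-to-QPixmap conversion; actual Qt costs will differ):

```python
import time
import numpy as np

def time_frames(n_frames=30, shape=(1080, 1920, 3)):
    """Time n_frames of random-frame generation plus one full buffer copy
    each, returning the average milliseconds per frame."""
    start = time.perf_counter()
    for _ in range(n_frames):
        frame = np.random.randint(0, 255, size=shape, dtype=np.uint8)
        _ = frame.tobytes()  # stand-in for one frame-sized memory copy
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / n_frames

print(f"{time_frames():.1f} ms/frame before any Qt work")
```

If this baseline alone is already near your 30 ms frame budget, no amount of Qt-side tuning will get you to real-time playback.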
PySide2 being pretty new, I suggest bringing this question to the PySide mailing list, where you'll find PySide's developers and maintainers. This forum is more user oriented.
You might also be interested in the TensorWatch project.