Import Video via QThread

Unsolved · General and Desktop
2 Posts 2 Posters 300 Views 2 Watching
  • MAX001 (Offline)
    wrote on last edited by MAX001
    #1

    Who can help with this code? How can I rebuild it correctly?
    Initially I only wrote an object detector; then I decided to use QThread to display the recognized video in the application.
    All of the recognition logic is in def onVideo. How can I move that part of the code into Worker1(QThread) and connect everything correctly?

    I don't know how to pass frames from the recognized video to the QThread so that they are displayed.
    Also, for some reason a video that is 25 seconds long finishes in about 5 seconds.

    And in this situation, when I import another video, how do I make the QThread terminate the old playback and start a new one?

    In the places marked with # comments I tried to do it without QThread, but I also don't understand how to stop the process that displays the detected video.

    # Imports (assuming PyQt5 -- the snippet mixes PyQt5/PyQt6 enum styles):
    import time
    import cv2
    import numpy as np
    from PyQt5.QtCore import Qt, QThread, pyqtSignal
    from PyQt5.QtGui import QImage, QPixmap
    from PyQt5.QtWidgets import QLabel

    np.random.seed(20)
    class Video_Detector_obj: 
        def __init__(self, videoPath, configPath, modelPath, classesPath):
            self.videoPath = videoPath
            self.configPath = configPath
            self.modelPath = modelPath
            self.classesPath = classesPath
    
            self.net = cv2.dnn_DetectionModel(self.modelPath, self.configPath)
            self.net.setInputSize(320, 320)
            self.net.setInputScale(1.0/127.5)
            self.net.setInputMean((127.5, 127.5, 127.5))
            self.net.setInputSwapRB(True)
    
            self.readClasses()
            
            self.load_video = QLabel()
            self.load_video.setAlignment(Qt.AlignmentFlag.AlignCenter)
            self.Worker1 = Worker1()
            self.Worker1.ImageUpdate.connect(self.ImageUpdatesSlot)
    
        def readClasses(self):
            with open(self.classesPath, 'r') as f:
                self.classesList = f.read().splitlines()
            
            self.classesList.insert(0, '__Background__')
    
            self.colorList = np.random.uniform(low=0, high=255, size=(len(self.classesList), 3))
            
            print(self.classesList)
    
        def ImageUpdatesSlot(self, Image):
            self.load_video.setPixmap(QPixmap.fromImage(Image))
    
    
        def CancelFeed(self):
            self.Worker1.stop()
        
        def onVideo(self, grid_video_detect):
            global image
            self.key = True
            cap = cv2.VideoCapture(self.videoPath)
            temp_path = self.videoPath
    
            if (cap.isOpened()==False):
                print("Error opening file...")
                return
    
            (success, image) = cap.read()
    
            startTime = 0
    
            while success:
                currentTime = time.time()
                fps = 1/(currentTime - startTime)
                startTime = currentTime
                classLabelIDs, confidences, bboxs =  self.net.detect(image, confThreshold = 0.5)
    
                bboxs = list(bboxs)
                confidences = list(np.array(confidences).reshape(1,-1)[0])
                confidences = list(map(float, confidences))
    
                bboxIdx = cv2.dnn.NMSBoxes(bboxs, confidences, score_threshold = 0.5, nms_threshold = 0.2)
    
                if len(bboxIdx) != 0:
                        for i in range(0, len(bboxIdx)):
    
                            bbox = bboxs[np.squeeze(bboxIdx[i])]
                            classConfidence = confidences[np.squeeze(bboxIdx[i])]
                            classLabelID = np.squeeze(classLabelIDs[np.squeeze(bboxIdx[i])])
                            classLabel = self.classesList[classLabelID]
                            classColor = [int(c) for c in self.colorList[classLabelID]]
    
                            displayText = "{}:{:.2f}".format(classLabel, classConfidence)
    
                            x,y,w,h = bbox
    
                            cv2.rectangle(image, (x,y), (x+w, y+h), color=classColor, thickness=1)
                            cv2.putText(image, displayText, (x, y-10), cv2.FONT_HERSHEY_PLAIN, 1, classColor, 2)
    
                cv2.putText(image, "FPS: " + str(int(fps)), (20, 70), cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 0), 2)
                #cv2.imshow("Result", image)
    
    
                
                #load_video = QLabel()
                #image = imutils.resize(image, width = 1500)
                #frame = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                #image = QImage(frame, frame.shape[1], frame.shape[0], frame.strides[0], QImage.Format_RGB888)
                #load_video.setPixmap(QPixmap.fromImage(image))
                #load_video.setAlignment(Qt.AlignmentFlag.AlignCenter)
                #grid_video_detect.addWidget(load_video, 0, 1)
    
    
    
                self.Worker1.start()
                grid_video_detect.addWidget(self.load_video, 0, 1)
    
                
                key = cv2.waitKey(1) & 0xFF
                #if key == ord("q"):
                #    break
    
                (success, image) = cap.read()
            cv2.destroyAllWindows()
    
    
    class Worker1(QThread):
        ImageUpdate = pyqtSignal(QImage)

        def run(self):
            self.ThreadActive = True
            capture = cv2.VideoCapture("Road - 84970.mp4")
            while self.ThreadActive:
                ret, frame = capture.read()
                if not ret:
                    break  # end of file: leave the loop instead of spinning forever
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                # pass bytesPerLine (strides[0]) so rows are not skewed
                qt_image = QImage(rgb.data, rgb.shape[1], rgb.shape[0], rgb.strides[0], QImage.Format_RGB888)
                pic = qt_image.scaled(1500, 3000, Qt.KeepAspectRatio)
                self.ImageUpdate.emit(pic)
            capture.release()

        def stop(self):
            self.ThreadActive = False
            self.quit()
    
    
    • SGaist (Lifetime Qt Champion, Offline)
      wrote on last edited by
      #2

      Hi,

      Don't access GUI elements from threads other than the main thread. Use the Mandelbrot Example to see how you can process images in a secondary thread and send the result to the main thread for display purposes.
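
      The pattern described here — decode and detect in a worker, hand finished frames to the GUI thread, and let only the GUI thread touch widgets — can be sketched framework-agnostically with a thread and a queue. This is a simplification, not code from the thread: in Qt the queue's role is played by a QThread whose run() emits a pyqtSignal(QImage), connected to a slot that calls setPixmap().

```python
import threading
import queue

class VideoWorker(threading.Thread):
    """Processes frames off the GUI thread; results cross over via a queue.

    Stand-in for a QThread: the queue plays the role of pyqtSignal(QImage).
    """

    def __init__(self, frames, out_queue):
        super().__init__(daemon=True)
        self.frames = frames        # stand-in for cv2.VideoCapture reads
        self.out = out_queue
        self._active = True         # cleared by stop(), checked each iteration

    def run(self):
        for frame in self.frames:
            if not self._active:    # cooperative shutdown, like Worker1.stop()
                break
            self.out.put(frame * 2)  # stand-in for detection + QImage conversion

    def stop(self):
        self._active = False

# The "GUI thread" drains the queue; only it would ever call setPixmap().
results = queue.Queue()
worker = VideoWorker([1, 2, 3], results)
worker.start()
worker.join()
processed = [results.get_nowait() for _ in range(results.qsize())]
print(processed)  # → [2, 4, 6]
```

      The same stop flag answers the "import another video" question: call stop() on the old worker and wait for it to finish (QThread.wait() in Qt) before constructing and starting a new worker for the new file.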

      As for the speed of the video reading, that's up to you to read the file at the pace you want/need. To the best of my memory, OpenCV is not a media player and does not provide support for playback based on the file metadata.
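
      One way to pace playback, sketched under the assumption that the file's native rate is available via cap.get(cv2.CAP_PROP_FPS): time each loop iteration and sleep off the remainder of the frame interval. This also explains why a 25-second clip currently finishes in about 5 — the loop runs as fast as detection allows, with no delay between frames. All names here are illustrative.

```python
import time

def frame_delay(fps, elapsed):
    """Seconds left to sleep so frames appear at the file's native rate.

    fps: frames per second reported by the container metadata.
    elapsed: seconds already spent decoding/detecting this frame.
    """
    interval = 1.0 / fps
    return max(0.0, interval - elapsed)

# Sketch of the worker loop:
#   fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if metadata is missing
#   start = time.time()
#   ... read frame, run detection, emit the QImage ...
#   time.sleep(frame_delay(fps, time.time() - start))
```

      If detection takes longer than one frame interval, frame_delay returns 0.0 and the video simply plays as fast as the pipeline allows; dropping frames would be the next refinement.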

      Interested in AI ? www.idiap.ch
      Please read the Qt Code of Conduct - https://forum.qt.io/topic/113070/qt-code-of-conduct
