Unable to do multithreading on a Jetson Nano device, got error: QObject::startTimer
-
Unable to do multithreading using ThreadPoolExecutor for OCR and face recognition in Python on a Jetson Nano device with an IMX219 camera, using PyQt version 5.
I get the error: “QObject::startTimer: Timers cannot be started from another thread”
Can you help in resolving this issue?
-
@sweety_12 said in Unable to do multithreading on a Jetson Nano device, got error: QObject::startTimer:
Can you help in resolving this issue?
The error already tells you how to fix it: start the timers in the threads where they live (do not start a timer from a thread other than the one it was created in or moved to).
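For reference, a minimal PyQt5 sketch of that pattern (this is not the poster's code; Worker, start_work and on_timeout are hypothetical names): the QTimer is created and started inside a slot that runs in the worker thread, so the timer lives in, and is started from, the same thread.

import sys
from PyQt5 import QtCore

class Worker(QtCore.QObject):
    @QtCore.pyqtSlot()
    def start_work(self):
        # This slot runs in the worker thread, so the timer created here
        # lives in that thread and is started from it.
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.on_timeout)
        self.timer.start(1000)

    @QtCore.pyqtSlot()
    def on_timeout(self):
        print("tick from", QtCore.QThread.currentThread())

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)
    thread = QtCore.QThread()
    worker = Worker()
    worker.moveToThread(thread)
    # started is emitted in the new thread, so start_work and the QTimer it
    # creates both end up with that thread's affinity.
    thread.started.connect(worker.start_work)
    thread.start()
    sys.exit(app.exec_())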
-
@jsulm Can you please suggest an approach or give an example of how to handle this? I tried multiple approaches but couldn't resolve it.
@sweety_12 You should rather show your code. You are handling threads wrongly.
-
@jsulm Thank you for your response
from PyQt5 import QtCore
import concurrent.futures
import cv2

frame_mutex = QtCore.QMutex()

def process_frame(frame):
    frame_mutex.lock()
    # Create a ThreadPoolExecutor with 2 worker threads
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        # Submit the OCR and face recognition tasks to the executor
        text_future = executor.submit(recognize_text, frame)
        face_future = executor.submit(recognize_face, frame)
        # Wait for both tasks to complete
        text_result = text_future.result()
        face_result = face_future.result()
        # Print the results
        print("Text recognition result:", text_result)
        print("Face recognition result:", face_result)
    frame_mutex.unlock()

if __name__ == "__main__":
    # Open the CSI camera through a GStreamer pipeline
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! appsink")
    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        process_frame(frame)
        frame_mutex.lock()
        # Display the resulting frame
        cv2.imshow('frame', frame)
        frame_mutex.unlock()
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # Release the camera
    cap.release()
    cv2.destroyAllWindows()
-
@sweety_12 Can you please format the code properly? Especially in Python, indentation is critical.
-
@jsulm Sorry for that, the format got changed while pasting it.
from PyQt5 import QtCore
import concurrent.futures
import cv2

frame_mutex = QtCore.QMutex()

def process_frame(frame):
    frame_mutex.lock()
    # Create a ThreadPoolExecutor with 2 worker threads
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        # Submit the OCR and face recognition tasks to the executor
        text_future = executor.submit(recognize_text, frame)
        face_future = executor.submit(recognize_face, frame)
        # Wait for both tasks to complete
        text_result = text_future.result()
        face_result = face_future.result()
        # Print the results
        print("Text recognition result:", text_result)
        print("Face recognition result:", face_result)
    frame_mutex.unlock()

if __name__ == "__main__":
    # Open the CSI camera through a GStreamer pipeline
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! appsink")
    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        process_frame(frame)
        frame_mutex.lock()
        # Display the resulting frame
        cv2.imshow('frame', frame)
        frame_mutex.unlock()
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # Release the camera
    cap.release()
    cv2.destroyAllWindows()
-
Hi,
Why use QMutex? There's nothing in that code that uses Qt directly.
What's more, the mutex does not make much sense since process_frame already blocks waiting on the two futures.
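As a side note, if any locking were actually needed here, the standard-library lock would do the same job without pulling in Qt; a minimal, hypothetical sketch:

import threading

frame_lock = threading.Lock()  # plain Python lock, no Qt dependency

def process_frame(frame):
    # "with" acquires the lock and releases it even if an exception is raised,
    # unlike the manual lock()/unlock() pair in the posted code.
    with frame_lock:
        print("processing", frame)

if __name__ == "__main__":
    process_frame("dummy frame")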
-
With your current implementation, you can drop the mutex.
Everything is done sequentially since you are explicitly waiting on the futures.
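A minimal sketch of what that simplification could look like, assuming recognize_text and recognize_face are plain functions that do not touch Qt (the tiny placeholder bodies below stand in for the poster's real routines): the two tasks still run in parallel with each other, but process_frame only returns once both are done, so nothing else touches the frame concurrently and no mutex is needed.

import concurrent.futures
import cv2

def recognize_text(frame):
    # Placeholder for the poster's OCR routine.
    return "text"

def recognize_face(frame):
    # Placeholder for the poster's face recognition routine.
    return "face"

def process_frame(executor, frame):
    # Both tasks run concurrently, but we block here until both finish,
    # so the caller never sees a half-processed frame.
    text_future = executor.submit(recognize_text, frame)
    face_future = executor.submit(recognize_face, frame)
    print("Text recognition result:", text_future.result())
    print("Face recognition result:", face_future.result())

if __name__ == "__main__":
    # 0 is a placeholder device index; the poster opens a GStreamer pipeline string instead.
    cap = cv2.VideoCapture(0)
    # Create the executor once instead of once per frame, so the two worker
    # threads are reused for every frame.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            process_frame(executor, frame)
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    cap.release()
    cv2.destroyAllWindows()

Reusing one executor also avoids spawning two new threads for every frame, which the per-frame ThreadPoolExecutor in the posted code does.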