Linking QGraphicsItems mouseMoveEvents together
Hello everyone, I am making an image analysis application using PyQt. The program loads raw data and converts it into images consisting of multiple channels (so three different outputs). These images can be drawn on, which I implemented by making custom circle and rectangle classes that inherit from QGraphicsEllipseItem and QGraphicsRectItem. Here is an example of such an implementation:
```python
from PyQt5.QtCore import Qt, QPointF, QRectF
from PyQt5.QtGui import QPen
from PyQt5.QtWidgets import QGraphicsItem, QGraphicsRectItem
import numpy as np


class Rectangle(QGraphicsRectItem):
    def __init__(self, x, y, w, h):
        super().__init__(0, 0, w, h)
        self.setPen(QPen(Qt.red, 2))
        self.setFlag(QGraphicsItem.ItemIsSelectable)
        self.setFlag(QGraphicsItem.ItemIsMovable)
        self.setFlag(QGraphicsItem.ItemIsFocusable)
        self.setFlag(QGraphicsItem.ItemSendsGeometryChanges)
        self.setFlag(QGraphicsItem.ItemSendsScenePositionChanges)
        self.setPos(QPointF(x, y))

    def get_matrix(self):
        top_left = self.sceneBoundingRect().topLeft()
        bottom_right = self.sceneBoundingRect().bottomRight()
        x = int(bottom_right.x())
        y = int(bottom_right.y())
        print(top_left, bottom_right)
        arr = np.zeros(x * y).reshape(x, y)
        print(arr, arr.shape)

    def mouseMoveEvent(self, e):
        x = e.pos().x()
        y = e.pos().y()
        if e.buttons() == Qt.LeftButton:
            # default behaviour: drag the item around
            super().mouseMoveEvent(e)
        if e.buttons() == Qt.RightButton:
            # right-button drag resizes instead of moving
            self.setRect(QRectF(0, 0, x, y).normalized())

    def itemChange(self, change, val):
        if change == QGraphicsItem.ItemPositionChange:
            return QPointF(val.x(), val.y())
        return val
```
These shapes are drawn onto a scene that contains a pixmap (which is constructed from a QImage). The scenes are inside a custom Img class which inherits from QWidget. When a shape is drawn it will have to be broadcasted to all of the scenes inside the application by making use of the ImgHandler class:
```python
def add_circle(self, x, y):
    for img in self.images:
        circle = Circle(self, x, y, 30, 30)
        img.scene.addItem(circle)
```
For example, we have three channels displayed in separate scenes. When one of the scenes is clicked, the Img class calls the ImgHandler method add_circle, so each scene gets its own object. Now, when one of these shapes is resized or moved, I want the other shapes to follow. Is there a clean way to do this?
Another solution I thought of is: being able to draw the same image on multiple scenes. According to the documentation:
"If the item is already in a different scene, it will first be removed from its old scene, and then added to this scene as a top-level.
QGraphicsScene will send ItemSceneChange notifications to item while it is added to the scene. If item does not currently belong to a scene, only one notification is sent. If it does belong to scene already (i.e., it is moved to this scene), QGraphicsScene will send an addition notification as the item is removed from its previous scene."
So it appears that it is not possible to draw the same item on multiple scenes, sadly. Perhaps there is a workaround, but I cannot think of any.
Hi and welcome to devnet,
Why do you need that many scenes? You could have just one, with your different views looking into it from different perspectives.
Thank you very much!
I am using three to five scenes because our camera measures three to five different kind of frames simultaneously. Actually, a more scientifically accurate way to describe it would be: The camera uses multiple sensors to measure different spectra of wavelength simultaneously.
When displaying these three to five images, I want to be able to get the RGB or grayscale value of each pixel for all of the images individually, and then perform some kind of mathematical operation on them.
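If each channel is available as a NumPy array (the program already converts the raw data, and NumPy is already in use above), the per-pixel arithmetic itself is straightforward. A minimal sketch, assuming three grayscale channel arrays of the same shape; the array names and the particular operation are made up for illustration:

```python
import numpy as np

# Hypothetical channel data: three grayscale frames of the same shape.
ch1 = np.array([[10.0, 20.0], [30.0, 40.0]])
ch2 = np.array([[ 1.0,  2.0], [ 3.0,  4.0]])
ch3 = np.array([[ 5.0,  5.0], [ 5.0,  5.0]])

# Any per-pixel operation can be expressed element-wise, e.g. a ratio
# between two channels offset by a third:
result = (ch1 / ch2) + ch3

print(result)  # every entry is 15.0 here
```

The same element-wise expression works for whole ROIs: slicing each channel array with the rectangle returned by the shape's sceneBoundingRect gives you the per-channel pixel blocks to operate on.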
My understanding was that one scene belongs to one view. Can I attach three scenes to one view, and access these pixels individually as I described?
One solution I have implemented now: each time a scene is clicked, a collection object is instantiated, and the matching shape for each scene is packaged into this collection, which is then shared by those shapes.
Click a scene to make a circle -> instantiate a collection object -> loop over the scenes, create a circle object for each scene, add it to that scene, and pass the collection to the circle.
Inside this collection object I then handle the moving and resizing.
```python
class ShapeCollection:
    def __init__(self):
        self.point = QPointF(0, 0)
        # ROIs associated with this position instance
        self.members = []

    # redraw the other items
    def move(self, shape):
        print(shape.pos().x(), shape.pos().y())
        for member in self.members:
            if not member.pos() == shape.pos():
                member.setPos(shape.pos())
```
The move method is called from each shape's mouseMoveEvent. Honestly, I am not sure whether this is the right way to do it; I am still learning. This is my first internship.
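The approach is reasonable: it is essentially the mediator pattern, with the collection keeping its members in sync and the position check preventing endless re-notification loops. Stripped of Qt, the core of the idea can be sketched like this (class and method names are made up; in the real code the members would be QGraphicsItems and move() would run from mouseMoveEvent):

```python
class ShapeCollection:
    """Mediator that keeps a group of shapes at the same position."""

    def __init__(self):
        self.members = []

    def add(self, shape):
        self.members.append(shape)
        shape.collection = self

    def move(self, source):
        # Push the source shape's position to every other member.
        # The equality check prevents endless re-notification loops.
        for member in self.members:
            if member is not source and member.pos != source.pos:
                member.pos = source.pos


class FakeShape:
    """Stand-in for a QGraphicsItem, just holding a position."""

    def __init__(self):
        self.pos = (0, 0)
        self.collection = None

    def mouse_moved(self, new_pos):
        self.pos = new_pos
        if self.collection is not None:
            self.collection.move(self)


group = ShapeCollection()
a, b, c = FakeShape(), FakeShape(), FakeShape()
for s in (a, b, c):
    group.add(s)

a.mouse_moved((12, 34))
print(b.pos, c.pos)  # both follow: (12, 34) (12, 34)
```

The same structure extends naturally to resizing: a second method on the collection that pushes the source shape's rect to the other members, with the same guard.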
A view indeed looks into one scene. However, you can have several views on top of one scene, which would allow you to keep all your objects in that single scene (making them much easier to synchronise and move around) and use the different views to look at different parts of that scene.
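A minimal sketch of that setup, assuming PyQt5 (the same idea works with PySide): one QGraphicsScene, several QGraphicsView widgets attached to it, so an item added or moved once is visible in every view with no synchronisation code at all:

```python
import os
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # allow running headless

from PyQt5.QtWidgets import (QApplication, QGraphicsScene, QGraphicsView,
                             QGraphicsEllipseItem)

app = QApplication([])

# One scene holds all the items...
scene = QGraphicsScene(0, 0, 400, 400)

# ...and any number of views can look into it, e.g. one per channel display.
views = [QGraphicsView(scene) for _ in range(3)]

# An item added once appears in every view; moving it moves it everywhere.
circle = QGraphicsEllipseItem(0, 0, 30, 30)
scene.addItem(circle)
circle.setPos(100, 100)

print(all(v.scene() is scene for v in views))  # True
```

Note this solves the shape-synchronisation problem but not the per-channel pixel problem: the channel images would still need to be kept as separate pixmaps (or arrays) if you want to read their pixel values independently.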