QGraphicsScene with custom QPainter
I need to render a QGraphicsScene using my own algorithms for rendering rectangles, text, circles and so on. This is because I am writing a drag'n'drop GUI designer tool for an embedded GUI library. To ensure that what the user sees in the QGraphicsView is the same as what he will see on his actual hardware afterwards, I'd like to use our own drawing algorithms that are used in our library.
As far as I understand, I have to implement my own custom QPainter class. Is that correct?
If so, how do I do that? If I understand correctly, I cannot just subclass the existing QPainter class and override the existing drawXxx() methods because they are not virtual.
Should I copy the source of the existing QPainter class and modify it accordingly?
Also, once I have written my own QPainter class, how do I assign it to the scene so that the QPainter pointer that is passed to QGraphicsItem::paint() actually is a pointer to my own class?
I see a lot of problems in that regard, as I'd have to manually cast the pointer and so on. Maybe this is the wrong approach?
I would be thankful for any kind of comment.
How do your own algorithms for rendering rectangles etc. work right now?
I was wondering if they could render to a QPixmap; you could cheat that way and just display images in the scene. (maybe)
Making a custom QPainter will be complex, as you mention yourself; not all methods are virtual.
Joel Bodenmann last edited by Joel Bodenmann
@mrjj Thank you for your reply!
Our rendering algorithms are optimized to use whatever hardware acceleration is available on the target system. If we want to draw a rectangle and the target hardware provides hardware acceleration for rectangle drawing, we will use that. It's very similar to how QPaintEngine works. After all, the library was inspired by Qt ;)
Anyway, all our rendering algorithms have a fallback to basic drawPixel(). This means that the only interface we need to Qt is the ability to manually draw pixels. If we can use hardware acceleration routines for rendering rectangles and so on, we will happily use them, but if that's not (easily) possible, we can work with drawPixel() only without any problem.
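The drawPixel()-only fallback can be sketched without any Qt at all: a plain software framebuffer where every higher-level primitive bottoms out in a single pixel-write. The Framebuffer class and fillRect() name below are invented for illustration; the real library API will of course differ.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical software framebuffer: every primitive falls back to drawPixel().
class Framebuffer {
public:
    Framebuffer(int w, int h) : w_(w), h_(h), pixels_(w * h, 0) {}

    // The only primitive the Qt side ever needs to see.
    void drawPixel(int x, int y, uint32_t argb) {
        if (x < 0 || y < 0 || x >= w_ || y >= h_) return; // clip
        pixels_[y * w_ + x] = argb;
    }

    // Higher-level primitives fall back to drawPixel() when no
    // hardware acceleration is available.
    void fillRect(int x, int y, int w, int h, uint32_t argb) {
        for (int j = y; j < y + h; ++j)
            for (int i = x; i < x + w; ++i)
                drawPixel(i, j, argb);
    }

    uint32_t pixel(int x, int y) const { return pixels_[y * w_ + x]; }
    const uint32_t *data() const { return pixels_.data(); }
    int width() const { return w_; }
    int height() const { return h_; }

private:
    int w_, h_;
    std::vector<uint32_t> pixels_; // 0xAARRGGBB, row-major, no padding
};
```

A buffer with this memory layout can later be handed to Qt without copying, since row-major 32-bit ARGB matches QImage::Format_ARGB32.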
Ok, I'm asking because in a GUI designer the needed refresh rate is often not very high, so I was wondering if you could hook up your algorithms with QImage::scanLine() and that way provide an exact preview of what he will get.
Joel Bodenmann last edited by Joel Bodenmann
If I understand you correctly I would maintain my own framebuffer to which I render using my algorithms (which is very easy for me) and then create a QImage that spans the entire display area (the entire scene) and copy the framebuffer to the QImage?
You linked to the QImage::scanLine() method but I don't know how that helps in my case. If anything, I would need to provide a scanLine() function for my own framebuffer which the QGraphicsScene can use, no?
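One way the bridge could look, assuming the framebuffer exposes its pixels as a contiguous 32-bit ARGB array (the fbPixelData()/fbWidth()/fbHeight() accessors are placeholders, not real API): the QImage constructor that takes a raw uchar* wraps the existing memory without copying, so the scene always paints whatever the renderers last wrote.

```cpp
#include <QImage>

// Hypothetical accessors into the library's own framebuffer;
// the names are made up for this sketch.
extern unsigned char *fbPixelData();
extern int fbWidth();
extern int fbHeight();

// Wrap the existing framebuffer memory in a QImage without copying.
// The buffer must outlive the QImage and stay at the same address.
QImage wrapFramebuffer()
{
    const int bytesPerLine = fbWidth() * 4; // 32-bit ARGB, no row padding
    return QImage(fbPixelData(), fbWidth(), fbHeight(),
                  bytesPerLine, QImage::Format_ARGB32);
}
```

With this wrapping there is no need for a scanLine() on your side; Qt reads the pixels directly from your buffer.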
Another issue I have is that I still need to use QGraphicsItems as I need to be able to move the items in the QGraphicsScene, and use other features like that.
Maybe it would be possible that I maintain one framebuffer per QGraphicsItem myself and then dump that framebuffer to the scene in the QGraphicsItem::paint() method?
Oh, that sounds more advanced than I was thinking :)
I was thinking per designer item/widget.
So we have a custom QGraphicsItem that will paint an image. This image is painted by your algorithms, and the QGraphicsItem just shows it in the scene.
"Maybe it would be possible that I maintain one framebuffer per QGraphicsItem"
Yes, that was my idea. Maybe it's stupid, but it would be easy to test.
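A minimal sketch of that idea, assuming one QImage-backed buffer per designer item (FramebufferItem is an invented name): the item only reports its size and blits the buffer; all actual drawing happens in your own renderers, which write into buffer() and then trigger update().

```cpp
#include <QGraphicsItem>
#include <QImage>
#include <QPainter>

// Hypothetical item that just displays a buffer rendered elsewhere.
class FramebufferItem : public QGraphicsItem
{
public:
    explicit FramebufferItem(const QSize &size)
        : m_image(size, QImage::Format_ARGB32)
    {
        m_image.fill(Qt::transparent);
        setFlag(QGraphicsItem::ItemIsMovable); // keep drag'n'drop features
    }

    // The library's renderers write into this buffer (e.g. through
    // scanLine() or bits()), then the owner calls update() on the item.
    QImage &buffer() { return m_image; }

    QRectF boundingRect() const override
    {
        return QRectF(QPointF(0, 0), m_image.size());
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *,
               QWidget *) override
    {
        // Just blit the pre-rendered buffer; Qt's own drawing
        // primitives are never used for the item's content.
        painter->drawImage(0, 0, m_image);
    }

private:
    QImage m_image;
};
```

Because the item is a regular QGraphicsItem, moving, selecting and all other scene features keep working unchanged.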
For me both are possible: I can either have a global framebuffer that contains the entire scene or I can have one buffer per QGraphicsItem. Our existing software design allows using either without modifying our renderers. So whatever is easier in terms of Qt will work.
I would prefer to maintain one custom buffer per QGraphicsItem and then just dump that on the scene using a QImage.
Well, I also sort of like one buffer per item better, as it seems more flexible than rendering the whole scene. :)
Is it correct that I should prefer using a QPixmap instead of a QImage for this purpose?
Well, it depends on what you need pixel-wise.
QImage has better support for image manipulation, and QPixmap is optimized for fast drawing.
So you might need a QImage to generate the buffer but later convert it to a QPixmap for display.
It's a wide question, as seen here.
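The generate-then-convert pattern mentioned above could look like this (a sketch; when and where the conversion is cached is up to the item's design):

```cpp
#include <QImage>
#include <QPixmap>

// Manipulate pixels in a QImage, then convert once so that repeated
// repaints can use the draw-optimized QPixmap representation.
QPixmap toDisplayPixmap(const QImage &rendered)
{
    return QPixmap::fromImage(rendered);
}
```

Redoing the conversion only when the buffer actually changes keeps the per-frame cost at a plain pixmap blit.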
Thank you very much for your help, I appreciate it a lot!
I'll try to get this done in the next couple of days.
You are very welcome.
I hope it will work as fine as I imagine :)
Good luck and happy coding !