
QGraphicsScene with custom QPainter

  • I need to render QGraphicsItems in a QGraphicsScene, but using my own algorithms for rendering rectangles, text, circles and so on. This is because I am writing a drag-and-drop GUI designer tool for an embedded GUI library. To ensure that what the user sees in the QGraphicsView is the same as what he will see on his actual hardware afterwards, I'd like to use our own drawing algorithms, the ones used in our library.

    As far as I understand, I have to implement my own custom QPainter class. Is that correct?
    If so, how do I do that? If I understand correctly, I cannot just subclass the existing QPainter class and override the existing drawXxx() methods, because they are not virtual.
    Should I copy the source of the existing QPainter class and modify it accordingly?

    Also, once I've written my own QPainter class, how do I assign it to the scene so that the QPainter pointer passed to QGraphicsItem::paint() actually points to my own QPainter?
    I see a lot of problems in that regard, as I'd have to manually cast the pointer and so on. Maybe this is the wrong approach?

    I would be thankful for any kind of comment.

  • Lifetime Qt Champion

    @Joel-Bodenmann How do your own algorithms for rendering rectangles etc. work right now?

    I was wondering if they could render to a QPixmap; you could cheat that way and
    just display images in the scene (maybe).

    Making a custom QPainter will be complex, as you mention yourself; not all methods are virtual.

  • @mrjj Thank you for your reply!

    Our rendering algorithms are optimized to use whatever hardware acceleration is available on the target system. If we want to draw a rectangle and the target hardware provides hardware acceleration for rectangle drawing, we use that. It's very similar to how QPainter/QPaintEngine works. After all, the library was inspired by Qt ;)
    Anyway, all our rendering algorithms fall back to a basic drawPixel(). This means that the only interface we need from Qt is the ability to draw pixels manually. If we can use hardware-accelerated routines for rendering rectangles and so on, we will happily use them, but if that's not (easily) possible, we can work with drawPixel() alone without any problem.
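    A drawPixel()-only fallback like that can be bridged to Qt by pointing it at a buffer whose pixel layout matches a QImage format. The sketch below is a minimal illustration of that idea, not code from the thread: the FrameBuffer name is hypothetical, and the 0xAARRGGBB packing is chosen to match QImage::Format_ARGB32 so the buffer can later be wrapped in a QImage without per-pixel conversion.

    ```cpp
    #include <cstdint>
    #include <vector>

    // Minimal framebuffer that a drawPixel()-only renderer can target.
    // Pixels are packed 0xAARRGGBB, matching QImage::Format_ARGB32.
    class FrameBuffer {
    public:
        FrameBuffer(int width, int height)
            : m_width(width), m_height(height),
              m_pixels(static_cast<size_t>(width) * height, 0xFF000000u) {}

        // The only primitive the embedded library's fallback path requires.
        void drawPixel(int x, int y, uint8_t r, uint8_t g, uint8_t b, uint8_t a = 0xFF) {
            if (x < 0 || y < 0 || x >= m_width || y >= m_height)
                return; // clip instead of writing out of bounds
            m_pixels[static_cast<size_t>(y) * m_width + x] =
                (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
        }

        uint32_t pixelAt(int x, int y) const {
            return m_pixels[static_cast<size_t>(y) * m_width + x];
        }

        const uint32_t *data() const { return m_pixels.data(); }
        int width() const { return m_width; }
        int height() const { return m_height; }

    private:
        int m_width;
        int m_height;
        std::vector<uint32_t> m_pixels;
    };
    ```

    On the Qt side such a buffer can then be wrapped without copying, e.g. QImage(reinterpret_cast<const uchar *>(fb.data()), fb.width(), fb.height(), QImage::Format_ARGB32).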

  • Lifetime Qt Champion

    Ok, I'm asking
    because in a GUI designer the needed refresh rate is often not very high, so
    I was wondering if you could hook up your algorithms with QImage (via QImage::scanLine())
    and that way provide an exact preview of what the user will get.

  • If I understand you correctly, I would maintain my own framebuffer to which I render using my algorithms (which is very easy for me), then create a QImage that spans the entire display area (the entire scene) and copy the framebuffer to the QImage?
    You linked to the QImage::scanLine() method, but I don't know how that helps in my case. If anything, I would need to provide a scanLine() function for my own framebuffer which the QGraphicsScene can use, no?

    Another issue is that I still need to use QGraphicsItems, as I need to be able to move the items in the QGraphicsScene and use other features like that.
    Maybe I could maintain one framebuffer per QGraphicsItem myself and then dump that framebuffer onto the scene in my QGraphicsItem::paint() function?
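    That per-item idea can be sketched as a custom QGraphicsItem that wraps its framebuffer in a QImage view and draws it in paint(). This is a minimal sketch, not the thread's actual code: the FrameBuffer struct is a hypothetical stand-in for whatever buffer the embedded library renders into, assumed to hold 0xAARRGGBB pixels.

    ```cpp
    #include <QGraphicsItem>
    #include <QImage>
    #include <QPainter>
    #include <cstdint>
    #include <vector>

    // Hypothetical per-item framebuffer filled by the embedded library's renderers.
    struct FrameBuffer {
        int width = 0;
        int height = 0;
        std::vector<uint32_t> pixels; // 0xAARRGGBB, one entry per pixel
    };

    class EmbeddedWidgetItem : public QGraphicsItem {
    public:
        explicit EmbeddedWidgetItem(FrameBuffer *fb) : m_fb(fb) {
            setFlag(QGraphicsItem::ItemIsMovable); // designer items stay draggable
        }

        QRectF boundingRect() const override {
            return QRectF(0, 0, m_fb->width, m_fb->height);
        }

        void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *) override {
            // Wrap the framebuffer without copying; the QImage is only a view,
            // so the buffer must outlive this call (it does: the item holds it).
            QImage view(reinterpret_cast<const uchar *>(m_fb->pixels.data()),
                        m_fb->width, m_fb->height,
                        m_fb->width * int(sizeof(uint32_t)),
                        QImage::Format_ARGB32);
            painter->drawImage(0, 0, view);
        }

    private:
        FrameBuffer *m_fb; // not owned
    };
    ```

    After the embedded renderer touches the buffer, calling update() on the item triggers a repaint of just that item.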

  • Lifetime Qt Champion

    Oh, that sounds more advanced than what I was thinking :)

    I was thinking per designer item/widget.

    So we have a custom QGraphicsItem that paints an image.

    This image is painted by your algorithms,
    and the QGraphicsItem just shows it in the scene.

    "Maybe it would be possible that I maintain one framebuffer per QGraphicsItem"
    Yes, that was my idea. Maybe it's stupid, but it would be easy to test.

  • For me, both are possible: I can either have a global framebuffer that contains the entire scene or one buffer per QGraphicsItem. Our existing software design allows using either without modifying our renderers. So whatever is easier in terms of Qt will work.

    I would prefer to maintain one custom buffer per QGraphicsItem and then just dump that onto the scene using a QPixmap or a QImage.

  • Lifetime Qt Champion

    Well, I also prefer one buffer per item,
    as it seems more flexible than rendering the whole scene. :)

  • Is it correct that I should prefer using a QPixmap instead of a QImage for this purpose?

  • Lifetime Qt Champion


    Well, it depends on what you need pixel-wise.
    QImage has better support for image manipulation,
    while QPixmap is optimized for fast drawing.

    So you might need a QImage to generate the buffer, but later convert it to a QPixmap for
    repeated drawing.

    It's a wide question, as seen here.
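    The QImage-to-QPixmap handoff described above is a one-liner via QPixmap::fromImage(). A minimal sketch, assuming the frame has already been rendered into a QImage (the function name is hypothetical):

    ```cpp
    #include <QImage>
    #include <QPixmap>

    // Workflow: manipulate pixels in a QImage (CPU-side, per-pixel access),
    // then convert once to a QPixmap for fast repeated blits in the scene.
    QPixmap toDrawablePixmap(const QImage &frame) {
        // QPixmap::fromImage copies the data into a paint-device-optimized form;
        // do this once per frame update, not on every repaint.
        return QPixmap::fromImage(frame);
    }
    ```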


  • Thank you very much for your help, I appreciate it a lot!
    I'll try to get this done in the next couple of days.

  • Lifetime Qt Champion

    You are very welcome.
    I hope it will work as well as I imagine :)

    Good luck and happy coding!
