
Using QML only as a very thin UI layer and doing the rest in C++

  • Hello,
    I have started writing an app for the usual mobile platforms, and later also for desktop (mainly for fast debugging at first).

    I'm now in the planning phase, working out how the architecture of this app should look.
    I started with a few examples of the "old" elements like QWidget and really liked all the possibilities. But I think it would be a hard road to do everything I'm planning with mainly desktop-oriented objects. So I looked at the "hyped" QML language. At first, sure, it's a cakewalk to build nice things with it. But if you plan a lot of low-level OpenGL work that interacts directly with the UI, it becomes, from my point of view, a little inflexible.

    Now, first, what I'm planning:
    A few layers:
    one (bottom) layer for the content (which has to be in OpenGL),
    a next layer for direct controls that are visible all the time,
    then some kind of dialog level for pop-up windows.

    Everything is handled in pure OpenGL, because I want to maximize performance, and I also have some quite good ideas for that (caching with framebuffers, etc.).
    (I'm kind of a performance junkie who hates stuttering movement ;) )
    The next thing that makes all of this somewhat more difficult (from the QML point of view):
    the windows and controls shall be blurry-transparent, like the iOS 7 theme.
    This blur is also cached in a framebuffer, since the content is mostly static.

    The next thing that really disappointed me is the touch gesture handling (at least on Android).
    The Flickable does a really bad job: it doesn't feel nearly as natural as the flick gesture detection on native Android. It somehow hesitates a little before it starts to flick; there is no real flow.
    I experimented a lot with all the settings of this element, but none of them made it feel really nice.
    I've written my own simple flick gesture detection, just averaging the last 10 touch points as the velocity (as a gesture in QtGui), and it worked much better.
    That would not be the biggest problem if there were a simple way to bind custom gestures like the one described above...
    By the way, why was the GestureArea from Qt Quick 1 discontinued?
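    The averaging approach described above can be sketched in plain C++. Note that the names (TouchSample, averageVelocity) are made up for illustration; in an actual Qt app this logic would sit inside a custom gesture recognizer fed by touch events:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of flick-velocity estimation from recent touch samples.
struct TouchSample { float x; float y; float t; }; // position (px), time (s)
struct Velocity    { float vx; float vy; };        // px per second

// Average velocity over (up to) the last n samples, assumed ordered by time.
Velocity averageVelocity(const std::vector<TouchSample>& samples, std::size_t n)
{
    if (samples.size() < 2)
        return {0.0f, 0.0f};
    const std::size_t count = std::min(n, samples.size());
    const TouchSample& first = samples[samples.size() - count];
    const TouchSample& last  = samples.back();
    const float dt = last.t - first.t;
    if (dt <= 0.0f)
        return {0.0f, 0.0f};
    return {(last.x - first.x) / dt, (last.y - first.y) / dt};
}
```

    Averaging over a window like this smooths out the sample noise that makes a single-delta velocity estimate jumpy.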

    Now the main question, as stated in the title:
    As I wrote, I plan to do most things in pure OpenGL. Some elements that would be too much work to rewrite, like text input (by the way, should text selection AND editing both be possible on Android — is this planned for Qt 5.4?) and other similar things, I want to embed into the "window manager" I'm going to write.
    Is it possible to write just a thin QML layer that routes the events to each graphics object I'm planning?
    E.g. having a button in OpenGL, with a texture based on a framebuffer, that receives the touch events QML detects in a given area (a MultiPointTouchArea, for example)?
    Is there a way to pass all touch points (as objects) directly to an object in C++?
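    One possible shape for such a thin layer, sketched without Qt types so the routing logic stands on its own (TouchRouter and the region test are assumptions, not a Qt API): the QML side, e.g. a MultiPointTouchArea, would collect the points and hand them to C++ via an invokable method on a context object, and something like this would fan them out to the OpenGL elements by hit rectangle:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch: dispatch touch points, detected by a thin QML layer,
// to plain C++ objects representing OpenGL-rendered elements.
struct TouchPoint { int id; float x; float y; };
struct Region     { float x, y, w, h; };   // hit rectangle of one GL element

using TouchHandler = std::function<void(const TouchPoint&)>;

class TouchRouter {
public:
    void addTarget(Region r, TouchHandler h) {
        targets_.push_back({r, std::move(h)});
    }

    // Dispatch each point to every target whose region contains it.
    void route(const std::vector<TouchPoint>& points) const {
        for (const TouchPoint& p : points)
            for (const Target& t : targets_)
                if (p.x >= t.region.x && p.x < t.region.x + t.region.w &&
                    p.y >= t.region.y && p.y < t.region.y + t.region.h)
                    t.handler(p);
    }

private:
    struct Target { Region region; TouchHandler handler; };
    std::vector<Target> targets_;
};
```

    With this split, QML stays responsible only for event detection, while all hit testing and reaction logic lives in C++ next to the OpenGL code.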

    Best regards

  • Maximize performance? Why do you think your custom UI can outperform the Qt Quick scene graph?

    I think you should implement your UI in QML first and then measure the performance.

  • Well, I'm not against QML, nor do I want the pain and additional work of writing my own window stack, etc. It's just that, right now, I don't like the gesture handling at all, and there is the problem of correctly managing something like a blurry OpenGL background (and its performance implications: for instance, I know exactly when the blur needs an update). I just miss some of the flexibility I would have if I were doing this with desktop widgets...
