
Combining MouseArea and MultiPoint TouchArea



  • I have a QML app that uses a lot of MouseArea objects. Under Windows, I can run it on a touchscreen, and rely on Windows to translate single-touch events into mouse events, which work as expected. There are some problems, though:

    1. I'd like to be able to tap two buttons at once.
    2. I'd prefer it if using touch didn't cause the mouse cursor to jump to the app, but instead left it where it is.
    3. I'd also prefer it if using touch didn't even give my app focus, so I could continue to do other things like type into another app.

    I'm not sure the last one is doable, but the first two certainly are. I'm looking for a simple way to do it that doesn't involve a major rewrite, so I'm thinking of creating a MouseTouchArea object that I could quickly substitute for all my MouseArea objects. It would be implemented as a MouseArea with a MultiPointTouchArea as a child, with signals from the latter translated into signals on the former. On the MultiPointTouchArea I could set maximumTouchPoints and minimumTouchPoints to 1 and mouseEnabled to false, but it has a rather different API: its signals emit lists of TouchPoints rather than single MouseEvent objects.
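    A minimal sketch of the wrapper I have in mind (the MouseTouchArea name is my own; the MultiPointTouchArea properties are the documented QML ones):

    ```qml
    // MouseTouchArea.qml -- hypothetical drop-in replacement for MouseArea
    import QtQuick 2.7

    MouseArea {
        id: root

        MultiPointTouchArea {
            anchors.fill: parent
            minimumTouchPoints: 1
            maximumTouchPoints: 1
            mouseEnabled: false   // keep this area from handling synthesized mouse events itself

            onPressed: {
                // touchPoints is a list; with maximumTouchPoints: 1
                // only the first entry matters
                var tp = touchPoints[0]
                console.log("touch pressed at", tp.x, tp.y)
                // here the touch point would be translated into something
                // the MouseArea's existing clients can consume
            }
        }
    }
    ```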

    I could add signals to the MultiPointTouchArea, such as mousePressed with a MouseEvent parameter, and then connect them to the corresponding signals of the parent, such as pressed. But is there any way for a MultiPointTouchArea handler written in JavaScript to create a MouseEvent object? Would I need a helper function written in C++?
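    One way I'm considering, short of C++, is emitting a plain JavaScript object that carries the same property names a handler would read off a MouseEvent (x, y, button, and so on). It wouldn't be a real MouseEvent (no accepted-flag semantics, no isClick), but for handlers that only read coordinates it might be enough. A sketch, with the mousePressed signal name being my own invention:

    ```qml
    import QtQuick 2.7

    MouseArea {
        id: root
        signal mousePressed(var fakeEvent)   // hypothetical forwarded signal

        onMousePressed: console.log("forwarded press at", fakeEvent.x, fakeEvent.y)

        MultiPointTouchArea {
            anchors.fill: parent
            maximumTouchPoints: 1
            mouseEnabled: false

            onPressed: {
                var tp = touchPoints[0]
                // plain JS object mimicking the MouseEvent properties a
                // handler typically reads; NOT an actual MouseEvent
                root.mousePressed({ x: tp.x, y: tp.y,
                                    button: Qt.LeftButton,
                                    buttons: Qt.LeftButton,
                                    modifiers: Qt.NoModifier })
            }
        }
    }
    ```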

    UPDATE

    I've been tinkering with this, and before even getting to the issue of how to create a MouseEvent (I'm thinking a JavaScript object with the appropriate properties might work), I'm finding I can't get the MultiPointTouchArea to work on Windows 7. If I create a MouseArea and give it a MultiPointTouchArea as a child, with anchors.fill: parent, minimumTouchPoints and maximumTouchPoints set to 1, and mouseEnabled set to false, touching it still activates the MouseArea, as if Windows has already translated the touch into a mouse operation.

    Is there something I need to do to tell Windows not to do this? I thought perhaps Windows was sending both kinds of events and expecting the application to choose which to use, but it's moving the cursor to the touch point, which wouldn't make sense in a multi-touch environment. Or is it simply not possible to put a sensor object like a MultiPointTouchArea on top of a MouseArea by making it a child?
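    For reference, here is the exact layering I'm testing, with logging on both areas so I can see which one ends up receiving the event (a diagnostic sketch of my setup, not a fix):

    ```qml
    import QtQuick 2.7

    Rectangle {
        width: 200; height: 200

        MouseArea {
            anchors.fill: parent
            onPressed: console.log("MouseArea got a (synthesized?) mouse press")

            // A child stacks above its parent, so I'd expect it to get
            // first crack at genuine touch events
            MultiPointTouchArea {
                anchors.fill: parent
                minimumTouchPoints: 1
                maximumTouchPoints: 1
                mouseEnabled: false
                onPressed: console.log("MultiPointTouchArea got a touch press")
            }
        }
    }
    ```

    On Windows 7, it's the MouseArea's log line that fires when I touch the rectangle.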
