Why not use native widgets?
-
Thanks for the explanation.
I did not know about the double buffering. I suppose the idea is that the toolkit allocates a region for each item on some off-screen panel, with drawing and pointer events propagated in each direction? All components would need to be given a non-overlapping region, and layering would need to be resolved at the on-screen rendering stage.
However, it is reported that whenever a native look-and-feel changes, a Qt application continues to look the same until it runs on a Qt runtime supporting the change. Such an observation, if accurate, would suggest that rendering occurs through emulation rather than the off-screen buffering technique you describe.
-
I suppose the idea is that the toolkit allocates a region for each item
No. It allocates a single surface for a window. When a given region is invalidated through events or explicit calls all widgets in that region get a paintEvent call (in proper z-order) and are responsible for painting themselves onto the surface. Then the surface is efficiently flipped to the native window.
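As a concrete illustration, here's roughly what such a widget looks like on the application side (a minimal sketch; the class name is hypothetical, while paintEvent and QPainter are standard Qt API):

```cpp
#include <QPaintEvent>
#include <QPainter>
#include <QString>
#include <QWidget>

// Hypothetical widget: it owns no pixels itself. When its region of the
// window is invalidated, the backend calls paintEvent and the QPainter
// below targets the window's single shared surface.
class BadgeWidget : public QWidget
{
public:
    using QWidget::QWidget;

protected:
    void paintEvent(QPaintEvent *event) override
    {
        QPainter p(this);                      // paints onto the window surface
        p.fillRect(event->rect(), Qt::white);  // only the invalidated region
        p.drawText(rect(), Qt::AlignCenter,
                   QStringLiteral("no surface of my own"));
    }
};
```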
However, it is reported that whenever a native look-and-feel changes
I don't know who's reporting what, but it's a bit of a blanket statement and the issue is far more complicated. I'm not that well versed in other platforms, but on Windows at least it's not really up to Qt. The problem is, again, that there's no real single native theme or mechanism for that.
Qt is a C++ framework and uses WinAPI on Windows. That does provide some information about changes in themes, e.g. there's a message sent to the app when system colors change, and this is handled by Qt. The application gets a native message that is then exposed as Qt events (see QEvent::StyleChange and QEvent::ThemeChange for example) and widgets do get redrawn.
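For illustration, a widget can observe those events by overriding event() (a minimal sketch; QEvent::StyleChange and QEvent::ThemeChange are the real enum values, the class is hypothetical, and delivery of ThemeChange depends on what the platform reports):

```cpp
#include <QEvent>
#include <QWidget>

class ThemeAwareWidget : public QWidget
{
protected:
    bool event(QEvent *e) override
    {
        if (e->type() == QEvent::StyleChange || e->type() == QEvent::ThemeChange) {
            // React to the new style/theme here, then let the normal
            // invalidation machinery schedule a repaint.
            update();
        }
        return QWidget::event(e);
    }
};
```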
Then there's stuff like the light/dark theme in Windows 10/11, which is currently not directly exposed via a pure C++ API. It is exposed through things like the Mica and Acrylic materials, which are implemented through XAML, WinUI and other Windows Runtime related technologies. It's getting better every few updates, but it's still not something that can currently be integrated into a pure WinAPI app, at least not without a ton of very dirty hacks. Until Microsoft gets their act together and provides a universal, stable, easy-to-integrate API that lasts longer than a year or two, there's not much that can be done.
I suspect other platforms have their own problems like this. -
@Chris-Kawa said in Why not use native widgets?:
No. It allocates a single surface for a window. When a given region is invalidated through events or explicit calls all widgets in that region get a paintEvent call (in proper z-order) and are responsible for painting themselves onto the surface. Then the surface is efficiently flipped to the native window.
What I meant is that each widget in a window must be allocated a dedicated region on some off-screen panel, one not overlapping the region allocated to any other widget, even a widget whose on-screen representation overlaps its own.
-
What I meant is that each widget in a window must be allocated a dedicated region on some off-screen panel
No. Widgets don't persist their presentation. It's an immediate-mode system, not a retained one. There's no composition.
Think of it this way - there's a QImage the size of a window. It's not actually a QImage, it's a platform specific buffer format, but let's say for the sake of simplicity it is.
Say there are two widgets. Widget A occupies QRect(0,0,10,10) and widget B is higher in z-order and occupies QRect(5,5,20,20), so they overlap a bit. Now some event happens that requires widget A to update. The rectangle (0,0,10,10) is invalidated. The rendering backend figures out that both A and B occupy at least part of that space, so it calls A.paintEvent() and then B.paintEvent() in the correct z-order. Both of these paint onto that single QImage of the window. After both widgets finish painting, the QImage is rendered onto the native window. There's no access to the pixels drawn by A at this point; they are overwritten by B's painting. There is no painting to separate surfaces and then composing that.

You might ask why it is done this way. Imagine a 4K screen with 32-bit color. A single fullscreen buffer like that takes almost 32MB of memory. Now imagine a complicated app with a bunch of nested widgets that occupy the entire space - let's say a dock with a frame, and a couple of nested groupboxes etc. Each of those would take an additional 32MB. In a moderately sized app that would quickly go into GBs. Not to mention each resize of the app would incur the big cost of reallocating all those buffers. The way Qt does it there's just one buffer for the window. It's still more costly than if you painted directly onto the window, but that's the double buffering I mentioned. The cost is worth it to fix a lot of issues like flicker.
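A toy re-enactment of that dispatch, assuming a plain QImage stands in for the platform buffer (the ToyWidget struct and repaintRegion function are hypothetical simplifications, not Qt internals):

```cpp
#include <QColor>
#include <QImage>
#include <QList>
#include <QPainter>
#include <QRect>

struct ToyWidget {        // hypothetical stand-in for a widget
    QRect geometry;
    QColor color;
};

// Repaint the invalidated rect by asking every intersecting widget to
// paint itself, back-to-front, onto the single window buffer.
void repaintRegion(QImage &windowBuffer,
                   const QList<ToyWidget> &widgetsInZOrder,
                   const QRect &invalidated)
{
    QPainter p(&windowBuffer);
    p.setClipRect(invalidated);                  // never paint outside the dirty rect
    for (const ToyWidget &w : widgetsInZOrder) {
        if (w.geometry.intersects(invalidated))
            p.fillRect(w.geometry, w.color);     // this toy widget's "paintEvent"
    }
    // windowBuffer now holds the final pixels; A's pixels under B were
    // simply overwritten. A single fast copy to the native window follows.
}

// Mirroring the example above:
//   QImage buffer(640, 480, QImage::Format_ARGB32_Premultiplied);
//   repaintRegion(buffer, { {QRect(0,0,10,10), Qt::red},      // widget A
//                           {QRect(5,5,20,20), Qt::blue} },   // widget B
//                 QRect(0, 0, 10, 10));
```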
-
The off-screen representation has the same layout as targeted for on screen, but retains a raster representation of only those regions that have changed before being copied to the display? I'm not sure that is what is normally meant by double buffering. Rather, the term tends to indicate multiple off-screen buffers maintained for copying on screen at arbitrary relative locations. What you have described strikes me as off-screen rendering, which does reduce flicker because no intermediate state in the construction of a region is ever displayed. I'm not sure why this would be a matter for the application framework rather than the platform toolkit.
-
Double buffering means there's a front and back buffer. In APIs like DirectX the app allocates both and performs a swap. That's not exactly the case here. The front buffer is the native surface of the window (allocated by the OS when you create a window). The back buffer is allocated and painted to by Qt. Because those are different surfaces you can't simply swap them. A fast copy is performed instead. For example on Windows the BitBlt function is used to execute that.
It's double buffering in the sense that widgets don't draw directly onto the native window surface.

What you have described strikes me as off-screen rendering
That's just another name for painting to the backbuffer. Yes, painting happens offscreen and then a fast copy is performed.
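A rough illustration of the two surfaces, with Qt's internals collapsed into plain QImage/QPainter calls (a sketch only; the real code uses platform buffer formats and, on Windows, BitBlt for the copy):

```cpp
#include <QImage>
#include <QPainter>
#include <QRect>

// Paint a frame into a toolkit-owned back buffer, then copy it to the
// OS-owned front surface. The two are different surfaces, so they can't
// be swapped; a fast copy is performed instead.
void presentFrame(QPainter &frontSurfacePainter, const QSize &windowSize)
{
    QImage back(windowSize, QImage::Format_ARGB32_Premultiplied); // back buffer
    QPainter p(&back);
    p.fillRect(QRect(QPoint(0, 0), windowSize), Qt::lightGray);
    // ... every widget in the dirty region paints here, in z-order ...
    p.end();

    frontSurfacePainter.drawImage(QPoint(0, 0), back); // the "BitBlt" step
}
```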
-
@Chris-Kawa said in Why not use native widgets?:
The front buffer is the native surface of the window
I missed that. I hadn't understood that content would be added by the toolkit and would need to be preserved.
-
It's still murky why Qt can't just render the same way as fully native applications do.
-
It's hard to talk about this without writing a thick book. I listed a couple of reasons in my first response.
But let's pick further at some. First of all, what is "native" on e.g. Windows, and how do such apps render? WinAPI? MFC? WinUI? WinForms? XAML? The OS itself uses all of those (and more!) for different things.

Let's say you pick the lowest-level WinAPI, so you use CreateWindow for everything - buttons, frames, lists etc. It's not a good pick for modern apps. It swallows a ton of resources and has flicker issues. Just look at any old MFC app and how poorly it looks and behaves on modern Windows. Btw. you can force something like that in Qt: if you call winId() on a widget it gets its own native window. You can try that and see how performance tanks when you add a bunch of widgets, as in the sketch below.
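A minimal sketch of that experiment (winId() is the real call; the app itself is an ordinary, hypothetical Qt program):

```cpp
#include <QApplication>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    auto *layout = new QVBoxLayout(&window);
    for (int i = 0; i < 200; ++i) {
        auto *button = new QPushButton(QString("Button %1").arg(i));
        layout->addWidget(button);
        (void)button->winId();  // forces a native window per button; comment
                                // this out and compare the performance
    }
    window.show();
    return app.exec();
}
```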
Let's say you pick XAML - how would you integrate that into the Qt ecosystem? There's no direct translation between the widgets API and XAML. What about Designer, QUiLoader etc.? How would you make that cross-platform?
Let's say you pick WinUI - now you have to deal with NuGet, C++/WinRT, and a bunch of marshaling between C++ and the Windows Runtime. Qt absorbs a lot of heavy dependencies (GBs!) and becomes MSVC-only (no WinUI for MinGW or ICC). It's all abstracted over DirectX, Direct2D and the Windows Composition APIs. How do you integrate that with something like QOpenGLWidget? It becomes super convoluted, hacky and very platform specific.
Let's say you pick WinForms - well, that's .NET, so it's out of the question for Qt.
Also, how do you integrate QPainter with any of that? Translate painter calls dynamically into GDI, DirectX etc.? Good luck getting any performance out of that. How do you switch styles, use non-native styles or use stylesheets with that? Do a giant, very platform-specific switch/case in every drawing function?
The way Qt does it is almost entirely platform agnostic. All the logic, all the functionality is common. All the client area painting and event handling is Qt internal and cross-platform. Stuff like stylesheets or painter algorithms don't need any platform specific code. Only the look of controls and backing surface is provided by platform plugins.
Btw. can you give an example of an app that you consider "native" and which technology it uses?
-
Perhaps the overall issue is that Windows has undergone various iterations of new frameworks, but many, especially the newer ones, are very tightly integrated and lack the low-level access that would allow functionality to correspond nearly one-to-one with the low-level calls defined by Qt.
Yet it remains an open question why Qt is able to render successfully off screen, despite the challenges of accessing the low-level features of the framework, but would have difficulty doing the same directly to the window.
-
@brainchild Like I said, when using the low-level API each control is its own window. It's an OS-allocated object the app gets a handle to (HWND on Windows). Each interaction with that object is a system-level call. Having a large number of these objects is slow, heavy and very memory consuming.
Qt doesn't create any such objects. It uses APIs like DrawThemeBackgroundEx to just "paint a picture" of a control on its surface. It doesn't require allocating resources or keeping system objects around. It's a lot lighter approach.
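For illustration, the cross-platform face of that mechanism is QStyle; this hypothetical helper paints the look of a push button without any control object, native or otherwise, existing anywhere (on Windows the style ends up in calls like DrawThemeBackgroundEx):

```cpp
#include <QApplication>
#include <QPainter>
#include <QStyle>
#include <QStyleOptionButton>

// Hypothetical helper: "paint a picture" of a button at a given rect.
void paintButtonPicture(QPainter *p, const QRect &where, const QString &text)
{
    QStyleOptionButton opt;   // a plain value object, no OS handle involved
    opt.rect = where;
    opt.text = text;
    opt.state = QStyle::State_Enabled | QStyle::State_Raised;
    QApplication::style()->drawControl(QStyle::CE_PushButton, &opt, p);
}
```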
As for drawing offscreen or directly to the window - Before Qt 4.4 drawing was done directly and it had massive issues with refresh and flicker, especially on resize. Qt 4.4 introduced the concept of native and alien widgets. Alien widgets draw to that offscreen buffer while native ones get their own window. By default only top-level widgets are native now and everything inside is alien. If you need to, you can force a widget to become native like I mentioned above, with winId(), but it has a performance penalty attached. -
@Chris-Kawa said in Why not use native widgets?:
It uses APIs like DrawThemeBackgroundEx to just "paint a picture" of a control on its surface. It doesn't require allocating resources or keeping system objects around. It's a lot lighter approach.
There is no allocation of external, stateful resources? I have never seen such a use of widgets. How would they respond to events?
Before Qt 4.4 drawing was done directly and it had massive issues with refresh and flicker, especially on resize.
Why would it be different than the case of applications built directly on the platform tool chain?
-
@brainchild said:
There is no allocation of external, stateful resources?
There might be some state allocated by the platform plugin here and there, but there are no per-widget OS-level objects. Widgets are Qt objects. Painting them just makes those light API calls that don't require persistent state.
I have never seen such a use of widgets
All widgets work like that. If you've seen a widget you've seen it working like that.
How would they respond to events?
There are two types of events - OS-level events like input, resizing etc. and internal Qt events. OS-level events are sent to a window (remember - the top-level widget is native) via messages. Qt translates them into Qt counterparts and dispatches them to the appropriate widgets. For example a QPushButton doesn't deal with system-level messages like WM_LBUTTONDOWN. Those are sent to the window and Qt translates them into QEvent::MouseButtonPress and the button gets it in this platform-agnostic form.

Why would it be different than the case of applications built directly on the platform tool chain?
It's not. Like I mentioned above - a lot of old applications built this way look and behave terribly. Modern apps use one of these other technologies I mentioned, which mostly do the same as Qt does - bypass the low level API.
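For illustration, you can watch those translated events arrive with an event filter (a minimal sketch; installEventFilter and QEvent::MouseButtonPress are real Qt API, the EventSpy class is hypothetical):

```cpp
#include <QDebug>
#include <QEvent>
#include <QObject>

// Logs the platform-agnostic event a widget actually receives; no
// WM_LBUTTONDOWN or other native message ever reaches it.
class EventSpy : public QObject
{
protected:
    bool eventFilter(QObject *watched, QEvent *event) override
    {
        if (event->type() == QEvent::MouseButtonPress)
            qDebug() << watched << "received QEvent::MouseButtonPress";
        return QObject::eventFilter(watched, event);
    }
};

// Usage: button->installEventFilter(new EventSpy);
```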
-
@Chris-Kawa said in Why not use native widgets?:
Those are sent to the window and Qt translates them into QEvent::MouseButtonPress and the button gets it in this platform-agnostic form.

Right, but the appearance of the pressed button is created by a resource that knows it's a button with certain properties, that has received a mouse-down event, and now considers itself pressed, right? Widgets are intrinsically stateful resources that may change their own state based on events (or on properties being set externally), right?
-
@brainchild Yes, but the state is held by the widget, which is a Qt object: platform-agnostic, internal to your process and invisible to the system. When the time comes to paint a widget, its state is turned into the appropriate parameters to that system API call. Other than that the OS knows nothing about the existence of that widget.
A low-level API, in contrast, uses system objects for each control and holds state in those objects.
If you use a tool like Spy++ you'll see that to the OS a Qt app is just a single blank window, whereas if you code the entire app in system objects you can inspect each and every control separately. -
@Chris-Kawa said in Why not use native widgets?:
When the time comes to paint a widget, its state is turned into the appropriate parameters to that system API call.
That is surprising. Without internal state for the widget, it cannot determine its own behavior by way of internal transition, or process events by its own handlers. Meanwhile, a widget will often need to paint itself only partially, for example due to the removal of a partial occlusion, and the ability to perform such an optimization might provide value in certain cases.
-
@brainchild said in Why not use native widgets?:
Why would it be different than the case of applications built directly on the platform tool chain?
Lots of applications written in raw Win32 have flicker issues. It isn't any different.
-
@brainchild said:
Without internal state for the widget, it cannot determine its own behavior by way of internal transition
A QWidget is an application-side object. It does hold and has access to its own state. It does process Qt events synthesized from native messages and dispatched to it by Qt. It does not create a system object. It does not process native system messages. It only uses a stateless system API to draw itself.
In an app using the low-level native API a widget would not hold its own state or have a cross-platform event-handling solution. It would only hold a handle to a native object that holds the state and handles native messages. This is not what's happening here.
Meanwhile, a widget will often need to paint itself only partially
A paintEvent of a widget has a QPaintEvent* parameter that passes such information, including the rectangular area that needs updating. It doesn't have to cover the entire widget if only a partial change happened. The implementation of the paintEvent has an opportunity for optimization here if a full update is not required, as in the sketch below.
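A minimal sketch of honoring that rectangle (QPaintEvent::rect() is the real accessor; the checkerboard widget is hypothetical):

```cpp
#include <QPaintEvent>
#include <QPainter>
#include <QWidget>

class TiledWidget : public QWidget
{
protected:
    void paintEvent(QPaintEvent *event) override
    {
        const QRect dirty = event->rect();   // may be smaller than rect()
        QPainter p(this);
        // Repaint only the 32x32 tiles intersecting the dirty region
        // instead of redrawing the whole widget.
        for (int y = dirty.top() / 32; y <= dirty.bottom() / 32; ++y)
            for (int x = dirty.left() / 32; x <= dirty.right() / 32; ++x)
                p.fillRect(QRect(x * 32, y * 32, 32, 32),
                           (x + y) % 2 ? Qt::darkGray : Qt::lightGray);
    }
};
```
-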
What you are describing is Qt interpreting low-level input as high-level state changes for the widget, then expressing those changes as stateless calls to the widget rendering engine. The result is that Qt is the gatekeeper of which real effects (e.g. the left mouse button pressed and held longer than 300 ms) represent which high-level state changes (e.g. the button entering a pressed state from unpressed). The button cannot express itself, through a life of its own, by hearing the mouse events and deciding when to announce that it has entered a pressed state and that specific regions within itself need to be refreshed. Augmentation or refinement within the platform toolkit is not possible. Meanwhile, repainting occurs only when requested by Qt, and only from scratch. It does not help that the repaint call may constrain a region when the widget determines which regions have changed based on its own analysis of internal state.
-
@brainchild I don't really see where you get those ideas from. I'm kinda lost as to what we're discussing here. Are you asking something or suggesting something? I'm not sure what you want me to do.
As for how it works, here's an example of what happens when you click a button (a minimal code sketch follows the list):
- User presses a mouse button and the OS sends a message WM_LBUTTONDOWN with the global mouse position.
- Qt translates that into QEvent::MouseButtonPress, looks up the widget at the given coords and posts a QMouseEvent to that widget.
- Widget executes a mousePressEvent(QMouseEvent*) handler. In it a pressed state is determined and stored.
- Since the state changed the widget requests an update(). This marks it dirty and schedules a repaint.
- Widget executes a paintEvent(QPaintEvent*) handler in which it calls the platform plugin abstraction to paint its state. If it's animated or anything like that it can start the animation, timers, schedule further updates or whatever it needs.
- If some further updates were scheduled the widget executes further paintEvent(QPaintEvent*) handlers.
- At some point (3s or 300ms later, doesn't matter) the user releases the button and the OS sends WM_LBUTTONUP.
- Qt translates that into QEvent::MouseButtonRelease and posts another QMouseEvent to the widget.
- Widget executes a mouseReleaseEvent(QMouseEvent*) handler. In it the pressed state is terminated and the new state stored.
- Since both press and release occurred the widget emits a clicked() signal.
- Since the state changed the widget requests an update(). This marks it dirty and schedules a repaint.
- Widget executes a paintEvent(QPaintEvent*) handler in which it calls the platform plugin abstraction to paint its new state. Other housekeeping can be performed here, e.g. terminating animations.
- ...and so on.
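A minimal sketch of a widget that follows these steps (the class is hypothetical; the handlers, update() and the QStyle call are standard Qt API):

```cpp
#include <QApplication>
#include <QMouseEvent>
#include <QPainter>
#include <QStyle>
#include <QStyleOptionButton>
#include <QWidget>

class SketchButton : public QWidget
{
    Q_OBJECT
public:
    using QWidget::QWidget;

signals:
    void clicked();

protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        if (event->button() == Qt::LeftButton) {
            m_pressed = true;   // pressed state determined and stored
            update();           // mark dirty, schedule a repaint
        }
    }

    void mouseReleaseEvent(QMouseEvent *event) override
    {
        if (event->button() == Qt::LeftButton && m_pressed) {
            m_pressed = false;  // pressed state terminated
            emit clicked();     // both press and release occurred
            update();           // schedule a repaint of the new state
        }
    }

    void paintEvent(QPaintEvent *) override
    {
        // Hand the current state to the platform plugin abstraction
        // (QStyle) so it paints the right picture of the control.
        QStyleOptionButton opt;
        opt.rect = rect();
        opt.text = QStringLiteral("Click me");
        opt.state = QStyle::State_Enabled
                  | (m_pressed ? QStyle::State_Sunken : QStyle::State_Raised);
        QPainter p(this);
        QApplication::style()->drawControl(QStyle::CE_PushButton, &opt, &p);
    }

private:
    bool m_pressed = false;
};
```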
Maybe you should just read Qt's code if you're interested in such details?
- User presses a mouse button and OS sends a message