Why not use native widgets?
-
Why does Qt not use the widget toolkits native to a target desktop platform, such as Windows or macOS, instead preferring to emulate their look and feel?
-
I wouldn't call that emulation. It's more of a hybrid. Platform plugins do invoke native APIs to render basic controls, but then paint them onto a double-buffered Qt surface that is presented on a native window. There are a lot of reasons for this approach.
One is better performance. There's only one native window resource allocation, and all content of the window is seen as a single surface, which can be efficiently rendered with any type of API, like Vulkan, DirectX, OpenGL, WinAPI or whatever the platform provides. It makes fewer system calls, is faster and uses fewer resources.
Around Qt 4 the introduction of the double-buffering approach helped resolve a lot of flickering issues that previously occurred due to the integration of native and non-native widgets. Keep in mind that different platforms don't have full parity when it comes to the controls they provide. You also get greater customization of some widgets, handled uniformly across all supported platforms.
Another reason is that it's far easier to develop a multi-platform framework this way. You have common code for all the functionality, and only the presentation and hinting are provided by the platform plugin. Otherwise you'd have spaghetti code of ifs and switches to handle all the platform differences.
Yet another is that this way you can switch styles at will, even at runtime. You can also combine and make proxy styles, stylesheets, graphical effects and other things that wouldn't otherwise be possible. Some platforms don't even have a single native theme. Windows has at least 5 or 6 major UI technologies, all vastly different, that don't result in a similar look and feel. Ever wondered why the Notepad, Calculator, Calendar or Mail apps look nothing alike? Yeah.
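For illustration, switching the whole app to one of Qt's built-in styles is a one-liner. A minimal sketch ("Fusion" ships with Qt; QStyleFactory::keys() lists what a given build offers):

```cpp
#include <QApplication>
#include <QPushButton>
#include <QStyleFactory>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Replace the platform style with Qt's cross-platform Fusion style
    // at runtime; no platform-specific code involved.
    QApplication::setStyle(QStyleFactory::create("Fusion"));

    QPushButton button("Drawn by Fusion, not the native theme");
    button.show();
    return app.exec();
}
```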
Integration with accessibility tools, custom painting, hardware acceleration etc. etc. The list goes on and on.
-
Thanks for the explanation.
I did not know about the double buffering. I suppose the idea is that the toolkit allocates a region for each item on some off-screen panel, with drawing and pointer events propagated in each direction? All components would need to be provided a non-overlapping region, and layering would need to be resolved at the on-screen rendering stage.
However, it is reported that whenever a native look-and-feel changes, a Qt application will continue to look the same until it runs against a Qt version supporting the change. Such an observation, if accurate, would tend to suggest that rendering occurs through emulation rather than the off-screen buffering technique you describe.
-
I suppose the idea is that the toolkit allocates a region for each item
No. It allocates a single surface for a window. When a given region is invalidated through events or explicit calls, all widgets in that region get a paintEvent call (in proper z-order) and are responsible for painting themselves onto the surface. Then the surface is efficiently flipped to the native window.
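As a sketch of that contract (a plain QWidget subclass; the name is just for illustration):

```cpp
#include <QWidget>
#include <QPainter>

class Badge : public QWidget
{
protected:
    // Called whenever a region covering this widget is invalidated.
    // The widget paints itself onto the window's shared back buffer;
    // it has no surface of its own.
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        painter.fillRect(rect(), Qt::darkCyan);
        painter.drawText(rect(), Qt::AlignCenter, "painted on demand");
    }
};
```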
However, it is reported that whenever a native look-and-feel changes
I don't know who's reporting what, but it's a bit of a blanket statement and the issue is far more complicated. I'm not that well versed in other platforms, but on Windows at least it's not really up to Qt. The problem is, again, that there's no real single native theme or mechanism for that.
Qt is a C++ framework and uses WinAPI on Windows. That does provide some information about changes in themes, e.g. there's a message sent to the app when system colors change, and this is handled by Qt. The application gets a native message that is then exposed as Qt events (see QEvent::StyleChange and QEvent::ThemeChange, for example) and widgets do get redrawn.
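If a widget wants to react to those explicitly, it can watch for them. A minimal sketch (the class name is just for illustration):

```cpp
#include <QWidget>
#include <QEvent>
#include <QDebug>

class ThemeAwareWidget : public QWidget
{
protected:
    // Qt delivers the translated native notification (e.g. a Windows
    // system-color change message) as ordinary cross-platform events.
    bool event(QEvent *e) override
    {
        switch (e->type()) {
        case QEvent::StyleChange:
        case QEvent::ThemeChange:
        case QEvent::PaletteChange:
            qDebug() << "look and feel changed, scheduling repaint";
            update();
            break;
        default:
            break;
        }
        return QWidget::event(e);
    }
};
```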
Then there's stuff like the light/dark theme in Windows 10/11, which is currently not directly exposed via a pure C++ API. It is exposed through things like the Mica and Acrylic materials, which are implemented through XAML, WinUI and other Windows Runtime related technologies. It's getting better every few updates, but it's still not something that can currently be integrated into a pure WinAPI app, at least not without a ton of very dirty hacks. Until Microsoft gets their act together and provides a universal, stable, easy-to-integrate API that lasts longer than a year or two, there's not much that can be done.
I suspect other platforms have their own problems like this.
-
@Chris-Kawa said in Why not use native widgets?:
No. It allocates a single surface for a window. When a given region is invalidated through events or explicit calls, all widgets in that region get a paintEvent call (in proper z-order) and are responsible for painting themselves onto the surface. Then the surface is efficiently flipped to the native window.
What I meant is that each widget in a window must be allocated a dedicated region on some off-screen panel, a region not overlapping with any region allocated to another widget, even one whose on-screen representation overlaps with the former.
-
What I meant is that each widget in a window must be allocated a dedicated region on some off-screen panel
No. Widgets don't persist their presentation. It's an immediate-mode system, not a retained one. There's no composition.
Think of it this way: there's a QImage the size of a window. It's not actually a QImage, it's a platform-specific buffer format, but let's say for the sake of simplicity it is.
Say there are two widgets. Widget A occupies QRect(0,0,10,10) and widget B is higher in z-order and occupies QRect(5,5,20,20), so they overlap a bit. Now some event happens that requires widget A to update. The rectangle (0,0,10,10) is invalidated. The rendering backend figures out that both A and B occupy at least part of that space, so it calls A.paintEvent() and then B.paintEvent() in the correct z-order. Both of these paint onto that single QImage of the window. After both widgets finish painting, the QImage is rendered onto the native window. There's no access to the pixels drawn by A at this point. They are overwritten by B's painting. There is no painting to separate surfaces and then composing that.
You might ask why it is done this way. Imagine a 4K screen with 32-bit color. A single fullscreen buffer like that takes almost 32MB of memory. Now imagine a complicated app with a bunch of nested widgets that occupy the entire space - let's say a dock with a frame, a couple of nested groupboxes etc. Each of those would take an additional 32MB. In a moderately sized app that would quickly go into GBs. Not to mention each resize of the app would incur the big cost of reallocating all those buffers. The way Qt does it there's just one buffer for the window. It's still more costly than if you painted directly onto the window, but that's the double buffering I mentioned. The cost is worth it to fix a lot of issues like flicker.
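Here's a toy version of that scenario you can run (names and colors are just for illustration; watch the debug output to see both widgets repaint in z-order onto the one buffer):

```cpp
#include <QApplication>
#include <QDebug>
#include <QPainter>
#include <QWidget>

// A widget that logs every repaint, to make the shared-buffer
// repaint behavior visible.
class Panel : public QWidget
{
public:
    Panel(const QColor &c, QWidget *parent) : QWidget(parent), color(c) {}
protected:
    void paintEvent(QPaintEvent *) override
    {
        qDebug() << objectName() << "repaints";
        QPainter(this).fillRect(rect(), color);
    }
private:
    QColor color;
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget window;
    window.resize(300, 200);

    auto *a = new Panel(Qt::red, &window);
    a->setObjectName("A");
    a->setGeometry(0, 0, 100, 100);

    auto *b = new Panel(Qt::blue, &window); // created later, higher z-order
    b->setObjectName("B");
    b->setGeometry(50, 50, 150, 150);       // overlaps A

    window.show();
    a->update(); // invalidates A's rect; both A and B get paintEvent
    return app.exec();
}
```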
-
The off-screen representation has the same layout as targeted for on-screen display, but retains a raster representation of only those regions that have changed before being copied to the display? I'm not sure that is what is normally meant by double buffering. Rather, it tends to indicate the case of multiple off-screen buffers maintained for copying on screen at arbitrary relative locations. What you have described strikes me as off-screen rendering, which does reduce flicker, because no intermediary state of a region's final construction is ever displayed. I'm not sure why this would be a matter for the application framework more than the platform toolkit.
-
Double buffering means there's a front and back buffer. In APIs like DirectX the app allocates both and performs a swap. That's not exactly the case here. The front buffer is the native surface of the window (allocated by the OS when you create a window). The back buffer is allocated and painted to by Qt. Because those are different surfaces you can't simply swap them. A fast copy is performed instead. For example, on Windows the BitBlt function is used to perform it.
It's double buffering in the sense that widgets don't draw directly onto the native window surface.
What you have described strikes me as off-screen rendering
That's just another name for painting to the backbuffer. Yes, painting happens offscreen and then a fast copy is performed.
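For reference, the classic GDI version of that pattern looks roughly like this (an illustrative Win32 sketch, not Qt's actual internals):

```cpp
#include <windows.h>

// Paint handler using an off-screen back buffer, then one fast copy
// (BitBlt) to the window's front buffer.
void paintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC screen = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);
    const int w = rc.right - rc.left;
    const int h = rc.bottom - rc.top;

    // Back buffer: an off-screen bitmap compatible with the window DC.
    HDC back = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(back, bmp);

    // All painting goes to the back buffer first...
    FillRect(back, &rc, (HBRUSH)(COLOR_WINDOW + 1));

    // ...then is copied to the screen in one operation, so no
    // intermediate state is ever visible (no flicker).
    BitBlt(screen, 0, 0, w, h, back, 0, 0, SRCCOPY);

    SelectObject(back, old);
    DeleteObject(bmp);
    DeleteDC(back);
    EndPaint(hwnd, &ps);
}
```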
-
@Chris-Kawa said in Why not use native widgets?:
The front buffer is the native surface of the window
I missed that. I hadn't understood that content would be added by the toolkit and would need to be preserved.
-
It's still murky why Qt can't just render the same way as fully native applications do.
-
It's hard to talk about this without writing a thick book. I listed a couple reasons in my first response.
But let's pick at some of them further. First of all, what is "native", e.g. on Windows, and how does it render? WinAPI? MFC? WinUI? WinForms? XAML? The OS itself uses all of those (and more!) for different things.
Let's say you take the lowest-level WinAPI, so you use CreateWindow for everything - buttons, frames, lists etc. It's not a good pick for modern apps. It swallows a ton of resources and has flicker issues. Just look at any old MFC app and how poorly it looks and behaves on modern Windows. Btw. you can force something like that in Qt: if you call winId() on a widget it gets its own native window. You can try that and see how performance tanks when you add a bunch of widgets.
Let's say you pick XAML - how would you integrate that into the Qt ecosystem? There's no direct translation between the widgets API and XAML. What about Designer, QUiLoader etc.? How would you make that cross-platform?
Let's say you pick WinUI - now you have to deal with NuGet, C++/WinRT, and a bunch of marshaling between C++ and the Windows Runtime. Qt would absorb a lot of heavy dependencies (GBs!) and become MSVC-only (no WinUI for MinGW or ICC). It's all abstracted over DirectX, Direct2D and the Windows Composition APIs. How do you integrate that with something like QOpenGLWidget? It becomes super convoluted, hacky and very platform-specific.
Let's say you pick WinForms - well, that's .NET, so out of the question for Qt.
Also, how do you integrate QPainter with any of that? Translate painter calls dynamically into GDI, DirectX etc.? Good luck getting any performance out of that. How do you switch styles, use non-native styles or use stylesheets with that? Do a giant, very platform-specific switch/case in every drawing function?
The way Qt does it is almost entirely platform-agnostic. All the logic, all the functionality is common. All the client-area painting and event handling is Qt-internal and cross-platform. Stuff like stylesheets or painter algorithms doesn't need any platform-specific code. Only the look of controls and the backing surface are provided by platform plugins.
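That's also why a custom widget can look native without a line of platform code - it just asks the current style to paint the control. A minimal sketch (the class name is just for illustration):

```cpp
#include <QPainter>
#include <QStyle>
#include <QStyleOptionButton>
#include <QWidget>

class StyleDrawnButton : public QWidget
{
protected:
    // The widget holds the state; the current QStyle (native-looking,
    // Fusion, a stylesheet style, ...) decides what a button looks like.
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        QStyleOptionButton option;
        option.initFrom(this);           // geometry, palette, state flags
        option.text = "Drawn by the current style";
        style()->drawControl(QStyle::CE_PushButton, &option, &painter, this);
    }
};
```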
Btw. can you give an example of an app that you consider "native" and which technology it uses?
-
Perhaps the overall issue is that Windows has undergone various iterations of new frameworks, but many, especially the newer ones, are very tightly integrated and lack the low-level access that would let functionality correspond nearly one-to-one with the low-level calls defined by Qt.
It would yet remain an open question why Qt is able to render successfully off screen, despite the challenges of accessing the low-level features of the framework, but has difficulty doing the same when rendering directly to the window.
-
@brainchild Like I said, when using the low-level API each control is its own window. It's an OS-allocated object the app gets a handle to (HWND on Windows). Each interaction with that object is a system-level call. Having a large number of these objects is slow, heavy and very memory-consuming.
Qt doesn't create any such objects. It uses APIs like DrawThemeBackgroundEx to just "paint a picture" of a control on its surface. It doesn't require allocating resources or keeping system objects around. It's a lot lighter approach.
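Conceptually it's something like this (a rough UxTheme sketch, not Qt's actual plugin code):

```cpp
#include <windows.h>
#include <uxtheme.h>
#include <vsstyle.h> // part/state IDs such as BP_PUSHBUTTON

// No button control exists here; we only ask the theme engine to
// render the picture of a pressed push button into our own surface.
void drawButtonPicture(HWND hwnd, HDC hdc, RECT rc)
{
    HTHEME theme = OpenThemeData(hwnd, L"BUTTON");
    if (theme) {
        DrawThemeBackground(theme, hdc, BP_PUSHBUTTON, PBS_PRESSED, &rc, nullptr);
        CloseThemeData(theme);
    }
}
```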
As for drawing offscreen or directly to the window - before Qt 4.4 drawing was done directly and it had massive issues with refresh and flicker, especially on resize. Qt 4.4 introduced the concept of native and alien widgets. Alien widgets draw to that offscreen buffer while native ones get their own window. By default only top-level widgets are native now and everything inside is alien. If you need to, you can force a widget to become native like I mentioned above, with winId(), but it has a performance penalty attached.
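For example (a toy app, just to show the call and its effect):

```cpp
#include <QApplication>
#include <QDebug>
#include <QPushButton>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget window;                 // top-level widget: native window

    auto *button = new QPushButton("alien by default", &window);

    // Asking for a native handle forces the alien widget to become
    // native, allocating a real OS window object for it.
    WId handle = button->winId();
    qDebug() << "button's native handle:" << handle;

    window.show();
    return app.exec();
}
```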
-
@Chris-Kawa said in Why not use native widgets?:
It uses APIs like DrawThemeBackgroundEx to just "paint a picture" of a control on its surface. It doesn't require allocating resources or keeping system objects around. It's a lot lighter approach.
There is no allocation of external, stateful resources? I have never seen such a use of widgets. How would they respond to events?
Before Qt 4.4 drawing was done directly and it had massive issues with refresh and flicker, especially on resize.
Why would it be different than the case of applications built directly on the platform tool chain?
-
@brainchild said:
There is no allocation of external, stateful resources?
There might be some state allocated by the platform plugin here and there, but there are no per-widget OS-level objects. Widgets are Qt objects. Only painting them makes these light API calls that don't require persistent state.
I have never seen such a use of widgets
All widgets work like that. If you've seen a widget you've seen it working like that.
How would they respond to events?
There are two types of events - OS-level events like input, resizing etc. and internal Qt events. OS-level events are sent to a window (remember - the top-level widget is native) via messages. Qt translates them into Qt counterparts and dispatches them to the appropriate widgets. For example a QPushButton doesn't deal with system-level messages like WM_LBUTTONDOWN. Those are sent to the window and Qt translates them into QEvent::MouseButtonPress, and the button gets it in this platform-agnostic form.
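To make that concrete, a sketch (a hypothetical subclass, just to show where the translated event arrives):

```cpp
#include <QDebug>
#include <QMouseEvent>
#include <QPushButton>

class LoggingButton : public QPushButton
{
public:
    using QPushButton::QPushButton;
protected:
    // By the time this runs, the native message (WM_LBUTTONDOWN, an X11
    // ButtonPress, ...) has already become a platform-agnostic QMouseEvent.
    void mousePressEvent(QMouseEvent *event) override
    {
        qDebug() << "QEvent::MouseButtonPress at" << event->pos();
        QPushButton::mousePressEvent(event); // keep normal button behavior
    }
};
```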
Why would it be different than the case of applications built directly on the platform tool chain?
It's not. Like I mentioned above - a lot of old applications built this way look and behave terribly. Modern apps use one of these other technologies I mentioned, which mostly do the same as Qt does - bypass the low level API.
-
@Chris-Kawa said in Why not use native widgets?:
Those are sent to the window and Qt translates them into QEvent::MouseButtonPress, and the button gets it in this platform-agnostic form.
Right, but the appearance of the pressed button is created by a resource that knows it's a button with certain properties, has received a mouse-down event, and now considers itself pressed, right? Widgets are intrinsically stateful resources that may change their own state based on events (or properties being set externally), right?
-
@brainchild Yes, but the state is held by the widget, which is a Qt object: platform-agnostic, internal to your process and invisible to the system. When the time comes to paint a widget, its state is turned into the appropriate parameters to that system API call. Other than that the OS knows nothing about the existence of that widget.
A low-level API, in contrast, uses system objects for each control and holds state in those objects.
If you use a tool like Spy++ you'll see that to the OS a Qt app is just a singular blank window, whereas if you code the entire app in system objects you can inspect each and every control separately.
-
@Chris-Kawa said in Why not use native widgets?:
When the time comes to paint a widget, its state is turned into the appropriate parameters to that system API call.
That is surprising. Without internal state for the widget, it cannot determine its own behavior by way of internal transition, or process events with its own handlers. Meanwhile, a widget will often need to paint itself only partially, for example after the removal of a partial occlusion, and the ability to perform such an optimization might provide value in certain cases.
-
@brainchild said in Why not use native widgets?:
Why would it be different than the case of applications built directly on the platform tool chain?
Lots of applications written in raw Win32 have flicker issues. It isn't different.
-
@brainchild said:
Without internal state for the widget, it cannot determine its own behavior by way of internal transition
A QWidget is an application-side object. It does hold and has access to its own state. It does process Qt events synthesized from native messages and dispatched to it by Qt. It does not create a system object. It does not process native system messages. It only uses a stateless system API to draw itself.
In an app using the low-level native API a widget would not hold its own state or have a cross-platform event-handling solution. It would only hold a handle to a native object that holds the state and handles native messages. This is not what's happening here.
Meanwhile, a widget will often need to paint itself only partially
A paintEvent of a widget has a QPaintEvent* parameter that passes such information. Part of that is the rectangular area that needs updating. It doesn't have to cover the entire widget if only a partial change happened. The implementation of paintEvent has an opportunity for optimization here if a full update is not required.
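A minimal sketch of that optimization (the grid content is just for illustration):

```cpp
#include <QPaintEvent>
#include <QPainter>
#include <QWidget>

class GridWidget : public QWidget
{
protected:
    // Repaint only the rectangle the event reports as dirty instead of
    // redrawing the whole widget on every update.
    void paintEvent(QPaintEvent *event) override
    {
        const QRect dirty = event->rect(); // the area that needs updating
        QPainter painter(this);
        painter.fillRect(dirty, palette().window());

        // Draw only the grid lines that intersect the dirty rectangle.
        const int cell = 20;
        for (int x = dirty.left() - dirty.left() % cell; x <= dirty.right(); x += cell)
            painter.drawLine(x, dirty.top(), x, dirty.bottom());
        for (int y = dirty.top() - dirty.top() % cell; y <= dirty.bottom(); y += cell)
            painter.drawLine(dirty.left(), y, dirty.right(), y);
    }
};
```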