Why not use native widgets?
-
The off-screen representation has the same layout as targeted for on screen, but retains a raster representation of only those regions that have been changed before being copied to the display? I'm not sure that is what is normally meant by double buffering. Rather, that term tends to indicate multiple off-screen buffers maintained for copying on screen at arbitrary relative locations. What you have described strikes me as off-screen rendering, which does reduce flicker by not displaying any intermediate state during the construction of a region. I'm not sure why this would be a matter for the application framework rather than the platform toolkit.
-
Double buffering means there's a front and a back buffer. In APIs like DirectX the app allocates both and performs a swap. That's not exactly the case here. The front buffer is the native surface of the window (allocated by the OS when you create a window). The back buffer is allocated and painted to by Qt. Because those are different surfaces you can't simply swap them. A fast copy is performed instead. For example, on Windows the BitBlt function is used to perform that copy.
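In raw Win32 terms that flush step looks roughly like this (a hypothetical sketch of the mechanism, not Qt's actual code; hBackBufferDC stands for a memory DC holding the already-painted back buffer):

```cpp
// Hypothetical WM_PAINT handler: fast copy from the off-screen back
// buffer to the window's native surface (the front buffer).
case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC front = BeginPaint(hwnd, &ps);       // the window surface
    BitBlt(front,
           ps.rcPaint.left, ps.rcPaint.top,  // copy only the dirty rectangle
           ps.rcPaint.right - ps.rcPaint.left,
           ps.rcPaint.bottom - ps.rcPaint.top,
           hBackBufferDC,                    // back buffer painted off screen
           ps.rcPaint.left, ps.rcPaint.top,
           SRCCOPY);
    EndPaint(hwnd, &ps);
    break;
}
```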
It's double buffering in the sense that widgets don't draw directly onto the native window surface.

What you have described strikes me as off-screen rendering
That's just another name for painting to the backbuffer. Yes, painting happens offscreen and then a fast copy is performed.
-
@Chris-Kawa said in Why not use native widgets?:
The front buffer is the native surface of the window
I missed that. I didn't understand that content would be added by the toolkit and need to be preserved.
-
It's still murky why Qt can't just render the same way as fully native applications.
-
It's hard to talk about this without writing a thick book. I listed a couple reasons in my first response.
But let's pick at some of them further. First of all, what is "native" e.g. on Windows and how does it render? WinAPI? MFC? WinUI? WinForms? XAML? The OS itself uses all of those (and more!) for different things.

Let's say you pick the lowest level, WinAPI, so you use CreateWindow for everything - buttons, frames, lists etc. It's not a good pick for modern apps. It swallows a ton of resources and has flicker issues. Just look at any old MFC app and how poorly it looks and behaves on modern Windows. Btw. you can force something like that in Qt: if you call winId() on a widget it gets its own native window. You can try that and see how performance tanks when you add a bunch of widgets.
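If you want to try that experiment, a minimal setup might look like this (a hypothetical sketch; Qt::AA_NativeWindows and winId() are the relevant APIs):

```cpp
#include <QApplication>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    // Force every widget to be backed by its own native window
    // (an HWND on Windows) instead of the usual alien widgets.
    QCoreApplication::setAttribute(Qt::AA_NativeWindows);
    QApplication app(argc, argv);

    QWidget window;
    auto *layout = new QVBoxLayout(&window);
    for (int i = 0; i < 500; ++i) {
        auto *button = new QPushButton(QStringLiteral("Button %1").arg(i));
        layout->addWidget(button);   // per-widget alternative: button->winId();
    }
    window.show();   // resize it, then compare with the attribute removed
    return app.exec();
}
```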
Let's say you pick XAML - how would you integrate that into the Qt ecosystem? There's no direct translation between the widgets API and XAML. What about Designer, QUiLoader etc.? How would you make that cross-platform?
Let's say you pick WinUI - now you have to deal with NuGet, C++/WinRT, and a bunch of marshaling between C++ and the Windows Runtime. Qt absorbs a lot of heavy dependencies (GBs!) and becomes MSVC only (no WinUI for MinGW or ICC). It's all abstracted over DirectX, Direct2D and the Windows Composition APIs. How do you integrate that with something like QOpenGLWidget? It becomes super convoluted, hacky and very platform specific.
Let's say you pick WinForms - well, that's .NET, so out of the question for Qt.
Also, how do you integrate QPainter with any of that? Translate painter calls dynamically into GDI, DirectX etc.? Good luck getting any performance out of that. How do you switch styles, use non-native styles or use stylesheets with that? Do a giant, very platform specific switch/case in every drawing function?
The way Qt does it is almost entirely platform agnostic. All the logic, all the functionality is common. All the client area painting and event handling is Qt internal and cross-platform. Stuff like stylesheets or painter algorithms don't need any platform specific code. Only the look of controls and backing surface is provided by platform plugins.
Btw. can you give an example of an app that you consider "native" and which technology it uses?
-
Perhaps the overall issue is that Windows has undergone various iterations of new frameworks, but many, especially the newer ones, are very tightly integrated, and lack the low-level access that would allow functionality to correspond nearly one-to-one with the low-level calls defined by Qt.
It remains an open question, though, why Qt is able to render successfully off screen, despite the challenges of accessing the low-level features of the framework, but has difficulty doing the same directly to the window.
-
@brainchild Like I said, when using the low level API each control is its own window. It's an OS allocated object the app gets a handle to (HWND on Windows). Each interaction with that object is a system level call. Having a large number of these objects is slow, heavy and very memory consuming.
Qt doesn't create any such objects. It uses APIs like DrawThemeBackgroundEx to just "paint a picture" of a control on its surface. It doesn't require allocating resources or keeping system objects around. It's a lot lighter approach.
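On Windows that "paint a picture" call boils down to something like this (a simplified sketch using the UxTheme API; DrawThemeBackgroundEx is the variant that additionally takes a DTBGOPTS struct):

```cpp
#include <windows.h>
#include <uxtheme.h>   // link with uxtheme.lib
#include <vssym32.h>   // BP_PUSHBUTTON, PBS_* state ids

// Paints the picture of a push button onto a device context. No control
// object is created; the pressed/normal state lives in the application.
void paintButtonPicture(HWND hwnd, HDC hdc, const RECT &rc, bool pressed)
{
    HTHEME theme = OpenThemeData(hwnd, L"BUTTON");
    if (!theme)
        return;
    DrawThemeBackground(theme, hdc, BP_PUSHBUTTON,
                        pressed ? PBS_PRESSED : PBS_NORMAL, &rc, nullptr);
    CloseThemeData(theme);
}
```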
As for drawing offscreen or directly to the window - before Qt 4.4 drawing was done directly and it had massive issues with refresh and flicker, especially on resize. Qt 4.4 introduced the concept of native and alien widgets. Alien widgets would draw to that offscreen buffer while native ones would get their own window. By default only top level widgets are native now and everything inside is alien. If you need to, you can force a widget to become native like I mentioned above, with winId(), but it has a performance penalty attached.
-
@Chris-Kawa said in Why not use native widgets?:
It uses APIs like DrawThemeBackgroundEx to just "paint a picture" of a control on its surface. It doesn't require allocating resources or keeping system objects around. It's a lot lighter approach.
There is no allocation of external, stateful resources? I have never seen such a use of widgets. How would they respond to events?
Before Qt 4.4 drawing was done directly and it had massive issues with refresh and flicker, especially on resize.
Why would it be different than the case of applications built directly on the platform tool chain?
-
@brainchild said:
There is no allocation of external, stateful resources?
There might be some state allocated by the platform plugin here and there, but there are no per-widget OS level objects. Widgets are Qt objects. Only painting them makes these light API calls that don't require persistent state.
I have never seen such a use of widgets
All widgets work like that. If you've seen a widget you've seen it working like that.
How would they respond to events?
There are two types of events - OS level events like input, resizing etc. and internal Qt events. OS level events are sent to a window (remember - the top level widget is native) via messages. Qt translates them into Qt counterparts and dispatches them to the appropriate widgets. For example a QPushButton doesn't deal with system level messages like WM_LBUTTONDOWN. Those are sent to the window and Qt translates them into QEvent::MouseButtonPress and the button gets it in this platform agnostic form.
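Conceptually the translation step looks something like this (a heavily simplified sketch, not Qt's actual source):

```cpp
// Inside a native window procedure: turn an OS message into a Qt event.
case WM_LBUTTONDOWN: {
    POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) }; // client coords
    ClientToScreen(hwnd, &pt);                                 // -> screen coords
    const QPoint globalPos(pt.x, pt.y);
    if (QWidget *target = QApplication::widgetAt(globalPos)) {
        QMouseEvent event(QEvent::MouseButtonPress,
                          target->mapFromGlobal(globalPos), globalPos,
                          Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
        QApplication::sendEvent(target, &event); // platform agnostic from here on
    }
    break;
}
```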
Why would it be different than the case of applications built directly on the platform tool chain?
It's not. Like I mentioned above - a lot of old applications built this way look and behave terribly. Modern apps use one of these other technologies I mentioned, which mostly do the same as Qt does - bypass the low level API.
-
@Chris-Kawa said in Why not use native widgets?:
Those are sent to the window and Qt translates them into QEvent::MouseButtonPress and the button gets it in this platform agnostic form.

Right, but the appearance of the pressed button is created by a resource that knows it's a button with certain properties, which has received a mouse-down event and now considers itself pressed, right? Widgets are intrinsically stateful resources that may change their own state based on events (or properties being set externally), right?
-
@brainchild Yes, but the state is held by the widget, which is a Qt object, platform agnostic, internal to your process and invisible to the system. When time comes to paint a widget its state is turned into the appropriate parameters to that system API call. Other than that the OS knows nothing about the existence of that widget.
A low level API in contrast uses system objects for each control and holds state in these objects.
If you use a tool like Spy++ you'll see that to the OS a Qt app is just a single blank window, whereas if you build the entire app from system objects you can inspect each and every control separately.
-
@Chris-Kawa said in Why not use native widgets?:
When time comes to paint a widget its state is turned into the appropriate parameters to that system API call.
That is surprising. Without internal state for the widget, it cannot determine its own behavior by way of internal transitions, or process events with its own handlers. Meanwhile, a widget will often need to paint itself only partially, for example when a partial occlusion is removed, and the ability to exploit that optimization might provide value in certain cases.
-
@brainchild said in Why not use native widgets?:
Why would it be different than the case of applications built directly on the platform tool chain?
Lots of applications written in raw Win32 have flicker issues. It isn't a difference.
-
@brainchild said:
Without internal state for the widget, it cannot determine its own behavior by way of internal transition
A QWidget is an application side object. It does hold and has access to its own state. It does process Qt events synthesized from native messages and dispatched to it by Qt. It does not create a system object. It does not process native system messages. It only uses a stateless system API to draw itself.
In an app using low level native API a widget would not hold its own state or have a cross-platform event handling solution. It would only hold a handle to a native object that holds the state and handles native messages. This is not what's happening here.
Meanwhile, a widget will often need to paint itself only partially
A paintEvent of a widget has a QPaintEvent* parameter that passes such information. One of them is the rectangular area that needs updating. It doesn't have to cover the entire widget if only a partial change happened. The implementation of the paintEvent has an opportunity for optimization here if a full update is not required.
-
What you are describing is Qt interpreting low-level input as high-level state changes for the widget, then expressing those changes as stateless calls into the widget rendering engine. The result is that Qt is the gatekeeper of which real effects (e.g. left mouse button pressed and held longer than 300 ms) represent which high-level state changes (e.g. the button entering a pressed state from unpressed). The button cannot express itself, through a life of its own, by hearing the mouse events and deciding when to announce that it has entered a pressed state and that specific regions within itself need to be refreshed. Augmentation or refinement within the platform toolkit is not possible. Meanwhile, repainting occurs only when requested by Qt, and only from scratch. It is little help that the repaint call may constrain a region, compared to the widget determining which regions have changed based on its own analysis of internal state.
-
@brainchild I don't really see where you get those ideas from. I'm kinda lost as to what we are discussing here. Are you asking or suggesting something? I'm not sure what you want me to do.
As for how it works. An example of what happens when you click a button:
- User presses a mouse button and the OS sends a message WM_LBUTTONDOWN with the global mouse position.
- Qt translates that into QEvent::MouseButtonPress, looks up the widget at the given coords and posts a QMouseEvent to that widget.
- The widget executes a mousePressEvent(QMouseEvent*) handler. In it a pressed state is determined and stored.
- Since the state changed the widget requests an update(). This marks it dirty and schedules a repaint.
- The widget executes a paintEvent(QPaintEvent*) handler in which it calls the platform plugin abstraction to paint its state. If it's animated or anything like that it can start the animation, timers, schedule further updates or whatever it needs.
- If some further updates were scheduled the widget executes further paintEvent(QPaintEvent*) handlers.
- At some point (3s or 300ms later, doesn't matter) the user releases the button and the OS sends WM_LBUTTONUP.
- Qt translates that into QEvent::MouseButtonRelease and posts another QMouseEvent to the widget.
- The widget executes a mouseReleaseEvent(QMouseEvent*) handler. In it the pressed state is terminated and the new state stored.
- Since both press and release occurred the widget emits a clicked() signal.
- Since the state changed the widget requests an update(). This marks it dirty and schedules a repaint.
- The widget executes a paintEvent(QPaintEvent*) handler in which it calls the platform plugin abstraction to paint its new state. Other housekeeping can be performed here, e.g. terminating animations.
- and so on...
Maybe you should just read Qt's code if you're interested in such details?
-
Let's make it simple, even if it means being silly.
Suppose the next version of the platform's widget toolkit is changed such that every button will change its text color every 10 seconds, automatically, following a round-robin selection of six colors. If a button is allocated as a system resource, then it may implement this behavior by setting a timer for 10 seconds, and when receiving a callback, change its internal state for the new color, and then notify that the region containing its text has become invalid. Such behavior cannot occur, from what I understand, through Qt, because Qt does not recognize the particular internal behavior as a feature of the widget. The widget state and events are only those which Qt keeps and follows, based on its generic, system-agnostic understanding of a button, which does not include the timer, repainting, or color selection that a system resource could manage itself.
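In raw Win32 terms the hypothetical native button could implement that like this (a sketch of a fragment of the control's window procedure; COLOR_TIMER_ID, colorIndex and textRect are made up for illustration):

```cpp
// Inside the hypothetical native button's window procedure:
case WM_CREATE:
    SetTimer(hwnd, COLOR_TIMER_ID, 10000, nullptr);    // fire every 10 seconds
    return 0;

case WM_TIMER:
    if (wParam == COLOR_TIMER_ID) {
        colorIndex = (colorIndex + 1) % 6;      // internal state transition
        InvalidateRect(hwnd, &textRect, TRUE);  // only the text region is dirty
    }
    return 0;
```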
Surely there are non-silly, and quite real, examples of such limitations.
Has my presentation exposed some misunderstanding?
-
@brainchild Yes, it's a limitation, but not a very important one in practice. The situation you describe doesn't happen very often. A lot of windowing frameworks work like that, not just Qt, and OS manufacturers don't have any real incentive to break most applications' look with new releases. They rather go out of their way not to do that, even when they change the look and feel. Windows APIs and behavior, for example, haven't fundamentally changed since Windows XP (over 20 years). The look of the controls changed, some colors maybe, but not the fundamental mechanic. If it happens Qt can just update its platform plugin. The current one hasn't needed updating for a long time (it's still called WindowsVistaStyle if I'm not mistaken).
If you don't like that limitation anyway, you can provide your own platform plugin that will instantiate a bunch of system objects and get rid of that limitation that way. It's a plugin, so it's flexible. Can be updated independently or entirely replaced.
-
I am wondering whether the more serious limitation relates to the lack of optimization for repainted regions, as a stateful resource would be able to compute the bounds of the physical region that has become invalid, based on changes in conceptual state and its own understanding of how such changes relate to its own paint process.
-
@brainchild I think you overestimate that optimization potential.
Simple controls do simple things, e.g. a panel will usually just fill itself with a color or a gradient. If a control has text, like a button or a menu item, it's ballpark the same work to calculate proper text positioning, kerning etc. as simply drawing the whole thing clipped to the region.
If you think about it, the set of basic UI controls is not that large and they are all pretty basic. More complicated ones like lists, tables etc. are just compositions of the basic ones. If you have a very complicated control, like a chart or a 3D scene, there are no native controls for that anyway, so all painting is custom one way or another. If you have a highly animated UI toolkit it probably draws using an accelerated API like OpenGL or Vulkan, and figuring out the needed sub-changes on the CPU is often a pessimization - the GPU can draw the whole thing faster than the CPU can feed it, so it's actually more performant to just draw it all.
I don't think many toolkits do the sub-control optimizations that you think of.
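For reference, this is roughly what such a sub-region optimization would look like on the widget side (a hypothetical example; ChartWidget, Series, m_series and drawSeries are made up for illustration):

```cpp
void ChartWidget::paintEvent(QPaintEvent *event)
{
    QPainter p(this);
    // Qt already clips painting to event->region(); the widget can go
    // further and skip work for sub-content outside the dirty rectangle.
    const QRect dirty = event->rect();
    for (const Series &series : m_series) {
        if (series.boundingRect().intersects(dirty))
            drawSeries(p, series);   // repaint only what the change touched
    }
}
```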