The good old microseconds problem [SOLVED]
I know I have a lot of questions! But I have a huge project that walks through all categories of Qt (from sockets to graphics).
So let's say I have an event that happens frequently, and I want to measure the time that elapses between these events. At the moment I use QTime::currentTime() for that. Some listeners watch that time and trigger their own events once a previously defined number of milliseconds has elapsed. So far so good! It works fine; I tested it on my notebook and my second PC.
BUT: my new workstation is too fast! The time difference between two events is always smaller than 1 ms, so the elapsed time is always 0. For now I've made a dirty patch that sets the elapsed value to 1 if it is 0, but that cannot be the solution.
Any tricks or solutions in Qt? Or do I have to implement my own class for each OS?
Thanks in advance.
FreddeB last edited by
Hello Seraph!
I think this might solve your problem:
@QElapsedTimer timer;
timer.start();  // the timer must be started before it can be queried
// something happens here
qint64 nanoSec = timer.nsecsElapsed();
// print the result (nanoSec)
// something else happens here
timer.restart();
// some other operation
nanoSec = timer.nsecsElapsed();@
DerManu last edited by
FreddeB's solution addresses the technical issue correctly. But I sense there might be a design flaw here. What exactly is that event, that it is so critical it must be allowed to fire at microsecond intervals whenever the system it runs on can keep up? Why not enforce an upper limit on the event frequency, e.g. via a QTimer?
MuldeR last edited by
Also: while QElapsedTimer::nsecsElapsed() returns the elapsed time in units of 1 nanosecond, that does not mean the timer's actual resolution is 1 nanosecond. In practice the computer's timer has a much coarser resolution, often in the range of 16 milliseconds. There may be low-level system functions to improve the timer resolution, such as NtSetTimerResolution() on Windows, but those are system-specific, of course. So if you query the timer at very short intervals, it may not always return a different value. In any case, your code should be prepared for the delay between two events being smaller than the system's timer resolution; in that case the time elapsed between the two events, as measured by the system, will be zero, no matter what.
That makes sense. My first solution was to just skip resetting the QTime instance until it returns a measurable value.
@elapsed = time.elapsed();
if (elapsed > 0) {
    //... pass elapsed to all event listeners
    time.restart();
}@
OK, what MuldeR just explained makes sense to me now, because I have the problem that 1 ms is not always 1 ms. So you mean that if I trigger an interval of 16 ms it would work on Windows, but it might not work on Unix systems?
The thing is, I have a scenegraph that offers time-triggered functionality. I have to feed it the time that elapsed since the last tick. Some events are triggered at a 60 Hz frequency, some at 1000 Hz. Grrrr.
MuldeR last edited by
What I mean is that you simply cannot assume that the system's timer has a resolution of 1 millisecond or smaller. On Windows the system timer has a resolution of ~16 milliseconds by default, and it can be changed using the (undocumented) NtSetTimerResolution() function. I'm not sure about Linux/Unix. Anyway, Qt unavoidably depends on the underlying system's timer. So even if the API is specified in units of 1 nanosecond, like QElapsedTimer::nsecsElapsed(), you don't know the actual timer precision. Thus, if you constantly query the timer value (without any delay), you may notice that it returns exactly the same value several times and then makes a "jump" to the next value. That jump can be as big as 16 milliseconds (16,000,000 nanoseconds). Consequently, if you calculate the delay between two events by subtracting timer values, you can get a result of zero whenever the delay between those events was smaller than 16 milliseconds (or whatever the timer's precision is on that particular system). Upshot: make your code handle that case gracefully.
Understood. Thanks for the explanation! I will be more careful.