Poor QThread::sleep() and ::usleep() resolution on Windows
-
Hi,
when you're using QThread's protected methods sleep(), msleep() or usleep() on Windows, be very careful! Calling
@
msleep(1); // 1ms?
@
or
@
usleep(100); // 0.1ms?
@
does not necessarily wait the specified amount of time. It can wait up to 15 ms (on both my Windows XP and my Windows 7 machine, both 32-bit). This is probably due to limitations of the Sleep() API function, see http://msdn.microsoft.com/en-us/library/ms686298(v=vs.85).aspx
I've seen this problem before in other applications. If you're trying to do exact timing in an inner loop, your measurements (though precise) can "clump up", i.e. get rasterized into multiples of roughly 15 ms.
For example, if you have something like this:
@
// record time here:
CustomTimer.start();
for (int i = 0; i < 100; ++i) {
    // do something, not CPU-heavy
    someCall();
    usleep(100);
}
float elapsedMs = CustomTimer.elapsedMs();
@
You'd expect elapsedMs to be about 10 ms (100 iterations × 0.1 ms, neglecting the small CPU time needed for someCall()). But you'll find that elapsedMs is much larger, clumped into multiples of roughly 15 ms!
If you change the code to
@
// record time here:
CustomTimer.start();
for (int i = 0; i < 100; ++i) {
    // do something, not CPU-heavy
    someCall();
    // do busy waiting
    BusyWaitTimer.restart();
    while (BusyWaitTimer.getMs() < 1.0f) {
        // do nothing but waste CPU
    }
}
float elapsedMs = CustomTimer.elapsedMs();
@
then elapsedMs will indeed be close to the expected 100 ms (100 × 1 ms) every time.
Really annoying, but it basically means that on Windows you can't give up less than about 15 ms of CPU time at once by sleeping (well, you can, but the sleep gets stretched to the next ~15 ms tick, so you have to wait until that tick comes).
I don't know of any way around this on Windows at all. Anybody?
P.S.: If you don't want to experiment and reproduce this yourself: some time ago even the MATLAB function etime (elapsed time) was hit by this 15 ms "barrier"; a plot is still available here: http://www.mindstorms.rwth-aachen.de/trac/attachment/ticket/29/TicTocVsEtime.png
-
This topic is nearly the same as "that one,":http://developer.qt.nokia.com/forums/viewthread/5904/ which was discussed some time ago. The point is, Windows is not a real-time OS, not even a soft real-time OS, so why should its timers and sleeps be that precise?
And for typical Windows programs, that's absolutely enough :-)
All this sleeping comes from the Windows API, which means QThread has no influence on the resolution.
-
[quote author="Gerolf" date="1311619202"]This topic is nearly the same as "that one,":http://developer.qt.nokia.com/forums/viewthread/5904/ which was discussed some time ago.
[/quote]
Thanks, I didn't know that one -- though I think I can still contribute :-)
[quote author="Gerolf" date="1311619202"]
The point is, Windows is not a real-time OS, not even a soft real-time OS, so why should its timers and sleeps be that precise? And for typical Windows programs, that's absolutely enough :-)
[/quote]
If you count games as typical Windows programs, then no: 15 ms timing resolution is not enough. It's also not enough for apps that talk to some sort of device and need to know the timings, or for network things (think of ping times).
In fact, precise timing on Windows has been possible for ages. Just use GetTickCount() ( http://msdn.microsoft.com/en-us/library/ms724408(v=vs.85).aspx ), which simply returns the milliseconds since system start and works reliably with 1 ms precision. For even better precision, you can use QueryPerformanceCounter() ( http://msdn.microsoft.com/en-us/library/ms644904(v=vs.85).aspx ).
My own custom timer uses OpenCV's getTickCount() function (multi-platform source: https://code.ros.org/trac/opencv/browser/tags/2.3.0/opencv/modules/core/src/system.cpp#L212 ).
So measuring time precisely on Windows is not a problem at all. I'm not saying I'm "angry" at QTimer or anything; Visual Basic's Timer only has a resolution of roughly 50 ms, and that's OK too...
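To illustrate what I mean, a Windows-only stopwatch along those lines can be sketched like this (class and member names are made up for this post, it's not my actual code):
@
// Minimal QueryPerformanceCounter-based stopwatch (sketch only).
#include <windows.h>

class HighResTimer {
public:
    HighResTimer() { QueryPerformanceFrequency(&m_freq); start(); }
    void start() { QueryPerformanceCounter(&m_start); }
    double elapsedMs() const {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        // elapsed counts divided by counts-per-second, converted to milliseconds
        return 1000.0 * double(now.QuadPart - m_start.QuadPart) / double(m_freq.QuadPart);
    }
private:
    LARGE_INTEGER m_freq;
    LARGE_INTEGER m_start;
};
@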
[quote author="Gerolf" date="1311619202"]
All this sleeping comes from the Windows API, which means QThread has no influence on the resolution.[/quote]
That's indeed right. My guess is that this problem originates in the WinAPI function Sleep(). That's very unfortunate, as I'd really like to let my thread sleep for a short but definite amount of time (so I don't burn as much CPU by busy waiting). Unfortunately, I don't know of any workaround for pauses that have to be more precise than 15 ms, so I have to do busy waiting after all...
The thing is, the QThread documentation says:
[quote]
msleep
Causes the current thread to sleep for msecs milliseconds.
usleep
Causes the current thread to sleep for usecs microseconds.
[/quote]
This thread was just meant as a warning: be careful, this documentation is not accurate on Windows, and that can cause subtle bugs.
-
LinusA, of course there are ways on non-RTOS systems to get the current time precisely. But you're forgetting about the context switching when the OS switches between two threads. The OS may also have to wait for another thread to finish some operation. All these reasons (and more) lead to that 15 ms delay.
-
I think the current implementation of QThread::sleep() is a feasible and good tradeoff between performance and precision. However, a separate class which provides platform-independent high-resolution timing would be a good addition.
Keep in mind that no desktop system will guarantee any response time. Even when using tick counts or performance counters, the scheduler might still schedule your thread anywhere from immediately to never. If you really need guaranteed response times, you will need a (hard) real-time kernel.
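As a rough usage sketch of what such a class could look like: Qt's QElapsedTimer (available since Qt 4.7, so only if you are on a recent enough Qt) already wraps the platform's high-resolution counter. Note that it only measures time precisely; it does not make sleeps or timer events any more precise:
@
#include <QElapsedTimer>
#include <QDebug>

void measureSomething()
{
    QElapsedTimer timer;
    timer.start();
    // ... do the work to be measured ...
    qDebug() << "took" << timer.elapsed() << "ms";   // elapsed() reports milliseconds
}
@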
-
[quote author="LinusA" date="1311649062"]
If you count games as typical Windows programs, then no: 15 ms timing resolution is not enough. It's also not enough for apps that talk to some sort of device and need to know the timings, or for network things (think of ping times).
In fact, precise timing on Windows has been possible for ages. Just use GetTickCount() ( http://msdn.microsoft.com/en-us/library/ms724408(v=vs.85).aspx ), which simply returns the milliseconds since system start and works reliably with 1 ms precision. For even better precision, you can use QueryPerformanceCounter() ( http://msdn.microsoft.com/en-us/library/ms644904(v=vs.85).aspx ).
...
[/quote]
Sorry, I meant precision regarding timers and sleeps. I know it's possible to use GetTickCount() etc. to get higher precision; Windows even has a high-resolution timer. But for sleep and timer events, the precision is not that high.
This comes from the process/thread switches, priorities etc. that Windows has. Even if the timer fires precisely, you have no guarantee when you will receive the event, as the currently running process might not be yours...
-
Thanks everybody, I see your point. It makes sense: Once you somehow let your thread sleep, it's the scheduler's choice when to wake it.
I had to learn this the hard way; I hope others won't have to.
If I knew a way to improve this situation on Windows, I'd surely use it.
-
Linux has a 1 ms resolution, but Windows XP has something like a 10-15 ms resolution. From the Qt documentation on "timers":http://doc.qt.nokia.com/latest/timers.html:
bq. The upper limit for the interval value is determined by the number of milliseconds that can be specified in a signed integer (in practice, this is a period of just over 24 days). The accuracy depends on the underlying operating system. Windows 2000 has 15 millisecond accuracy; other systems that we have tested can handle 1 millisecond intervals.
-
OK, I haven't had time to read your reply fully (I will when I get back home), but I am curious to know what your code looks like. For one, I wonder if the accuracy is different if you call moveToThread(this) on your QThread, because if you didn't, then it is not a "true" thread and is still part of the GUI main thread.
-
[quote author="yan bellavance" date="1311727349"]but I am curious to know what your code looks like.
[/quote]
This is a minimal example that I used to verify this problem:
@
const int numTimes = 10000;
QVector<float> tickTimes;
QVector<float> myTimes;
tickTimes.reserve(numTimes);
myTimes.reserve(numTimes);
CPerformanceTimer MyTimer;

for (int i = 0; i < numTimes; ++i) {
    DWORD startTicks = GetTickCount();
    MyTimer.start();
    quSleep(100);
    //busyWaitMs(0.1f);
    tickTimes.append(floatCast(GetTickCount() - startTicks));
    myTimes.append(MyTimer.getMs());
} //end for

CGnuPlotter Plotter;
Plotter.setXLabel("Measured time [ms]");
Plotter.plotHistoFromVec(tickTimes, 100, 0.0f, 100.0f, "Using GetTickCount()");
Plotter.setXLabel("Measured time [ms]");
Plotter.plotHistoFromVec(myTimes, 100, 0.0f, 100.0f, "Using custom timer");
@
The histograms look very different, depending on whether you use busy waiting or the sleep functions. My custom timer and the busy-waiting timer are based on OpenCV's platform-independent high-resolution timer (which internally uses QueryPerformanceCounter() on Windows).
@
inline void quSleep(const unsigned long usecs) {
    CEmptyThread::uSleep(usecs);
}//end inline
@
CEmptyThread just inherits QThread and makes usleep public:
@
static inline void uSleep(const unsigned long usecs) { usleep(usecs); }
@
[quote author="yan bellavance" date="1311727349"]
For one, I wonder if the accuracy is different if you call moveToThread(this) on your QThread, because if you didn't, then it is not a "true" thread and is still part of the GUI main thread.[/quote]
As I found out before, and as I linked: No, moveToThread(this) is not a good thing. Good pages to read are:
- http://labs.qt.nokia.com/2010/06/17/youre-doing-it-wrong/
- http://blog.exys.org/entries/2010/QThread_affinity.html
(Sorry that I'm posting this again, this is the 3rd thread I'm posting these links in, but it came up).
If you want more of my code snippets and things I tried, have a look at my thread here: http://developer.qt.nokia.com/forums/viewthread/7884/
I currently have a worker thread where I subclassed QThread, reimplemented run(), and protect everything with mutexes (because I didn't want signals & slots, for several reasons).
-
LinusA: Of course you get lags there, too: Windows will stop the process when it used up its CPU time and then you run into the same scheduling delays you have when sleeping.
-
Guys, there are endless threads on the internet discussing why Windows has this 15.6 ms resolution for Sleep or "ordinary" timers. It's absolutely not a matter of the cost of a context switch (15 ms per context switch? are we serious?), but rather the simple fact that Sleep uses the system tick clock for rescheduling, and that clock runs at 64 Hz [1]. That's it. For timers there are workarounds like the multimedia timers (which use the underlying RTC / HPET hardware to provide fine-grained timers); see the sketch at the end of this post.
Again: if you are aware of a possible, different, better implementation, please discuss it with the trolls (freenode, #qt-labs) and eventually submit it. I still think that using sleep() in any code is usually wrong, and relying on the fact that it's somehow accurate is even more wrong -- no one gives you that guarantee.
[1] http://msdn.microsoft.com/en-us/windows/hardware/gg463266
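For completeness, the timer-resolution part of that multimedia API looks roughly like this. Treat it as a sketch, not a recommendation: it raises the tick rate system-wide and increases power consumption, so whether it is acceptable depends on the application.
@
// Sketch: temporarily raise the global timer resolution so Sleep() rounds to ~1 ms
// instead of ~15.6 ms. Requires linking against winmm.lib.
#include <windows.h>
#include <mmsystem.h>

void preciseMsleep(DWORD ms)
{
    timeBeginPeriod(1);   // request 1 ms scheduler/timer granularity (system-wide!)
    Sleep(ms);            // now typically wakes within a millisecond or two of the target
    timeEndPeriod(1);     // always pair every timeBeginPeriod with a timeEndPeriod
}
@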
-
[quote author="Tobias Hunger" date="1311755297"]LinusA: Of course you get lags there, too: Windows will stop the process when it used up its CPU time and then you run into the same scheduling delays you have when sleeping.[/quote]
Well, in my tests with precise timing, Windows did not stop my process -- at least not noticeably: I get continuous results with busy waiting, and no interruptions. Maybe because I'm on a multicore machine?
[quote author="peppe" date="1311757068"]
[1] http://msdn.microsoft.com/en-us/windows/hardware/gg463266[/quote]
Thanks, great link, good read.
Anyway, here are the histograms I promised:
!http://img600.imageshack.us/img600/6314/timingtestbusywaitcusto.png(busy waiting, custom timer)!
This looks the way it's supposed to! No significant interruptions, a small standard deviation, and a perfect mean of 0.1 ms.
Now the same busy waiting, measured with GetTickCount():
!http://img10.imageshack.us/img10/3640/timingtestbusywaitgetti.png(busy waiting, GetTickCount)!
Note the bump at around 15 ms and the bigger standard deviation! However, the mean is still "clean", a perfect 0.1 ms.
With usleep(), the picture looks different (and the test took a while longer):
"Real" results with custom timer, this is what happened in reality:
!http://img339.imageshack.us/img339/2710/timingtestqusleepcustom.png(usleep, custom timer)!
So the usleep() call took 10 ms almost every time.
And finally, measured with GetTickCount(), the picture is skewed again:
!http://img828.imageshack.us/img828/7318/timingtestqusleepgettic.png(usleep, GetTickCount)!
Hopefully this is not too off topic and not annoying anybody -- I find it quite interesting.
[quote author="peppe" date="1311757068"] I still think that using sleep() in any code is usually wrong, and relying on the fact that it's somehow accurate is even more wrong -- noone gives you that guarantee. [/quote]
Now this is interesting, could you maybe elaborate or point me to some article etc? Let me describe my problem:
I'm doing heavy image processing in a 10 ms time "slot", since my cameras are running at 100 FPS. Sometimes I finish early, i.e. within 3 ms. My working thread is then idle for 7 ms and keeps polling the cameras until a new frame arrives. I'd very much like not to burn CPU during this time, so it would be good to pause the polling for 0.1 ms each iteration. Right now I'm back to busy waiting, as I can't afford to "miss" a frame by a few ms. Unfortunately this burns CPU time, which the camera driver's thread could probably put to use itself (as it's doing some MJPEG decompression).
If you see a better design, that would be interesting...
Thanks everybody!
-
You have to be careful when people tell you not to call a function because it's "dangerous". If we listened to them, we could not call QCoreApplication::processEvents(), QThread::sleep(), QObject::moveToThread() or others I haven't read about. They say this because of the context in which they are programming; they have guidelines that limit what they can do.
Of course you should never call sleep() in the GUI main thread (if you don't moveToThread() your QThread, then it's still in the main thread, and then I can see why calling sleep() should be avoided), and if you call processEvents(), preferably do it from the GUI main thread.
I built a Qt app on Linux that uses all of these and it's been running for a year now without ever crashing. Also, my app often shows its CPU usage as 0% in the system monitor (on Windows this would be the Task Manager); it's very efficient. I do have an Intel i7 CPU.
I checked the accuracy of my sleep periods and they are pretty much on the dot, so I don't know why you are having these problems; perhaps because you are on a Windows platform.
-
[quote author="yan bellavance" date="1311786486"]
Of course you should never call sleep() in the GUI main thread (if you don't moveToThread() your QThread, then it's still in the main thread, and then I can see why calling sleep() should be avoided)
[/quote]
A QThread has only one single place where execution happens in another system thread: the run() method. If you call exec() within run(), you have an event loop running in that other system thread. moveToThread() just tells a QObject which QThread's event loop it should use. If you create a new QThread and do nothing else, its events (signals & slots) will still be processed by the main application's system thread. moveToThread(this) tells the QThread: "hey, use your own event loop for your own events".
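For reference, the worker-object approach that those "you're doing it wrong" articles recommend looks roughly like this (Worker is just a made-up name for illustration, and cleanup is omitted):
@
#include <QObject>
#include <QThread>

class Worker : public QObject {
    Q_OBJECT
public slots:
    void doWork() {
        // runs in the thread that 'this' has been moved to
    }
};

// Setup, e.g. somewhere in the GUI thread:
// QThread* thread = new QThread;                // default run() just calls exec()
// Worker* worker = new Worker;
// worker->moveToThread(thread);                 // worker's slots now run in 'thread'
// QObject::connect(thread, SIGNAL(started()), worker, SLOT(doWork()));
// thread->start();
@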
[quote author="yan bellavance" date="1311786486"]
I built a Qt app on Linux that uses all of these and it's been running for a year now without ever crashing. Also, my app often shows its CPU usage as 0% in the system monitor (on Windows this would be the Task Manager); it's very efficient.
[/quote]
Yes, sleeping frees up CPU time; that's why it's "cool".
[quote author="yan bellavance" date="1311786486"]
I checked the accuracy of my sleep periods and they are pretty much on the dot, so I don't know why you are having these problems; perhaps because you are on a Windows platform.[/quote]
Yes: look at the thread title, the previous posts, or the links that have been posted. This is a Windows-specific problem.
-
bq. As I found out before, and as I linked: No, moveToThread(this) is not a good thing. Good pages to read are:
I have read those articles and I am not convinced. If you know what you're doing, it's not a problem. I know what the pitfalls are and have implemented my program bearing that in mind. Be aware that QtConcurrent is not a complete replacement for QThread; there are many things it can't do, and it can actually be more "buggy".
-
[quote author="LinusA" date="1311777241"]
[quote author="peppe" date="1311757068"] I still think that using sleep() in any code is usually wrong, and relying on the fact that it's somehow accurate is even more wrong -- noone gives you that guarantee. [/quote]
Now this is interesting, could you maybe elaborate or point me to some article etc? Let me describe my problem:
I'm doing heavy image processing in a 10 ms time "slot", since my cameras are running at 100 FPS. Sometimes I finish early, i.e. within 3 ms. My working thread is then idle for 7 ms and keeps polling the cameras until a new frame arrives. I'd very much like not to burn CPU during this time, so it would be good to pause the polling for 0.1 ms each iteration. Right now I'm back to busy waiting, as I can't afford to "miss" a frame by a few ms. Unfortunately this burns CPU time, which the camera driver's thread could probably put to use itself (as it's doing some MJPEG decompression).
If you see a better design, that would be interesting...
Thanks everybody![/quote]
Yes, the "usually" was specifically referred to hardware programming. I don't have enough information, but are you sure that you can't switch to a some kind of interrupt-driven design? That is, being notified by the camera that a new frame is available? Otherwise just use a QTimer which has 1ms resolution on pretty much any modern system.