QElapsedTimer possibly broken?



  • I've been profiling an interpreter I'm implementing, and I used QElapsedTimer, but the results just weren't making any sense. Then I noticed that shortening the program had little effect on the QElapsedTimer measurement, so I switched to profiling with raw processor clocks, which confirmed that the QElapsedTimer numbers were way off for some reason.

    Qt 4.8 with GCC on Windows 7



  • How did you use it? Show us some code, please.
    Also, consider using "QTestLib":http://qt-project.org/doc/qt-4.8/qtestlib-manual.html and the "QBENCHMARK":http://qt-project.org/doc/qt-4.8/qtest.html#QBENCHMARK macro.
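    Something like this would do (a minimal sketch; the class name, slot name, and dummy loop are just placeholders for your real workload):

    @#include <QtTest/QtTest>

    class DispatchBenchmark : public QObject
    {
        Q_OBJECT
    private slots:
        void benchmarkDispatch()
        {
            // QBENCHMARK repeats the body until the measurement stabilizes
            QBENCHMARK {
                volatile int acc = 0; // dummy workload; put your real code here
                for (int i = 0; i < 100000; ++i)
                    acc += i;
            }
        }
    };

    QTEST_MAIN(DispatchBenchmark)
    #include "dispatchbenchmark.moc" // assumes this file is dispatchbenchmark.cpp@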



  • Well, I wasn't doing anything extraordinary, if that's what you mean...

    @QElapsedTimer timer;
    .
    .
    timer.restart();
    // do some work
    cout << "work took " << timer.nsecsElapsed();@



  • Weird. I just checked in different scenarios, and modifying the code definitely influences the QElapsedTimer results, which seem quite accurate... Have you tried simple Sleep() tests?
    I'm on Windows 7, Qt 4.8.2, GCC 4.6.2
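    For instance, a quick sanity check like this (a minimal sketch; Sleep() comes from windows.h and takes milliseconds) should print roughly 1000 ms:

    @#include <QElapsedTimer>
    #include <windows.h>
    #include <iostream>

    int main()
    {
        QElapsedTimer timer;
        timer.start();
        Sleep(1000); // sleep for about one second
        std::cout << "elapsed: " << timer.elapsed() << " ms" << std::endl;
        return 0;
    }@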



  • The timer is sometimes accurate, sometimes not. I was profiling the performance of this switch:

    @// dispatch table for computed gotos (GCC labels-as-values extension)
    static void* table[] = {
        &&do_add,
        &&do_sub,
        &&do_mul,
        &&do_div,
        &&do_end,
        &&do_err,
        &&do_fin
    };

    // advance to the handler for the next opcode
    #define jump() goto *table[op[c++]]

    jump();

    do_add:
        add(in1[c], in2[c], out[c]); jump();
    do_sub:
        sub(in1[c], in2[c], out[c]); jump();
    do_mul:
        mul(in1[c], in2[c], out[c]); jump();
    do_div:
        div(in1[c], in2[c], out[c]); jump();
    do_end:
        // cout << "end of program" << endl;
        goto *table[6]; // index 6 is do_fin
    do_err:
        cout << "ERROR!!!" << endl; goto *table[6];
    do_fin:@

    The switch iterates through a byte array (op) representing bytecode, and what I noticed is that varying the length of the array all the way from 100 million down to 100 always returned a varying but still astronomically high nsecsElapsed().

    I profiled against the same workload compiled to native machine code, and instead of getting a constant ratio between interpreted and native performance as the program length varied, the difference became preposterously more pronounced as the program got shorter. That was when I suspected QElapsedTimer might be the culprit; I switched to the clock() function from the standard C library and got a consistent interpreted-to-native ratio as the program length varied.

    Timing the switch was broken each time, whereas timing the native code was adequate.
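    For reference, the clock()-based measurement looked roughly like this (a minimal sketch; the arithmetic loop is just a stand-in for the dispatch loop above):

    @#include <ctime>
    #include <iostream>

    int main()
    {
        volatile long long acc = 0; // stand-in workload
        clock_t start = std::clock();
        for (long long i = 0; i < 100000000LL; ++i)
            acc += i;
        clock_t end = std::clock();
        std::cout << "work took "
                  << double(end - start) / CLOCKS_PER_SEC
                  << " s" << std::endl;
        return 0;
    }@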

