Introducing QtMetrics

  • We are now ready to launch our QtMetrics page to the public.

    Description from its wiki page:
    The purpose of the Qt Metrics web portal is to visualize and communicate the progress of Qt quality, focusing first on Continuous Integration (CI) but later covering e.g. code coverage, autotest, release test automation, and manual test metrics and reports. The goal is to automate the laborious manual work required e.g. in the CI release cycle to report the key information, as well as to provide real-time data with good performance. The target audience is the Qt integration teams (in Digia) and the global Qt developer community.

    The main page can be found here:

    The wiki page describing it in more detail can be found here:

    Note 1: Loading it the first time can take a while. It caches data in your current session, making browsing more user friendly. If it hasn't loaded within a minute, hit reload. It sometimes keeps trying to load while it is indexing new data in the background.

    Note 2: The data you see is currently updated manually about once a day. This will change as soon as we have time to implement an automated function that triggers after a build.

  • Cool, thank you for that!

  • Would you mind if I moved this into the Announcements forum? It is a bit hidden here.

  • Ok, I wasn't aware of that. Thanks for the tip. I added this there.

  • New Report builder version (v1.1) released, including the database status (new top right box). It will also show if a database rebuild is in progress, which may cause some slowness when using the database. Times are shown in local time; please let me know if you notice any erroneous calculations.

  • New report builder version v1.4 is now available. See the backlog for the list of updated functionality (at the bottom of the page).

    In addition, the description wiki page has been updated accordingly, including instructions for a couple of use cases for your study and analysis purposes.

    Hope this all helps you in using the page.

  • The database behind the Qt metrics page has been cleaned today to include only data since 2013-04-01. This also removes the few obsolete master branches from the views.

  • Thank you for that. Really nice!

  • Looking at the Qt metrics output, I'd appreciate some clarification.

    • Qt5_dev_Integration, ID 302 shows as a FAILURE on 2013-12-11 with 5 significant failed autotests and 35 insignificant failed autotests
    • Qt5_release_Integration, ID 286 shows as a SUCCESS on 2013-12-08 with 45 significant failed autotests and 99 insignificant failed autotests

    What is the process to handle significant (and insignificant) failed autotests?

    How can a test be considered a success when it has 45 (or even 1) significant failed autotest?

    Qt Metrics is a great addition to Qt development, but is there a "formal" process in place or under development that explains what to do with the Qt Metrics?



  • Hi, thanks for your question.

    The answer is not fully visible in the 1st level view. Please click Qt5_dev_Integration and Qt5_release_Integration to view their details.

    Here, the Qt5_dev shows that three confs have significant autotest failures. Two of those confs are tagged insignificant ('Yes' in the related column) while one is not. This one (win32-msvc2010_Windows_7) is causing the dev build to fail.

    Then, Qt5_release shows that six confs have significant autotest failures. However, all six of those confs are tagged insignificant, so the release build is still successful.

    The tagging of autotests and confs as insignificant may be somewhat complicated. The Autotest dashboard (the last box) shows all the combinations. You can sort the table by clicking the header titles. The Failure category link describes each combination. In this case:

    2) Failed Significant Autotests in Insignificant CI Configurations
    Description: This Autotest fails but does not block the CI Configuration build because the CI Configuration is set insignificant (possibly failing Autotests will not result in a build failure).
    Corrective action: These Autotests or the code under test should be fixed, or the failed Autotests should be marked individually insignificant for the relevant configurations, to improve CI coverage.
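    The rule above can be sketched in a few lines of Python. This is only an illustrative model of the pass/fail logic described in this thread, not the actual CI implementation; the conf names and failure counts are made up (except win32-msvc2010_Windows_7, mentioned above):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Conf:
        name: str
        significant: bool              # False = conf tagged insignificant
        failed_significant_tests: int  # significant autotest failures in this conf

    def build_status(confs):
        """A build fails only if some *significant* conf has significant
        autotest failures; insignificant confs never block the build."""
        blocking = [c for c in confs
                    if c.significant and c.failed_significant_tests > 0]
        return "FAILURE" if blocking else "SUCCESS"

    # Qt5_dev: one of the failing confs is significant -> build fails
    dev = [Conf("win32-msvc2010_Windows_7", True, 5),
           Conf("linux-g++_Ubuntu", False, 3),
           Conf("macx-clang_OSX", False, 2)]

    # Qt5_release: all failing confs are tagged insignificant -> build passes
    release = [Conf(f"conf{i}", False, n) for i, n in enumerate([10, 8, 7, 9, 6, 5])]

    print(build_status(dev))      # FAILURE
    print(build_status(release))  # SUCCESS
    ```

    This is why Qt5_release can report 45 significant failed autotests yet still show SUCCESS: none of those failures occur in a conf that is allowed to block the build.
    
    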

    Hope this helps. Please don't hesitate to ask further questions if needed.


  • Juha,

    Your reply helps. It's just strange to see Qt5_release_Integration marked as a success with 45 significant failed autotests. Perhaps the configuration headings in the last detail table (i.e., Autotest Dashboard) should be called "blocking" and "non-blocking." I would also recommend that some verbiage be added to section 3 of the "Qt Metrics Wiki" to provide more clarification.

    I'd also still like to know if there's a “formal” Qt process in place or under development for the build/release process, etc., including information that explains what to do with the Qt Metrics.



  • Hi all. A dramatic performance improvement is now available with the new report builder version v2.1.

  • Indeed dramatic is the correct word. Kudos for that!

  • Qt Metrics has been improved with a couple of new features (v2.4).

    See build and autotest results for previous project builds:
    Select a Project and see the data for any listed build by clicking the build number (the latest build is shown by default). You may also combine with the timescale filter to see the build results since a specific date.

    Autotest dashboard to list autotests by failure %:
    Select “show” from the Autotest dashboard to see the autotests that fail most often in the builds they are run in. Calculating this may take time, especially when the timescale filter is used, hence the “show/hide” selection. Once calculated, you may dive into the autotest view (level 2) and back to the autotest list (level 1) without recalculation, until you change the project or timescale filters.

    Test case (function) data in Autotest dashboard:
    Select a Project and its Autotest (tst_xxx) to see the test cases that failed and caused the test set to fail. Hopefully this helps in identifying the problems and making the needed corrections. You may also use the build selection in the Project dashboard to see previous build results. Data is read from the test result XML files and may therefore take a moment to update.

    There is also a new page on how to read autotest metrics from QtMetrics:
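    For readers curious what the "failure %" in the Autotest dashboard could mean, here is a minimal sketch: for each autotest, the share of builds it ran in where it failed. This is an assumption about the metric, not the actual QtMetrics code, and the test names and results below are invented:

    ```python
    from collections import defaultdict

    def failure_percentages(results):
        """results: list of (autotest, passed) tuples, one entry per build run.
        Returns {autotest: failure % over the builds that autotest ran in}."""
        runs = defaultdict(int)
        fails = defaultdict(int)
        for test, passed in results:
            runs[test] += 1
            if not passed:
                fails[test] += 1
        return {t: 100.0 * fails[t] / runs[t] for t in runs}

    # Invented example data: tst_qwidget fails in 2 of 3 builds it ran in
    results = [("tst_qwidget", False), ("tst_qwidget", True),
               ("tst_qwidget", False), ("tst_qnetwork", True)]
    pct = failure_percentages(results)
    print(round(pct["tst_qwidget"], 1))  # 66.7
    print(pct["tst_qnetwork"])           # 0.0
    ```

    Computing this across every build in a timescale is what makes the dashboard slow to populate, which would explain the explicit "show/hide" selection mentioned above.
    
    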
