Debug: Performance issue with QJsonDocument destructor
-
Hello
While evaluating the performance of the Qt framework I developed a small parser for a JSON document whose content is very similar to a CSV file. I am running a small Windows desktop application compiled with MSVC 2022.
The file I am parsing contains approximately 800k numbers, arranged in a handful of different JSON arrays.
Parsing is implemented in a very straightforward way, as highlighted in several different examples:
QFile file(m_fullPath);
if (file.open(QIODevice::ReadOnly)) {
    QByteArray content = file.readAll();
    QJsonParseError parseError;
    QJsonDocument doc = QJsonDocument::fromJson(content, &parseError);
    file.close();
    if (parseError.error == QJsonParseError::NoError) {
        // actual content parsing with QJsonValue and QJsonArray
    }
}
When I run from Qt Creator but without the debugger (debug-compiled binary): the call to QJsonDocument::fromJson(content, &parseError) takes approximately 2 seconds to parse the whole file, and the doc destructor takes approximately 100 ms when it goes out of scope.
When I run within the Qt Creator debugger (same debug-compiled binary): the call to QJsonDocument::fromJson(content, &parseError) takes approximately 3 seconds to parse the whole file; the problem is that in this configuration the execution of the QJsonDocument destructor for doc takes more than 30 seconds to complete.
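For what it is worth, here is a minimal sketch of how the two phases can be timed separately, assuming QElapsedTimer and a nested scope so the destructor runs at a known point (the timeParse helper is illustrative only, not the actual application code):

#include <QElapsedTimer>
#include <QFile>
#include <QJsonDocument>
#include <QDebug>

// Illustrative helper: times QJsonDocument::fromJson() and ~QJsonDocument()
// separately by forcing destruction at the end of a nested scope.
void timeParse(const QString &path)
{
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return;
    const QByteArray content = file.readAll();
    file.close();

    QElapsedTimer timer;
    timer.start();
    {
        QJsonParseError parseError;
        QJsonDocument doc = QJsonDocument::fromJson(content, &parseError);
        qDebug() << "fromJson:" << timer.restart() << "ms";
    } // doc goes out of scope here
    qDebug() << "~QJsonDocument:" << timer.elapsed() << "ms";
}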
I might be wrong, but it looks like the debugger goes to great lengths to keep track of all the memory being allocated for each QJsonValue created (and all of their references) during the parse process.
If my guess is correct: is there any way to instruct the debugger to trust memory allocations/deallocations within the QJsonDocument tree, and to avoid tracking them all?
If my guess is wrong: is there anything I should be aware of to avoid stumbling into this behaviour? (Of course I can work with a smaller file when running in debug mode, and will do so if there is no valid alternative.)
Thanks in advance
-
@BaroneAshura
You talk about a debugger, but at no time do you mention which one/toolchain/platform you are using, which I would have thought is germane to your question.
Speaking at least for gcc/gdb (MSVC should be similar, unless you are using some special feature of it of which I am not aware): C++ debuggers (as opposed, potentially, to, say, a Python one) do not do any kind of "keep track of all the memory being allocated" (a tool like valgrind would do that, but that is not what you are using). Rather, when you compile for debug, the compiled code (and in particular the debug versions of the runtime libraries you are linked with) already contains generated code to support debugging. That might, or might not, include some extra work on memory allocations/deallocations at the lowest level in the runtime library code. But it is not something controlled by the debugger, so you are not going to be able to do anything to the debugger to affect this.
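As an illustration of where that lowest-level work lives with MSVC specifically (my sketch, nothing the code in this thread actually calls): the debug CRT exposes its heap bookkeeping through <crtdbg.h>:

#include <crtdbg.h>

int main()
{
#ifdef _DEBUG
    // The debug CRT, not the debugger, owns this bookkeeping: each
    // allocation is recorded in a list and freed blocks are filled with
    // sentinel patterns. _CRTDBG_CHECK_ALWAYS_DF would additionally
    // validate the whole heap on every alloc/free, which makes
    // allocation-heavy code drastically slower.
    int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG); // read current flags
    flags &= ~_CRTDBG_CHECK_ALWAYS_DF;               // keep per-op checks off
    _CrtSetDbgFlag(flags);
#endif
    // ... application code ...
    return 0;
}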
The first thing to try is running the debug-compiled code outside of the debugger itself. I would expect that to take about the same time as when run under the debugger?
30 seconds to do something with about 800k numbers/strings/structures sounds a bit slow, but it is what it is. Code compiled for debug can indeed be a lot slower. As an example, people report that running Qt QWebEngine code compiled for debug can be painfully slow. While you develop you may indeed want to use a smaller input file.
-
@JonB quick responses:
My bad in not mentioning the toolchain and/or platform: I am building a desktop application for Windows using MSVC as the compiler (I will update the post).
To get to your question: as I wrote in my original post, when running the same binary "compiled for debug" within Qt Creator but without the Qt Creator debugger attached, the destructor takes approximately 100 ms.
-
@BaroneAshura
Then that is indeed a big difference. I will bow out, because I do not know the details/features of the MSVC debugger, or how Creator communicates with it rather than with gdb. I will say that when I used to develop for Windows with MSVC (no Qt) I found no particular slowness issues when debugging. I don't know how easy this is from where you are now, but can you run the debugging session from Visual Studio instead of Qt Creator, to compare?
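One more thing that might be worth checking (speculation on my part, not something verified in this thread): on Windows, a process created by a debugger runs with the operating system's debug heap unless _NO_DEBUG_HEAP=1 is set in its environment, and freeing ~800k blocks under the debug heap's per-free checking would fit a destructor that suddenly takes 30 seconds. A small sketch to log both conditions from inside the application (logDebugHeapHints is a made-up helper name):

#include <windows.h>
#include <QDebug>

// Sketch: report whether a debugger created or attached to this process,
// and whether the Windows debug heap has been opted out of.
void logDebugHeapHints()
{
    qDebug() << "debugger attached:" << (IsDebuggerPresent() != FALSE);
    qDebug() << "_NO_DEBUG_HEAP =" << qgetenv("_NO_DEBUG_HEAP");
}

If setting _NO_DEBUG_HEAP=1 in the run environment and then launching under the debugger brings the destructor back to roughly 100 ms, it is the debug heap rather than the debugger itself doing the extra work.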