Memory leak when using queued signal&slot connections
-
example code:
```cpp
// haha.hpp
#ifndef HAHA_HPP
#define HAHA_HPP

#include <QObject>
#include <QDebug>

class haha : public QObject
{
    Q_OBJECT
signals:
    void send(int num);
private slots:
    void receive(int num)
    {
        if (num != ++count)
            qDebug() << "!";
    }
public:
    haha()
    {
        connect(this, &haha::send, this, &haha::receive, Qt::QueuedConnection);
    }
    int count = 0;
};

#endif // HAHA_HPP
```
```cpp
// main.cpp
#include <QCoreApplication>
#include <QDebug>
#include "haha.hpp"
#include <malloc.h>
#include <iostream>

int main(int argc, char *argv[])
{
    {
        QCoreApplication a(argc, argv);
        {
            haha ha;
            for (int i = 1; i < 45000001; i++)
                ha.send(i);
            qDebug() << ha.count;
            QCoreApplication::processEvents();
            qDebug() << ha.count;
            // I can do this again, and the memory use will still be 9.1 GB before trim
            // for (int i = 45000001; i < 90000001; i++)
            //     ha.send(i);
            // QCoreApplication::processEvents();
        }
        // 9.1 GB RAM used
        std::cout << "before trim" << std::endl;
        std::cin.get();

        malloc_trim(0);
        QMetaObject::invokeMethod(&a, "quit", Qt::QueuedConnection);
        a.exec();
        // 1.0 GB RAM used
        std::cout << "QApp end" << std::endl;
        std::cin.get();
    }
    // 1.0 GB RAM used
    std::cout << "QApp dest" << std::endl;
    std::cin.get();

    malloc_trim(0);
    // 1.0 GB RAM used
    std::cout << "trim again" << std::endl;
    std::cin.get();
}
```
I was wondering whether Qt's signal & slot mechanism calls slots in the same order in which the queued signals were emitted, so I wrote a simple test to figure it out. Then I noticed the crazy memory use in that test, and it never went down!
I tried to rewrite the test in other ways to see whether the memory leak was my own fault, but it seems Qt is to blame, not me. :v The code above is the last test I did. Tested on Linux with both GCC 8 and Clang 8, and on Windows with MinGW 7.3.0, the issue appeared in all three environments.
The funny thing is, on Windows there is no malloc_trim(), but after all the queued signals were processed, the test program occupied roughly the same amount of memory as it did on Linux after malloc_trim(). IDK if these are useful:
- If I kept the test program running (after the trim) and deliberately triggered a system out-of-memory (and swap) situation, the memory occupied by the test program was reduced by a few hundred MB (about 340 MB, from 1.0 GB to 660 MB, IIRC).
- If I do another 45000000 calls to ha.send(i) after QCoreApplication::processEvents(), the memory use is still 9.1 GB before the trim.
I do know there is a dedicated place to submit Qt bug reports, but I haven't used it before and it seems complicated. I just wanted to know if there is any temporary workaround for now, so I posted here.
PS: the memory use figures come from GNOME System Monitor and the Windows Task Manager.
-
-
Hi,
When using a queued connection, a copy of the parameters is sent. What you are currently doing is creating 45000000 int objects. Since you do that in a tight loop before Qt's event loop is started, they just queue up, hence your memory usage. There's no leak.
As for the memory staying the same, when an application releases memory, the OS does not necessarily reclaim it immediately.
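For illustration, here is a minimal sketch (not code from this thread) of how draining the queue periodically inside the emit loop keeps the number of pending events, and therefore the memory use, bounded. The batch size of 10000 and the reuse of the haha class from the question are assumptions:

```cpp
// bounded_queue.cpp -- illustrative sketch only, reusing the haha class from the question.
#include <QCoreApplication>
#include <QDebug>
#include "haha.hpp"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    haha ha;

    for (int i = 1; i <= 45000000; ++i) {
        ha.send(i);                            // each emit queues one event plus a copy of the int
        if (i % 10000 == 0)
            QCoreApplication::processEvents(); // drain the queue so pending events stay bounded
    }
    QCoreApplication::processEvents();         // deliver whatever is still pending
    qDebug() << ha.count;                      // should match the number of emits
    return 0;
}
```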
-
I am not an operating system expert or a real programmer (just getting started with it).
But I assume this "when an application releases memory, the OS does not necessarily reclaim it immediately" behaviour is specific to Unix-like OSes, which is why they have an extra malloc_trim() function. If I create a huge STL container object and then trim after it has been destroyed, the OS actually gets all the memory allocated for the container back.
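As a point of comparison, here is a minimal sketch of that kind of STL-container-plus-malloc_trim() experiment (a reconstruction, not the original test; the vector size is an assumption, and it is Linux/glibc-only because of malloc_trim()):

```cpp
// stl_trim_test.cpp -- rough reconstruction of the experiment described above.
#include <malloc.h>
#include <iostream>
#include <vector>

int main()
{
    {
        std::vector<int> big(90000000);   // roughly 360 MB of ints
        std::cout << "vector alive" << std::endl;
        std::cin.get();                   // check RSS in the system monitor here
    }                                     // vector destroyed, memory returned to the allocator
    malloc_trim(0);                       // ask glibc to hand free heap pages back to the OS
    std::cout << "after trim" << std::endl;
    std::cin.get();                       // RSS should have dropped back down
    return 0;
}
```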
This does look weird to me: in main(), if I put the whole original test in a for loop and run it any number of times, the memory use stays the same: 9.1 GB before the trim and 1.0 GB after. Does that mean the memory the OS does not reclaim is reusable by the Qt framework? Because if I do other memory allocations, like creating huge STL containers, they just request new memory, in addition to this already allocated 1.0 GB.
```cpp
int main()
{
    for (int x = 0; x < 10; x++) {
        // the original test code
    }
    // memory use is the same: 9.1 GB before trim and 1.0 GB after

    {
        std::vector<int> hehe;
        hehe.resize(90000000);
        std::cout << "vector!" << std::endl;
        std::cin.get();
        // this adds an extra 0.3 GB, 1.3 GB in total now
    }
}
```
Plus, as mentioned in the first post, even if I force a full out-of-memory and swap situation, the OS only reclaims a small proportion of that 1.0 GB.
edit: F me! My own explanation looks confusing even to me.
Two main points here:

- In my test, STL containers can release all the memory they allocated, but Qt's queued signal & slot connections cannot.
- The memory allocated by Qt seems to be reusable only by Qt itself.
edit again! What I'm saying is: if it's just the OS that doesn't reclaim the freed memory, then this memory should be usable by other memory allocation operations, right? But in the test the vector doesn't reuse that 1.0 GB of "released" memory.
-
-
Because there is nothing to catch, you can't compare the handling of a big vector of data with the creation of huge amounts of disconnected objects. One thing I missed in my explanation is that besides the copy of the parameter, a queued connection also implies the creation of an event that is sent to the object on the receiving end of the connection.
-
@sgaist said in Memory leak when using queued signal&slot connections:
> Because there is nothing to catch, you can't compare the handling of a big vector of data with the creation of huge amounts of disconnected objects.
OK, I think I understand this part, if by "disconnected objects" you actually meant "discontinuous objects". :P
> One thing I missed in my explanation is that besides the copy of the parameter, a queued connection also implies the creation of an event that is sent to the object on the receiving end of the connection.
But this one is tricky for me.
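For what it's worth, here is a small sketch (not code from this thread) that makes that event visible by counting the QEvent::MetaCall events a queued connection posts to the receiving object. The Receiver class and the count of 1000 emits are illustrative assumptions:

```cpp
// metacall_count.cpp -- illustrative sketch only.
#include <QCoreApplication>
#include <QDebug>
#include <QEvent>
#include <QObject>

class Receiver : public QObject
{
    Q_OBJECT
signals:
    void send(int num);
private slots:
    void receive(int) {}
public:
    Receiver()
    {
        connect(this, &Receiver::send, this, &Receiver::receive, Qt::QueuedConnection);
    }
    int metaCalls = 0;

    bool event(QEvent *e) override
    {
        if (e->type() == QEvent::MetaCall)   // one of these is allocated per queued emit
            ++metaCalls;
        return QObject::event(e);            // let QObject dispatch the actual slot call
    }
};

#include "metacall_count.moc"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    Receiver r;
    for (int i = 0; i < 1000; ++i)
        r.send(i);                           // 1000 queued emits -> 1000 pending events
    QCoreApplication::processEvents();       // deliver them
    qDebug() << r.metaCalls;                 // expected to print 1000
    return 0;
}
```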
I just did some new tests, trying to fill this 1.0 GB of released memory with discontinuous objects... but I don't know how to do it correctly. I tried with:

- a hash map with int* as the key, inserting a whole lot of entries in a for loop
- just randomly creating raw pointers to int/char objects
But it seemed both just allocated new memory blocks instead of reusing that 1.0 GB.
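For reference, this is roughly the kind of small-allocation test being described (a sketch; the container choices and counts are assumptions, not the code actually used):

```cpp
// fill_test.cpp -- a sketch of the kind of small-allocation test described above.
#include <iostream>
#include <unordered_map>
#include <vector>

int main()
{
    // Many separate small heap allocations, roughly like the hash-map test:
    std::unordered_map<int*, int> map;
    std::vector<int*> keys;
    for (int i = 0; i < 10000000; ++i) {
        int *p = new int(i);                 // each new int is its own small allocation
        keys.push_back(p);
        map[p] = i;
    }
    std::cout << "allocations done" << std::endl;
    std::cin.get();                          // check in the system monitor whether RSS grew further
    for (int *p : keys)
        delete p;                            // clean up
    return 0;
}
```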
Were my methods terribly wrong or something? :(
I understand that in real-life programs it won't be possible to queue 45000000 events (I don't believe people would go for this kind of terrible design), so this should not be a concern. But I'm just getting more and more curious about this "issue".
-
-
What exactly is your purpose in toying with memory management?