Important: Please read the Qt Code of Conduct - https://forum.qt.io/topic/113070/qt-code-of-conduct
Memory leak in thread with QTcpSocket
OSX 10.6.8 and later, up to Mavericks (no testing has been done with Yosemite as yet).
I'm using ->read() and ->write() in a thread, within the context of a continually open QTcpSocket TCP connection (no disconnects and reconnects... it just stays connected) that was opened in the thread's ->run().
I have to send a message every couple of seconds or so so that the remote device keeps its end connected. When I send the keepalive message, the device responds with an ACK message. To do the timing, I use the ->waitForReadyRead(delay) call, because in this state no messages originating from the device are expected without the stimulus of a keepalive, and a certain number of full ->waitForReadyRead(delay) calls add up to my delay. Once the delay time has passed, I send a single keepalive message with ->write().
The entire process, from obtaining the QTcpSocket with new to repeatedly calling ->read() and ->write() and finally (though not relevantly) closing the QTcpSocket is done in the thread's ->run() loop.
There's no communication between this thread and any other part of the program until the program closes, at which point this thread will end due to a variable in the parent class becoming true (that's what the read/write while loop is conditional upon).
This all works fine, in the sense that the messages go out and come back as expected and on time, and the device stays with me, i.e. it doesn't drop its connection, and the app overall does what it is supposed to, the way it is supposed to. In other words, I have the functionality I require.
However -- either the ->write() or the ->read() (or both) creates a fairly severe memory leak, on the order of a megabyte every 20 seconds, in a context where I'm sending and receiving about a message a second. The messages as sent and received are about 4 bytes, though of course in the context of a network packet they will be padded to be much larger. I have checked bytes sent and it is correct, and ->bytesAvailable() is always zero after I've caught my message (there won't be another for a couple of seconds).
If I comment out the write, but leave all the rest of the logic running so the waitForReadyRead() is still running, the leak stops. That seems to imply it's the ->write(), but then again it may be message reception rather than just waiting for one. Without the ->write(), though, no message will arrive.
To look for the leak, I have created a minimal test application where the code does the same thing, sends keepalives and catches ACKS in the same way as far as the ->read() / ->write() logic and timing goes, but in the test case it is all called from main(), no thread, and there are no leaks in this case -- so I presume the leak is something to do with the TCP code being in a thread.
But in the real app, I'm not using any Qt objects other than QTcpSocket, and that's not being used across threads. I don't have the option of doing this in the top-level event loop; there's a great deal of other stuff going on, and I can't be sure of having enough CPU at the times I need it, as the user's demands are unpredictable. So I must have this functionality in a thread.
I should also point out that the app typically runs between 9 and 16 threads doing other things, and it all works great, no leaks. And the app is good sized -- I do all manner of things. Just this TCP thing leaks. And it's probably the simplest thread in the whole thing.
Apparently I'm doing something really wrong. But I have no clue as to what. Can anyone help?
Qt 4.7 is pretty old; you should test against at least 4.8.6 (which is the official version supporting Mavericks).
Also, without seeing your code it's pretty much crystal ball debugging. Can you share your thread class?
As to the version of Qt: I'm using 4.7 because it has proven it has excellent compatibility with OSX 10.6 for both development and application deployment and later OSX versions as well as Windows XP and later. I'm not interested in adopting a later version of Qt that may either disenfranchise the users of older operating systems, or could break my code. This is not a small application. The app has to run on 10.6; the development has to be on 10.6. I do wish they had fixed the audio bugs I reported before going on to a later Qt, but I can live with that.
As to the code, I can give you only fragments, and it seems of little purpose, as the code isn't really at issue so much as the technique. The whole thread is quite a bit of code; it just isn't running most of it under these circumstances.
The part that is running here, however, is a very simple subset just as I described, and also as I mentioned, when built in a test app to do the same thing from main() works just fine.
The difference here is that the real thing is running in a thread. Under startup conditions, the app does nothing other than run the keepalive sequence; and under those conditions the only Qt object is the QTcpSocket, which is created with new (from NULL) when the thread starts. What happens after the thread is done is not relevant, because the leak occurs during the thread's ->run().
There are zero allocations or deallocations, and no new or delete operations, during the thread's ->run().
I should mention that my next task is to implement the test app with its own subthread, and see what happens there. That'll take a few hours, though, to make sure I'm doing the same things and nothing else. I expect it to leak under those conditions, though I'll be delighted if it doesn't (and confused, lol).
And now for the code. startup in ->run() :
m_pTcpSocket = new QTcpSocket;
m_TcpState = TCP_NOT_CONNECTED;
getInLowQueue(); // send anything queued to go
ManageTcpClientConnection(); // catch anything incoming
For the above, getInLowQueue() does the transmission, but the first time through does nothing because there's nothing in the queue and the socket isn't valid. Then the first time through ManageTcpClientConnection() the socket is fired up, a read is done, and off we go. The loop continues until the thread is terminated externally by the flag in the parent class, which in turn resides in the mainwindow class.
Read code runs like this:
while ((bytesavailable = m_pTcpSocket->bytesAvailable()))
AssembleMsg(Buf, bytesavailable); // state machine empties Buf
So what I'm asking here isn't a code review so much as it is any pointers about running QTcpSocket in a thread.
I didn't imply that a review was needed. It's just that between your description of the code's functionality, the code as currently written, and my understanding of the description, we could be losing time hunting a bug in different directions.
The threaded fortune server example shows how to write a thread using a QTcpSocket; that might be of some help.
Qt 4's latest version runs on 10.6
Yes, the example runs fine. But my stuff doesn't. So there's something tricky about it at the edges somewhere. My environment is complex -- many threads, huge GUI, etc. Something has been done wrong, by me no doubt, but it's subtle -- subtle in the sense that everything works perfectly, but is still broken. Leaks.
Kind of like how the audio in Qt 4 works to 200 kHz and past under OSX, but only to 48 kHz under Windows. That really ruined my day -- but I sure thought it was me for a long time. It just turned out that Qt was fundamentally broken under Windows. Eventually, I had to tell my customers that if Qt didn't fix it, it wasn't going to get fixed.
This is likely, I think, to be similarly buried in arcana. Because heck, it works fine... it just leaks like a sieve.
Might not even be the networking. Might be something else that breaks when networking runs. What I do know is that if the networking is quiescent, there's no leak.
There appears to be no way to track memory use in a many-threaded, real-time application.
Did you run your application with Instruments active?