[Solved] Memory freed but stays reserved... why?



  • Hello
    I have a strange memory issue. My software creates objects with "new". These objects (representing sample sets) contain QStrings and a QList of another type of object (representing samples, also created with "new" and containing QStrings). Samples are added to the set's QList until the set is complete, at which point it is processed and deleted.

    I've added debug prints to the destructors of both the sample set and the samples. When I delete the sample set, all the child samples are deleted as well (as they should be). My issue is that, when I watch the memory allocated by the program, it goes up as samples are added (as expected), but does not go down when the set is deleted...

    Also, when I then start a new set, the allocated memory seems to stay the same until the number of samples has exceeded the number in the last set, after which it starts to increase again. To me this indicates that when the sample set is deleted, the memory is freed internally but not returned to the system for other processes. That would explain why it stays the same even though samples are added to the new set.

    • My question is why?

    Is this a feature of Qt? Have I made an error? I also have a separate process in which the memory is truly freed, and I can't figure out the difference... Is there a way to force the memory to be freed? And what happens when memory runs low (especially if the "internally freed" memory isn't in use)?

    Thank you
    Richard



  • How do you watch the memory allocated by the program? On what OS are you running it?



  • Linux - using "top". Watching the "VSZ" memory.



  • Ok... I don't know how reliable top is.
    I know that watching the "memory usage" column in the Windows Task Manager will not give you even a hint of what's really going on memory-wise.



  • Hello,

    Take a look at "QList documentation":http://developer.qt.nokia.com/doc/qt-4.8/qlist.html#details

    bq. Internally, QList<T> is represented as an array of pointers to items of type T. If T is itself a pointer type or a basic type that is no larger than a pointer, or if T is one of Qt's shared classes, then QList<T> stores the items directly in the pointer array. For lists under a thousand items, this array representation allows for very fast insertions in the middle, and it allows index-based access. Furthermore, operations like prepend() and append() are very fast, because QList preallocates memory at both ends of its internal array. (See Algorithmic Complexity for details.) Note, however, that for unshared list items that are larger than a pointer, each append or insert of a new item requires allocating the new item on the heap, and this per item allocation might make QVector a better choice in cases that do lots of appending or inserting, since QVector allocates memory for its items in a single heap allocation.
    Note that the internal array only ever gets bigger over the life of the list. It never shrinks. The internal array is deallocated by the destructor, by clear(), and by the assignment operator, when one list is assigned to another.

    I think this explains your problem.



  • Excellent, thank you!
    -Richard


  • Moderators

    I doubt that QList never shrinking is the issue here, assuming that your data is bigger than the list used to manage it.

    I would guess there is one of these two effects at work here:

    Memory fragmentation: you get memory in pages of 4k (other sizes are possible, depending on the OS/CPU architecture). A page of memory is assigned to your application and stays allocated to it for as long as any byte in it is in use. You can end up in situations where huge areas of the memory assigned to your application are actually unused, but every page still has some live data in it that prevents the OS from reclaiming it. Doing lots of new/delete operations that request differently sized memory areas for objects with different life spans can cause trouble here.

    Allocating a page of memory to an application, or returning it to the OS, requires some bookkeeping, so the OS tends to avoid reclaiming memory pages it does not need. Just imagine an application that constantly reserves some memory, does some calculation, and frees it again. Why should the OS bother moving memory to and from the application all the time if there is enough memory around to make everybody else happy?



  • The issue is more basic than Qt itself. The C runtime library, which is responsible for heap management, does not return memory to the system immediately when it is freed.
    Requesting memory from the system and returning it is quite costly. Also, if a program has reached some level of memory consumption, there is a good chance it will soon be needed again, so the runtime tries to avoid unnecessarily releasing memory back to the system.
    To detect memory leaks you have to use special tools like "Electric Fence" or "Valgrind". top or memload are useless for that purpose.



  • [quote author="ThaRez" date="1325678328"]Hello
    I have a strange memory issue. My software creates objects with "new". These objects (representing sample sets) contain QStrings and a QList of another type of object (representing samples, also created with "new" and containing QStrings). Samples are added to the set's QList until the set is complete, at which point it is processed and deleted[/quote]

    It is better not to create new objects and delete stale ones, but instead to funnel the data through a few long-lived "container" objects, keeping their number as small as possible. This way you keep memory consumption low (thus avoiding fragmentation) and speed things up significantly, because heap allocation is expensive in terms of CPU cycles.



  • [quote author="deisik" date="1327011957"]
    It is better not to create new objects and delete stale ones, but instead to funnel the data through a few long-lived "container" objects, keeping their number as small as possible. This way you keep memory consumption low (thus avoiding fragmentation) and speed things up significantly, because heap allocation is expensive in terms of CPU cycles[/quote]

    Premature optimization is the root of all evil.

    Whether using "containers" is more efficient than something else depends on the implementation of the containers and your objects. Giving such advice as above does more harm than it helps.



  • [quote author="Volker" date="1327017378"]Premature optimization is the root of all evil[/quote]

    I think you are misusing the phrase "premature optimization" here; besides, it is a truism, which is not false but rather meaningless by itself

    [quote author="Volker" date="1327017378"]Whether using "containers" is more efficient than something else depends on the implementation of the containers and your objects. Giving such advice as above does more harm than it helps.[/quote]

    We have a memory issue in this case, so it is not about "optimization" to start with


  • Moderators

    deisik: Do we really have a memory issue or just a perceived memory issue? I think it is the latter, considering that the memory is getting reused according to ThaRez.

    If that is indeed the case (measurements with tools better for the task would be needed here), then changes to the code are premature.

    I am with Volker here: Whether using container objects help or hinder depends a lot on the implementation of those objects and your memory management. I'd go and try different memory allocators first before changing my data structures (and that only after taking some measurements to make sure there is indeed a significant amount of memory fragmentation!).



  • [quote author="Tobias Hunger" date="1327070801"]Whether using container objects help or hinder depends a lot on the implementation of those objects and your memory management. I'd go and try different memory allocators first before changing my data structures (and that only after taking some measurements to make sure there is indeed a significant amount of memory fragmentation!)[/quote]

    This is not quite so. Think of it as a choice between sorting algorithms. If your data set is small, it does not matter which one you use, and it is surely much easier to write a bubble sort than to work out a heapsort (provided you have none at hand). But if the number of items grows really large (which I guess is the case here), you have no option left but to change your sorting routine

