How to handle dynamic memory allocation failure
-
[quote author="koahnig" date="1307539373"]
BTW: Is it just in my browser that the last paragraph after the example is NOT shown in the original post?
I just came across it when quoting the post.[/quote]Forget that part. The text suddenly appeared once I had posted my reply.
@Lukas Was your original post updated?
-
[quote author="koahnig" date="1307539373"]
But what sense would the check of the pointer make then?
[/quote]No sense, because the pointer returned by new will never be 0.
Even if you override operator new, the standard requires it to throw std::bad_alloc instead of returning 0.[quote author="koahnig" date="1307539373"]
BTW: Is it just in my browser that the last paragraph after the example is NOT shown in the original post?
[/quote]
[quote author="koahnig" date="1307539373"]
Forget that part. Suddenly the text was there when I had posted my reply.
@Lukas Was your original post updated?
[/quote]This might be the reason ;-)
-
[quote author="Lukas Geyer" date="1307537741"]
Please also keep in mind that even if new returns a valid pointer (not null), this does not mean that there is actually memory available for the application. If I remember correctly, new on Linux systems always returns a valid pointer, as memory is acquired on access, not on allocation! So the following snippet will crash despite proper error handling:
@
int* integer = new int;
if(integer != 0)
{
*integer = 42; // crash
}
@
[/quote]This would mean that Linux does not implement the C++ standard, I can't believe that.
-
[quote author="Gerolf" date="1307556594"]
This would mean that Linux does not implement the C++ standard, I can't believe that. [/quote]It is called overcommit. The address space is expanded immediately, but physical memory pages are assigned only at the moment the memory is accessed. If there are none left, your process or some other process is silently killed by the kernel, depending on the selected strategy.
I have to admit that I don't know if anything has changed in recent kernel versions, but usually overcommit is enabled. The behaviour can be controlled using the vm.overcommit_memory sysctl or /proc/sys/vm/overcommit_memory, which lets you choose between heuristic, always-on, and strict overcommit accounting.
I've read an article about it just a few days ago. I'll see if I can find it.
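For illustration, here is a minimal sketch of what overcommit means in practice (the size is made up; whether the allocation is accepted up front depends on the overcommit mode and on your RAM + swap):
@
#include <cstddef>
#include <cstring>
#include <new>

int main()
{
    // Pick a size larger than the physically available memory.
    const std::size_t size = 8ULL * 1024 * 1024 * 1024; // 8 GiB, an assumption

    // With overcommit enabled this may succeed although the memory
    // does not really exist yet...
    char *buffer = new (std::nothrow) char[size];
    if (!buffer)
        return 1; // the allocation was refused up front

    // ...because pages are only assigned on first access. Touching
    // them all is where the OOM killer may strike.
    std::memset(buffer, 0, size);

    delete[] buffer;
    return 0;
}
@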
-
Quoting [i386] linux-2.6.38/Documentation/vm/overcommit-accounting
@
The Linux kernel supports the following overcommit handling modes

0 - Heuristic overcommit handling. Obvious overcommits of
    address space are refused. Used for a typical system. It
    ensures a seriously wild allocation fails while allowing
    overcommit to reduce swap usage. root is allowed to
    allocate slightly more memory in this mode. This is the
    default.

1 - Always overcommit. Appropriate for some scientific
    applications.

2 - Don't overcommit. The total address space commit
    for the system is not permitted to exceed swap + a
    configurable percentage (default is 50) of physical RAM.
    Depending on the percentage you use, in most situations
    this means a process will not be killed while accessing
    pages but will receive errors on memory allocation as
    appropriate.
@
On LMDE 2.6.32-5-amd64 for example, which is based on Debian testing, heuristic overcommit handling is enabled by default.
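If you want to check from code which mode is active, the setting can simply be read from procfs (a minimal Qt sketch; the function name is made up):
@
#include <QFile>
#include <QDebug>

void printOvercommitMode()
{
    // Same value as the vm.overcommit_memory sysctl: 0, 1 or 2.
    QFile file("/proc/sys/vm/overcommit_memory");
    if (file.open(QIODevice::ReadOnly | QIODevice::Text))
        qDebug() << "vm.overcommit_memory =" << file.readAll().trimmed();
    else
        qDebug() << "No /proc available - probably not a Linux system.";
}
@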
-
That is really interesting. However, it cannot comply with new (nothrow) then. As Lukas has pointed out, the pointer check will not work. So there should also be a mechanism to secure that situation. Is anybody aware of a compiler that complains when new (nothrow) is used?
-
bq. koahnig: Is anybody aware of a compiler that complains when new (nothrow) is used?
If I understand you correctly, then my observation is: when I used std::nothrow with new and the memory allocation failed, it returned NULL, which is a safe way to handle memory allocation failure. Also, the compiler (I tested on Ubuntu/Qt 4.7) has no issue with it.
@char *buffer = new (std::nothrow) char[LARGE_MEMORY_CHUNK];
if ( NULL == buffer )
{
//terminate gracefully
}@
-
[quote author="Meraj Ahmad Ansari" date="1307703395"]If i understand you clearly then my observation is - when i used std::nothrow with new and if memory allocation failed then it is returning NULL, safe way to handle memory allocation failure. Also compiler(i tested on Ubuntu/Qt 4.7) has no issue.
@char *buffer = new (std::nothrow) char[LARGE_MEMORY_CHUNK];
if ( NULL == buffer )
{
//terminate gracefully
}@
[/quote]This is possibly due to
@
0 - Heuristic overcommit handling. ... It
ensures a seriously wild allocation fails while allowing
overcommit to reduce swap usage. ... This is the default.
@
If you absolutely have to ensure that your application recovers from out of memory situations (working both on Linux and Windows) you will have to use either
- std::nothrow and the null check consistently or
- set a new handler, which should be called even with exceptions disabled (/EHsc); see the sketch at the end of this post
In addition, you should include Linux specific code which
- disables overcommit (vm.overcommit_memory = 2) which might lead to thrashing and
- instructs the kernel to not kill your process in case of out of memory (/proc/self/oom_score_adj = -1000, or /proc/self/oom_adj = -17 on older kernels), which is an absolute must, because even if your process does not run out of memory itself it might be selected by the kernel OOM killer and will be assassinated without any notice (see the sketch below).
And you should reserve enough "emergency memory" at startup if you will need some later, because once there is no memory left the kernel can't give you any.
Of course none of these procedures will save you from kernel panics or power outages - so you should possibly add some disaster recovery code to your list ;-)
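Adjusting the OOM score from within the application could look like the following sketch (the function name is made up; -1000 is the documented minimum for oom_score_adj, and lowering the score usually requires elevated privileges):
@
#include <QFile>

// Ask the kernel to never select this process for OOM killing.
bool protectFromOomKiller()
{
    QFile file("/proc/self/oom_score_adj");
    if (!file.open(QIODevice::WriteOnly | QIODevice::Text))
        return false; // not a Linux system or /proc not mounted
    return file.write("-1000\n") != -1;
}
@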
In my opinion a standard application should
- not use std::nothrow and null checks everywhere, because this bloats your code at every single allocation
- set a new handler or use an (outermost) exception handler which handles out of memory situations (by saving the current state and exiting)
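A minimal sketch of the new handler variant, including the "emergency memory" mentioned above (names and sizes are made up for illustration):
@
#include <new>
#include <cstdlib>

static char *emergencyHeap = 0;

static void outOfMemoryHandler()
{
    // Free the reserve so the recovery code below has memory to work with.
    delete[] emergencyHeap;
    emergencyHeap = 0;

    // Save the state which has to survive to persistent storage here,
    // then exit; operator new would otherwise call this handler again.
    std::exit(EXIT_FAILURE);
}

int main()
{
    emergencyHeap = new char[1024 * 1024]; // 1 MiB reserve, size is arbitrary
    std::set_new_handler(outOfMemoryHandler);
    // ... normal application code ...
    return 0;
}
@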
-
bq. Lukas Geyer wrote.. In my opinion a standard application should
- not use std::nothrow and null checks everywhere, because this bloats your code at every single allocation
- set a new handler or use an (outermost) exception handler which handles out of memory situations (by saving the current state and exiting)
Just a query Lukas-
You have a function in which you allocate some memory at runtime. At some point an allocation fails and your new handler gets called. How do you then deallocate the memory allocated earlier in the function? Check the following code snippet -
@
void method()
{
SomeClass1* obj1 = new SomeClass1;
// allocation is successful and call some
// function with obj1, everything fine up to this point

SomeClass2* obj2 = new SomeClass2;
obj2->SomeFunctionCall(); //Suppose allocation failed and your new handler function get called
// My question is how you are going to deallocate "obj1" memory?
}
@
-
@Meraj: One option would be to use smart pointers, or to declare the relevant pointers outside the try/catch block (see the sketch below).
Btw.. is the original question solved? Simplistically, if the system's out of memory, we should just gracefully quit, or give an error. Usually, one can just look at the code of new (it's mostly just about 10-15 lines of handling). The allocation is best done in a try/catch block if you fear an exception is going to occur in allocation. I'm not sure about the pointer returned, but that could probably be inferred from the implementation of new.
"Smart pointers in Qt":http://labs.qt.nokia.com/2009/08/25/count-with-me-how-many-smart-pointer-classes-does-qt-have/
-
bq. jim_kaiser wrote .. The allocation is best done in a try/catch block if you fear an exception is going to occur in allocation.
Have you tried it in Qt (especially on Linux)? If you are using a try/catch block and an out of memory exception occurs, does your catch block get hit? Certainly not with normal exception handling.
bq. jim_kaiser wrote .. Btw.. is the original question solved?
Frankly speaking, I never considered it a question. I really want to know what techniques other developers are using.
-
bq. Have you tried it in Qt (especially on Linux)? If you are using a try/catch block and an out of memory exception occurs, does your catch block get hit? Certainly not with normal exception handling.
Honestly, I have never had to use it or rather test it.. specifically because running out of memory is not a concern in my project. I will test it on Windows and Linux in Qt and let you know. Though I do find it strange that the out of memory exception is not caught, but I'm just talking about pure C++. I could be mistaken, but why should it be any different with Qt objects, when the 'operator new' used is the same?
On Windows MSVC2005
@
void *__CRTDECL operator new(size_t size) _THROW1(_STD bad_alloc)
{   // try to allocate size bytes
    void *p;
    while ((p = malloc(size)) == 0)
        if (_callnewh(size) == 0)
        {   // report no memory
            static const std::bad_alloc nomem;
            _RAISE(nomem);
        }
    return (p);
}
@
The code here quite clearly either returns a valid pointer or throws a std::bad_alloc exception. But the concerns about lazy allocation and returning a valid pointer without actually allocating could be valid depending on the platform.
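So the pure C++ handling being discussed would look like this (a sketch, using LARGE_MEMORY_CHUNK from the earlier snippet; whether the catch block is ever reached on Linux depends on the overcommit behaviour discussed above):
@
#include <new>

try {
    char *buffer = new char[LARGE_MEMORY_CHUNK];
    // ... use buffer ...
    delete[] buffer;
} catch (const std::bad_alloc &) {
    // Reached only if operator new really throws. With overcommit the
    // failure may instead happen later, on first access to the pages.
}
@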
bq. Frankly speaking, I never considered it a question. I really want to know what techniques other developers are using.
Okay, thanks for clearing that up. I found the ideas quite insightful too; reserving emergency memory at the start of execution is a nice approach.
-
[quote author="Meraj Ahmad Ansari" date="1307953504"]
Frankly speaking, I never considered it a question. I really want to know what techniques other developers are using.[/quote]As I think this is directly related to your last question I'll answer right here.
To me, there are usually two types of heap allocations in an application:
- small, but frequent allocations required for the program to flow, like creating widgets or other objects on the heap for the user interface or eg. network processing
- large, but infrequent allocations required for some specific task and not necessarily required for the program to flow, like a buffer for loading eg. a large image selected by the user in an image processing application
The way you react to out of memory situations will differ for both types.
- For the first type, I see no way a program can or should recover from a failed allocation. It should save any data which has to be saved to persistent storage and then exit. For this task you will usually need just a limited amount of information, which then has - of course - to be available to your new handler / exception handler. This allows you to handle out of memory situations at a single point in your code.
Usually the required information is available through your "design" anyway, because you'll need it for your normal program flow too (eg. Application::instance()->openedImage()->data(), to stay with our image processing example). There is no need to access all the other data that is not relevant for your recovery code; especially, there is no need to delete anything, because your application gets aborted anyway.
However, if you need to allocate further memory in your recovery code you should have reserved some already (the "emergency heap" discussed earlier).
- For the second type, there is no need to abort the application. Allocation failures can be caught using std::nothrow and a null pointer check, or local exception handling (see the sketch at the end of this post).
Probably my last post was a bit misleading. Of course there are situations where it is absolutely legitimate to use a std::nothrow / null pointer check, but you should not guard every single allocation with it.
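For the second type this could look like the following sketch (to stay with the image example; imageSize and the messages are made up):
@
#include <new>
#include <QMessageBox>

uchar *imageData = new (std::nothrow) uchar[imageSize];
if (!imageData) {
    // The bulk allocation failed; tell the user and carry on,
    // there is no need to abort the whole application.
    QMessageBox::warning(0, QObject::tr("Out of memory"),
                         QObject::tr("Not enough memory to load the image."));
    return;
}
@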