[Solved] Find location of memory corruption
I have a program using Qt 5.1.1 for Android that segfaults (still built and executed on Ubuntu Precise 64-bit). The function in which the segfault ultimately occurs is malloc_consolidate, reached while allocating an object via 'new'. According to various internet sources, the most probable explanation is that at some point the data structures used by libc's memory management have been corrupted. This is supported by the fact that minor changes, such as adding a local variable within a method, change the location of the segfault.
I turned my code upside down, but the only way I can imagine this happening involves some kind of pointer modification/arithmetic, which I don't do anywhere. I mostly work with std and Qt5 classes. Is there a way to identify the code location where this occurs? Or more generally: is there a way to test whether, and from where in my code, the memory containing the memory-management data structures is modified? Can valgrind do that? And how? Is there another possibility?
I use gcc 4.6.3.
Backtrace of the segfault:
#0 malloc_consolidate (av=0x7ffff578a720) at malloc.c:4271
#1 0x00007ffff5452446 in malloc_consolidate (av=0x7ffff578a720) at malloc.c:4226
#2 _int_malloc (av=0x7ffff578a720, bytes=9217) at malloc.c:3543
#3 0x00007ffff54536e6 in malloc_check (sz=9216, caller=<optimized out>) at hooks.c:233
#4 0x00007ffff5d58ded in operator new(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff5d58f09 in operator new(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x000000000041c403 in TNT::i_refvec<double>::i_refvec (this=0x7fffffffd040, n=1152) at ../smartphonebrainscanner2-core/src/jama125/tnt_i_refvec.h:105
#7 0x00000000004153e0 in TNT::Array1D<double>::Array1D (this=0x7fffffffd040, n=1152) at ../smartphonebrainscanner2-core/src/jama125/tnt_array1d.h:105
#8 0x000000000040fe7e in TNT::Array2D<double>::Array2D (this=0x7fffffffd040, m=192, n=6) at ../smartphonebrainscanner2-core/src/jama125/tnt_array2d.h:102
#9 0x000000000040cb6e in DTU::DtuArray2D<double>::DtuArray2D (this=0x7fffffffd040, m=192, n=6) at ../smartphonebrainscanner2-core/src/dtu_array_2d.h:65
#10 0x000000000040e255 in DTU::DtuArray2D<double>::multiplyR (this=0x31cabb0, B=..., out=...) at ../smartphonebrainscanner2-core/src/dtu_array_2d.h:332
#11 0x000000000040abab in Classification::createClassificator (this=0xbb5790, label=0xbb3610, values=0xbb6460) at ../miMaze/classification.cpp:53
#12 0x000000000041ecfc in dataReader::dataReader (this=0xbb5450, classi=0xbb5790) at ../miMaze/datareader.cpp:106
#13 0x0000000000409fe6 in main (argc=1, argv=0x7fffffffd978) at ../miMaze/main.cpp:25
Thanks for your help.
Hi and welcome to devnet,
Just to be sure: are you allocating your big objects on the heap?
Thanks for the welcome!
These DtuArray objects actually contain EEG data, so they hold quite a lot of it (around 20-30 MB). But as far as I know (the DTU code is third-party), the data is held as a plain C double array, so it is not part of the actual DtuArray object itself and is allocated on the heap.
In case that was the point of your question: I ran the program in a debugger and checked the memory consumption of the process (with htop) at the segfault; it was at about 120 MB. So far from my system's working memory capacity, and far from the per-process memory limit (which should play no role on a 64-bit system anyway).
But is your double array allocated with malloc/new? Or created with
@double array[xxx][yyy]@?
In the second case, it's on the stack, so it probably exhausts it.
It's allocated via new.
Btw, what is the limit for the stack? I know that a process cannot allocate more than 2^32 or 2^64 bytes respectively. Shouldn't the heap and stack grow from opposite ends of the address space towards each other, and only be exhausted when they meet?
It depends on the OS you're on. No, they don't work like that: the stack size is limited and won't grow.
Can you show the code where you use this array ?
This DtuArray2D is a third-party class specifically designed for use in an EEG context. Hence I'd assume they built it in a way that won't overburden the stack in normal use.
Anyway, I just solved my problem on my own. I had a bad access to one of the DtuArray2D objects: I accessed an array with one row using the fixed index 1 (instead of 0). This probably happened while porting Matlab code to C++. I will never understand why the Matlab folks chose to start array indexing at 1. Because DtuArray2D is intended for platforms with low computational power, it does not check access parameters. As a result I was indeed writing to memory outside the array's bounds. I read that piece of code about twenty times in the last few days and never noticed.
Thank you for your active help on this matter. I guess I will start looking into valgrind anyway when I have some spare time. It's a powerful tool for sure.
IIRC it's a matter of context (1-based indexing matches mathematical matrix notation) versus an implementation optimization for programming languages.
Anyway, glad you found out :)
Please update the thread title prepending [solved] so other forum users may know a solution has been found :)