
std::vector, dynamic size array allocation, compiler flags to improve speed?

  • Hi, I am writing software for multivariable fuzzy k-means clustering.

    It works on big matrices: 10 variables by 50,000 observations of double data.
    I wrote the same code twice, once with std::vector and once with dynamic array allocation.
    With std::vector I get 20 minutes of processing time.
    With dynamic array allocation (double *a = new double[size]) the processing time
    is very fast, a few seconds, but sometimes I get std::bad_alloc.

    The matrix does not need to grow, shrink, or be bounds-checked.
    I only resize it to the needed size, load the items, and then make a lot of accesses.

    On the web I see many low-quality answers comparing this or that container,
    but I still don't have a clear picture.

    1. Is there any flag for the MinGW compiler to improve std::vector speed?

    2. I am running from the Run command of Qt Creator. I read that some compilers perform sanity checks
      in debug builds, and that a release build would be much faster.
      So I put the cluster.exe file in the Qt DLL folder "C:\Qt\Qt5.4.1\5.4\mingw491_32\bin"
      and ran it, but the std::vector performance did not change.
      Question: if that is true, how do I make a release build in Qt Creator?

    3. What is the origin of std::bad_alloc? Is there any way to make a memory reservation
      to avoid std::bad_alloc?

    4. Memory usage comparison:
      When the program runs on the same dataset, I open the "Resource Monitor" and see:
      64 MB with std::vector
      400 MB (!) with dynamic allocation?
      Also, if I run it again without closing, the memory increases to 600 MB,
      then 1.0 GB, then 1.4 GB, and then it crashes.
      I make the proper deletes, but delete does not free the memory.

    Any help will be welcome.

  • Lifetime Qt Champion

    Did you try pre-allocating with std::vector::reserve()?
    Or testing with std::array?

    1. Not that I know of.

    2. In the Projects tab, you must switch to Release and rebuild all.

    3. If you call new with a size it cannot satisfy, it will throw std::bad_alloc.
      You could catch this exception and have a look at the size at that moment.

    4. Maybe your size in "double *a = new double[size]" is bigger than it needs to be,
      while std::vector reflects the real count/size?

    "also if run again without closing, the memory increase 600"
    Without code, it's hard to really think of something, if you do delete "a"
    and/or call vector.clear().

    You could also use a dedicated memory-profiling tool
    to double-check the values from "Resource Monitor".

  • Thanks for the reply:

    Before loading the vector I do:
    std::vector<double> U(NumClusters * NumObs, 0.0);
    Then I load with the index: U[index] = 10.0;

    1. When calling new, the needed size can be 100,000 items.
      I know how to catch the exception, but I do not know how to reserve memory
      to avoid the std::bad_alloc.

    For the same dataset the memory usage with std::vector is always 64 MB;
    with std::vector there are no memory problems.
    But in the other version, using dynamic allocation (double *a = new double[size];)
    and making the proper delete statement (delete [] a;),
    it grows very high: 600 MB, then on the next run 1.0 GB, then 1.4 GB, and then it crashes.
    Why does delete not free the memory?

    So the alternatives are to speed up std::vector or to avoid std::bad_alloc.


  • Lifetime Qt Champion

    I tried this test program in Release mode and the vector was faster than the "new" array?!
    (Qt 5.5)

    #include <QCoreApplication>
    #include <QDebug>
    #include <QElapsedTimer>
    #include <vector>

    int main ( int argc, char *argv[] )
    {
        QElapsedTimer timer;
        const long int ArrayTestSize = 100000L;
        double *array = new double[ArrayTestSize];
        std::vector<double> array2 ( ArrayTestSize );
        timer.start();                       // start timing the raw array loop
        for ( int c = 0; c < ArrayTestSize; c++ )
        { array[c] = 10.0; }
        qDebug() << "new nsecs\t" << timer.nsecsElapsed();
        timer.restart();                     // restart for the vector loop
        for ( int c = 0; c < ArrayTestSize; c++ )
        { array2[c] = 11.0; }
        qDebug() << "vector nsecs\t" << timer.nsecsElapsed();
        delete [] array;
        return 0;
    }

    The reason for std::bad_alloc can also be that the runtime cannot get a contiguous block of the size you
    request. This can happen quite fast on 32-bit systems due to fragmentation and the 2 GB per-process limit.

    Often std::vector is slower when one inserts objects, due to constructors etc., but you are only inserting doubles?

    If you are not using new, will it still fail?

    const long int ArrayTestSize = 100000L;
    double array[ArrayTestSize];

    - Why does delete not free the memory?
    It should free it on delete.
    Note: "Resource Monitor" does not always tell the complete story, as it might
    still show memory use for cached files etc.

    Where do you get your data from? Loading from files, or?
