C++ fastest sequential containers
-
Hi,
What would be a faster sequential container in C++ than a vector?
The container would need to expand during runtime; no items are deleted from the middle.
One way would be to have arrays like double arg[500],
but then it would need a fixed max size?
It is for a genetic algorithm; overall about 10-20 containers, each containing at most 50 items.
-
@Q139
QLinkedList
for example. Here all elements are linked together, so when appending, only the last item has to be updated to point to its next item. But each container has its advantages and disadvantages, and so it is here: accessing an item in the middle of the list is slower.
As always, the choice of container type depends on the use case to get the optimum.
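A rough sketch of that trade-off, using std::list instead of QLinkedList only so the example needs no Qt headers: appending touches just the tail node, while getting at an item in the middle means walking the links.

    #include <iostream>
    #include <iterator>
    #include <list>

    int main()
    {
        std::list<double> values;

        // Appending only links a new tail node; no existing elements move.
        for (int i = 0; i < 50; ++i)
            values.push_back(i * 0.5);

        // There is no operator[]: reaching "element 25" means following
        // 25 links from the head, which is the slow part for random access.
        auto it = std::next(values.begin(), 25);
        std::cout << *it << '\n';
        return 0;
    }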
-
@raven-worx The algorithm would access items throughout the full range many times, but additional items are added rarely.
-
@Q139
you asked for a fast container, right?
In this case (many access requests, few inserts) a B-tree or some other trie implementation would be the best for you. Qt doesn't provide implementations for these directly, but there are some C++ implementations available.
But if you want to stick to Qt, give the Qt containers a try and check if the performance is acceptable for you.
-
Are Qt containers faster in sequential or random access than std::vector?
Would:
double arg[50]; for(uint at=0; at<50; at++){ double get = arg[at]; }
work significantly faster than:
std::vector<double> vec(50); for(uint at=0; at<50; at++){ double get = vec[at]; }
Or are the differences in performance minimal?
-
Inside std::vector<double> there is the same double[] array. The overhead occurs when you use at(), because it checks for out-of-bounds access, and a vector is more costly to construct, destruct, and copy, because it carries additional data fields. There is std::experimental::dynarray, which was proposed for a future version of the C++ standard, and you can already try it in some implementations.
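To make the at() overhead concrete, a minimal sketch comparing the unchecked operator[] with the bounds-checked at():

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main()
    {
        std::vector<double> vec(50, 1.0);

        // operator[] is unchecked: an out-of-range index is undefined behaviour.
        double a = vec[10];

        // at() verifies the index on every call and throws if it is out of range;
        // that check is the extra cost compared to operator[].
        double b = vec.at(10);

        try {
            vec.at(50);                       // one past the end
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }

        std::cout << a + b << '\n';
        return 0;
    }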
-
Most important for "fast" is that it fits into the processor's cache and is accessed linearly.
-
Thanks for the internal info about vector and the long materials about containers.
Not sure yet how the compiler optimizes the containers for cache, or what coding techniques to use to control what goes into the cache.
-
If you are not running some weird microcontroller, you don't have control over what goes into the cache and what does not. All you can do is make efficient caching possible; e.g. a sequential data array small enough to fit in the cache has a good chance of staying there.
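To give a rough picture of what "accessed linearly" means in practice (the sizes and values are placeholders), compare an in-order walk over a contiguous array with the same walk in shuffled order; on data larger than the cache, only the first pattern lets the prefetcher help:

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    int main()
    {
        std::vector<double> data(50, 1.0);

        // Cache-friendly: consecutive indices map to consecutive addresses,
        // so the hardware prefetcher can stream the data in.
        double linear = 0.0;
        for (std::size_t i = 0; i < data.size(); ++i)
            linear += data[i];

        // Same work, but the indices are shuffled; on a data set larger than
        // the cache this pattern defeats the prefetcher and causes misses.
        std::vector<std::size_t> order(data.size());
        std::iota(order.begin(), order.end(), std::size_t{0});
        std::shuffle(order.begin(), order.end(), std::mt19937{42});

        double scattered = 0.0;
        for (std::size_t i : order)
            scattered += data[i];

        std::cout << linear << ' ' << scattered << '\n';
        return 0;
    }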
-
YouTube has a nice talk on this: Scott Meyers, "CPU Caches and Why You Care".
-
cachegrind
-
Some have suggested lists or trees - I would highly discourage you from using these. Traversal in these is sloooow because of the constant pointer chasing and the cache misses it causes. Also, each insertion is a memory allocation in many implementations. Terrible for performance.
I'd say any linear structure is fine, be it std::vector, std::array, QVector or anything like that. I'd avoid naked C arrays just because it's easier to make out-of-bound access mistakes with them.
With vectors it's important that you don't allocate each time you insert. Since you know your cap (50 items), reserve the memory up front (reserve()). With std::array just make it big enough to hold those 50 items.
If you use std::vector, consider a custom allocator so you don't request memory directly from the OS each time you create a vector. You can find some nice allocators that use pooling or a stack buffer.
It's also very important to know your access patterns. Design your algorithms so that they access the data in a linear fashion, i.e. items close to each other (ideally consecutive). If you reserve the space, insertion is almost as cheap as in a plain array. If you use std::vector, prefer emplace_back over push_back, as it creates the item in place instead of copying it into the vector.
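A minimal sketch of the reserve-then-emplace advice for the 50-item case (Gene is a made-up element type, and the custom allocator is left out for brevity):

    #include <vector>

    // Hypothetical element type, just to give emplace_back something to construct.
    struct Gene
    {
        double weight;
        int index;
        Gene(double w, int i) : weight(w), index(i) {}
    };

    int main()
    {
        std::vector<Gene> genes;

        // One allocation up front; no reallocations while the population grows.
        genes.reserve(50);

        for (int i = 0; i < 50; ++i) {
            // Constructs the Gene directly inside the vector's storage;
            // push_back(Gene(i * 0.1, i)) would build a temporary first.
            genes.emplace_back(i * 0.1, i);
        }
        return 0;
    }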
Also be aware of your const usage, e.g.
double get = vec[at];
If you're not modifying get later, make it const. Since your vector is not const itself you're using the non-const operator[]. Use std::as_const or qAsConst in these cases. This is especially important with Qt containers like QVector, because they use implicit sharing and you might cause an accidental detach (copy) by using a non-const member, and that would be very costly.
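A small sketch of that const-access idea, assuming C++17 for std::as_const (with Qt containers, qAsConst plays the same role):

    #include <iostream>
    #include <utility>   // std::as_const (C++17)
    #include <vector>

    int main()
    {
        std::vector<double> vec(50, 2.0);

        double sum = 0.0;
        for (std::size_t at = 0; at < vec.size(); ++at) {
            // Reading through a const view guarantees the const operator[] is used.
            // With an implicitly shared Qt container (e.g. QVector) the same trick,
            // via qAsConst, is what prevents an accidental detach.
            const double get = std::as_const(vec)[at];
            sum += get;
        }

        std::cout << sum << '\n';
        return 0;
    }
-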
Started testing with Linux profilers and found that VMware and a native Linux boot perform almost the same.
More interestingly, I found that running the software under openSUSE or Ubuntu in VMware under Win 7 is significantly faster than just running it in Win 7 x64.
What could be the cause? Is it a Linux kernel benefit, and does VMware bypass the Windows kernel under VT-x virtualization mode?
Are there other Linux compilers to try besides GCC?
Some tests were repeated multiple times and the results vary very little between runs.
The software for the benchmark runs for 1 minute per test, and the result is the number of calculations made.
-
@Q139
The clang compiler is available for Linux. I believe there are others. It also has some neat tools for formatting and doing static analysis of your code. It generally performs quite well and produces good code.
As for which container, benchmarking different ones will be the true test. @Chris-Kawa gave good advice though.
Mike
-
@Chris-Kawa If emplace_back is faster, is there any scenario where push_back would be needed instead?
If I have the right info, emplace_back constructs the new item directly in the array, while push_back makes a new object, then copies it into the array, and probably destructs that object later.
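For what it's worth, a minimal sketch of the difference (not from the thread): push_back is still the natural choice when you already have a finished object in hand, while emplace_back forwards constructor arguments and builds the element in place. For trivial types like double the two behave the same.

    #include <string>
    #include <vector>

    int main()
    {
        std::vector<std::string> words;
        words.reserve(2);

        // push_back takes a finished object: the temporary std::string is
        // constructed first and then moved (or copied) into the vector.
        words.push_back(std::string("hello"));

        // emplace_back forwards the constructor arguments and builds the
        // string directly inside the vector's storage, with no temporary.
        words.emplace_back(5, '!');   // constructs "!!!!!"

        // For a plain double there is nothing gained: both calls do the same.
        std::vector<double> vec;
        vec.push_back(1.0);
        vec.emplace_back(2.0);
        return 0;
    }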