
C++ fastest sequential containers

  • raven-worx

    @Q139
QLinkedList, for example: there all elements are linked together, so in the case of appending only the last item has to be updated to point to its next item.

But each container has its advantages and disadvantages, and so it is here: accessing an item in the middle of the list is slower.

As always, the choice of container type depends on the use case to get the optimum.

    Q139
    wrote on last edited by
    #3

@raven-worx The algorithm would access items throughout the full range many times, but additional items are added rarely.


      Q139
      wrote on last edited by
      #4

@Q139 Would the overhead of a vector container be bigger than that of double arg[50]?


        raven-worx
        Moderators
        wrote on last edited by
        #5

        @Q139
you asked for a fast container, right?
In this case (many access requests, few inserts) a B-tree or some other trie implementation would be best for you.

Qt doesn't provide implementations for these directly, but there are some C++ implementations available.

But if you want to stick to Qt, give the Qt containers a try and check whether the performance is acceptable for you.

        --- SUPPORT REQUESTS VIA CHAT WILL BE IGNORED ---
        If you have a question please use the forum so others can benefit from the solution in the future


          Q139
          wrote on last edited by Q139
          #6

Are Qt containers faster in sequential or random access than std::vector?

Would:

    double arg[50];
    for (uint at = 0; at < 50; at++) {
        double get = arg[at];
    }

work significantly faster than:

    std::vector<double> vec(50);
    for (uint at = 0; at < 50; at++) {
        double get = vec[at];
    }

or are the differences in performance minimal?

            Konstantin Tokarev
            wrote on last edited by
            #7

Inside std::vector<double> there is the same double[] array. Overhead occurs when you use at(), because it checks for out-of-bounds access, and a vector is more costly to construct, destruct, and copy because it has additional data fields. There is std::experimental::dynarray, which will become a part of the next C++ standard version, but you can already start using it in some implementations.
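For illustration, a minimal sketch of the operator[] vs at() difference (hypothetical function names, not from the original post):

    #include <vector>

    // operator[] does no bounds checking
    double sum_unchecked(const std::vector<double>& v) {
        double s = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i)
            s += v[i];
        return s;
    }

    // at() checks the index and throws std::out_of_range if it is invalid
    double sum_checked(const std::vector<double>& v) {
        double s = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i)
            s += v.at(i);
        return s;
    }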

              SGaist
              Lifetime Qt Champion
              wrote on last edited by
              #8

              Hi,

Here is a very good article about Qt vs STL containers.

              Interested in AI ? www.idiap.ch
              Please read the Qt Code of Conduct - https://forum.qt.io/topic/113070/qt-code-of-conduct

                A Former User
                wrote on last edited by
                #9

                Most important for "fast" is that it fits into the processor's cache and is accessed linearly.


                  Q139
                  wrote on last edited by Q139
                  #10

Thanks for the internal info about vector and the long material about containers.
Not sure yet how the compiler optimizes the containers for the cache and what coding techniques to use to control what goes into the cache.

                    Konstantin Tokarev
                    wrote on last edited by
                    #11

If you are not running some weird microcontroller, you don't have control over what goes into the cache and what does not. All you can do is make efficient caching possible, e.g. a sequential data array smaller than the cache line size has a good chance of staying cached.
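As a rough illustration (a hypothetical micro-benchmark, not part of this post), traversing the same contiguous data sequentially is usually much faster than traversing it with a large stride, because sequential access uses every byte of each fetched cache line and plays well with the hardware prefetcher:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 24;               // ~16M doubles, far larger than any cache
        std::vector<double> data(n, 1.0);

        auto run = [&](std::size_t stride) {
            double sum = 0.0;
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t start = 0; start < stride; ++start)
                for (std::size_t i = start; i < n; i += stride)
                    sum += data[i];                   // same total work for every stride
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - t0).count();
            std::printf("stride %zu: %lld ms (sum %.0f)\n",
                        stride, static_cast<long long>(ms), sum);
        };

        run(1);   // sequential, cache-friendly
        run(16);  // strided, touches a new cache line on almost every access
    }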

                      A Former User
                      wrote on last edited by
                      #12

YouTube has a nice talk on this: Scott Meyers, "CPU Caches and Why You Care".

                        Q139
                        wrote on last edited by Q139
                        #13

Good info on caches. It discourages the use of pointers and has wise tips for parallel threads.
Is there also a profiler that can simulate cache/RAM access times?

                          Konstantin Tokarev
                          wrote on last edited by
                          #14

                          cachegrind

                            A Former User
                            wrote on last edited by
                            #15

                            RightMark Memory Analyzer

                              Chris Kawa
                              Lifetime Qt Champion
                              wrote on last edited by
                              #16

Some have suggested lists or trees - I would highly discourage you from using these. Traversal of these is sloooow because of constant pointer chasing and the cache misses it causes. Also, each insertion is a memory allocation in many implementations. Terrible for performance.

I'd say any linear structure is fine, be it std::vector, std::array, QVector or anything like that. I'd avoid naked C arrays just because it's easier to make out-of-bounds access mistakes with them.
With vectors it's important that you don't allocate each time you insert. Since you know your cap (50 items), reserve the memory up front (reserve() or resize()). With std::array just make it big enough to hold those 50 items.
If you use std::vector, consider a custom allocator so you don't request memory directly from the OS each time you create a vector. You can find some cool allocators that use pooling or the stack.
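For example, a minimal sketch of reserving up front (assuming the cap of 50 doubles mentioned in this thread):

    #include <vector>

    int main() {
        std::vector<double> values;
        values.reserve(50);                // one allocation up front

        for (int i = 0; i < 50; ++i)
            values.push_back(i * 0.5);     // appends never reallocate now
    }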

It's also very important to know your access patterns. Design your algorithms so that they access the data in a linear fashion, i.e. items close to each other (ideally consecutive). If you reserve the space, insertion is almost as cheap as in the array. If you use std::vector, prefer emplace_back over push_back, as it creates the item in place instead of copying it into the vector.

Also be aware of your const usage, e.g. double get = vec[at];. If you're not modifying get later, make it const. Since your vector is not const itself, you're using the non-const operator[]. Use std::as_const or qAsConst in these cases. This is especially important with Qt containers like QVector, because they use implicit sharing and you might cause an accidental detach (copy) by using a non-const member, and that would be very costly.
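A minimal sketch of that point (a hypothetical function, assuming Qt 5.7+ for qAsConst):

    #include <QVector>
    #include <QtGlobal>

    double sum(QVector<double>& vec) {
        double total = 0.0;
        // qAsConst gives a const view, so the loop uses the const iterators
        // and never triggers an implicit-sharing detach (deep copy)
        for (const double value : qAsConst(vec))
            total += value;
        return total;
    }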

                                Q139
                                wrote on last edited by Q139
                                #17

Started testing with Linux profilers and found that VMware and a native Linux boot perform almost the same.
More interestingly, I found that running the software under openSUSE or Ubuntu in VMware on Win 7 is significantly faster than just running it on Win7 x64.
What could be the cause? Is it a Linux kernel benefit, and does VMware bypass the Windows kernel under VT-x virtualization mode?
Are there other Linux compilers to try besides gcc?

[screenshot of benchmark results]
Some tests were repeated multiple times and the results vary very little between runs.
The software behind the benchmark results runs for 1 minute per test and the result is the number of calculations made.


                                  Q139
                                  wrote on last edited by Q139
                                  #18

With valgrind, when using callgrind, is it possible to see overall times for functions as well, instead of just call counts?
All logging features are enabled in the settings.


                                    mjsurette
                                    wrote on last edited by
                                    #19

                                    @Q139
                                    The clang compiler is available for Linux. I believe there are others. It also has some neat tools for formatting and doing static analysis of your code. It generally performs quite well and produces good code.

                                    As for what container, benchmarking different ones will be the true test. @Chris-Kawa gave good advice though.

                                    Mike

                                      Q139
                                      wrote on last edited by Q139
                                      #20

@Chris-Kawa If emplace_back is faster, is there any scenario where push_back would be needed instead?
If I have the right info, emplace_back constructs the new item directly in the array, while push_back creates a new object, then copies it into the array and probably destructs the object afterwards.

                                        Chris Kawa
                                        Lifetime Qt Champion
                                        wrote on last edited by Chris Kawa
                                        #21

                                        @Q139 emplace_back is not always faster. It can be faster in certain scenarios ;)

If you have a vector of basic types (ints, floats, pointers etc.) you won't see a difference, because no matter which one you use there's just going to be a value written to a memory location (probably straight from a CPU register, so you can't go faster than that).

Now for the cases where it does matter. Let's say you have a big structure that is expensive to construct or copy:

                                        struct BigStruct {
                                            BigStruct(int) { qDebug() << "Construction";  }
                                            BigStruct(const BigStruct&) { qDebug() << "Copy construction";  }
                                            BigStruct& operator=(const BigStruct&) { qDebug() << "Copy assignment"; return *this; }
                                        };
                                        

                                        Consider you push_back an element:

                                        std::vector<BigStruct> foo;
                                        foo.reserve(100);
                                        
                                        foo.push_back(BigStruct(42));
                                        
                                        // prints:
                                        // Construction
                                        // Copy construction
                                        

                                        so a temporary instance is created, copied to the vector and then destroyed.
                                        You might be smart and provide move semantics for your type to make it less expensive:

                                        struct BigStruct {
                                           ...
                                           BigStruct(BigStruct&&) { qDebug() << "Move construction";  }
                                           BigStruct& operator=(BigStruct&&) { qDebug() << "Move assignment"; return *this; }
                                        };
                                        

                                        and now you have:

                                        std::vector<BigStruct> foo;
                                        foo.reserve(100);
                                        
                                        foo.push_back(BigStruct(42));
                                        
                                        // prints:
                                        // Construction
                                        // Move construction
                                        

                                        This is better, but not all types are movable and you're still doing two things while you could be doing just one:

                                        std::vector<BigStruct> foo;
                                        foo.reserve(100);
                                        
                                        foo.emplace_back(42);
                                        
                                        // prints:
                                        // Construction
                                        

This doesn't copy or move anything into the vector. It creates the object already inside (usually via placement new). If the constructor takes parameters (like 42 in this example) they are passed using perfect forwarding, so everything is nice and optimizable.

As for a scenario where push_back is needed - that's whenever you have an item already created and you want to put it into a vector. For example, a common case:

                                        void SomeClass::addItem(const Item& item) {
                                           some_items_container.push_back(item);
                                        }
                                        

                                        which makes a copy of the item via copy construction.

